CN114967121B - An end-to-end single-lens imaging system design method - Google Patents
An end-to-end single-lens imaging system design method
- Publication number
- CN114967121B (application CN202210522840.9A / CN202210522840A)
- Authority
- CN
- China
- Prior art keywords
- imaging system
- loss
- lens imaging
- lens
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G02B27/0012—Optical design, e.g. procedures, algorithms, optimisation routines
- G06N3/043—Architecture, e.g. interconnection topology based on fuzzy logic, fuzzy membership or fuzzy inference, e.g. adaptive neuro-fuzzy inference systems [ANFIS]
- G06N3/045—Combinations of networks
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06T5/70—Denoising; Smoothing
- Y02T10/40—Engine management systems
Abstract
Description
Technical Field
The invention belongs to the technical field of computational optical imaging, and in particular relates to an end-to-end single-lens imaging system design method.
Background Art
Compared with a complex lens group, a single lens has the advantages of small size, light weight and simple structure. However, an ordinary single lens suffers from large aberrations, and especially when imaging over a large field of view the captured images are often severely blurred. High-quality optical imaging therefore usually requires complex lens combinations to correct the aberrations. Such complex lens-group optical systems, however, are bulky, heavy and expensive, which limits their use in applications with miniaturization requirements such as mobile phone lenses, UAV platform camera systems and remote sensing cameras.
A single-lens imaging system acquires images through a single lens and uses a post-processing algorithm to correct the blur introduced by the single-lens aberrations. It is the main approach to lightweight lenses and can be applied in fields that urgently need miniaturized imaging systems, such as smartphone cameras, UAV platform camera systems and remote sensing cameras. Single-lens imaging systems can be divided into separately designed systems and end-to-end designed systems. A separate design first designs the single lens and then designs the post-processing restoration algorithm according to the imaging performance of that lens. An end-to-end single-lens design instead connects the imaging simulation and the restoration algorithm and uses deep learning to jointly train the lens surface-shape parameters and the restoration algorithm parameters. Compared with a separate design, an end-to-end design has the advantage of global optimization.
In existing end-to-end single-lens design methods the restoration network is a convolutional neural network, which cannot learn position information and therefore restores spatially varying aberration blur poorly. Moreover, these methods lack constraints on additional boundary conditions such as the edge thickness, center thickness and energy distribution of the single lens, so the optical lens produced by the algorithm may not meet the requirements of practical manufacturing.
Summary of the Invention
The purpose of the present invention is to provide an end-to-end single-lens imaging system design method to solve the above problems in the prior art.
To achieve the above object, the present invention provides an end-to-end single-lens imaging system design method, comprising:
calculating the squared-difference loss between the restored image and the original image together with an additional constraint loss, and constructing a loss function from the squared-difference loss and the additional constraint loss;
constructing a single-lens imaging system framework, iteratively optimizing the framework based on deep learning and the loss function to obtain an optimized system, and using the optimized system as the single-lens imaging system.
Optionally, the single-lens imaging system framework comprises an optical image blur simulation module, a blur kernel learning module and an inverse-filtering image restoration module.
The process of constructing the single-lens imaging system framework includes:
establishing mapping equations from the surface-shape parameters of the single lens to its point spread functions at each field of view and each waveband, convolving and interpolating the point spread functions with the original image to obtain a single-lens blur simulation image, and thereby constructing the optical image blur simulation module;
taking the point spread function of the 0° field of view as an estimated blur kernel, constructing a neural network, correcting the estimated blur kernel with the neural network to obtain a correction result, and thereby constructing the blur kernel learning module;
constructing the inverse-filtering image restoration module from the adaptive Wiener filtering method and the blur kernel output by the blur kernel learning module;
constructing the single-lens imaging system framework from the optical image blur simulation module, the blur kernel learning module and the inverse-filtering image restoration module.
Optionally, in constructing the optical image blur simulation module, the point spread functions are obtained from geometric optics and ray tracing; the aspheric parameters of the single lens are taken as the surface-shape parameters, and the single-lens blur simulation image is differentiable with respect to these aspheric parameters.
Optionally, the neural network is a three-layer fully connected neural network with skip connections, each fully connected layer containing 27×27 neurons.
Optionally, the neural network corrects the estimated blur kernel as follows: the estimated blur kernel is reshaped into a one-dimensional vector and fed into the neural network; the output of each layer is computed, and the output of the third layer is reshaped into an image matrix to form the corrected blur kernel, which is then zero-padded.
Optionally, the adaptive Wiener filtering method is expressed as:

$$\hat{I}=F^{-1}\!\left(\frac{\overline{F(\widetilde{psf})}}{\left|F(\widetilde{psf})\right|^{2}+K}\,F(I_{1})\right)$$

where $\hat{I}$ denotes the restored image, F(·) the Fourier transform, F⁻¹(·) the inverse Fourier transform, K an optimizable parameter adjusted adaptively during training, I1 the single-lens blur simulation image, and $\widetilde{psf}$ the blur kernel output by the blur kernel learning module.
Optionally, the squared-difference loss mseloss between the restored image and the original image is computed as:

$$mseloss=\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(\hat{I}(i,j)-I_{0}(i,j)\right)^{2}$$

where m and n are the image dimensions, i and j index the pixel position, I0 is the original image, and mseloss is the squared-difference loss between the original image and the restored image.
Optionally, the additional constraint loss is computed as:

$$loss_{0}(k_{n})=\mathrm{sigmoid}\left(k_{y}-k_{n}\right)$$

where loss0 is the additional constraint loss, kn is the current value of the constrained quantity, ky is its threshold, and sigmoid is the activation function; the additional loss term is not activated when kn ≥ ky and is activated when kn < ky.
Optionally, the loss function is constructed from the squared-difference loss and the additional constraint loss by weighted summation, the result of the weighted summation being taken as the loss function.
Optionally, the single-lens imaging system framework is iteratively optimized as follows: the initial aspheric parameters and the neural network parameters are initialized to 0, and the aspheric parameters, the neural network and the adaptive inverse-filtering parameter are optimized jointly.
The present invention has the following technical effects:
(1) The present invention establishes an end-to-end single-lens imaging system framework in which the optical-system surface-shape parameters, the ResDNN3 neural network parameters and the noise constant parameter of the Wiener-filtering image restoration algorithm are optimized simultaneously according to the imaging performance of the system.
(2) The present invention proposes a fully connected neural network with skip connections (ResDNN3), which takes the estimated blur kernel as input and can be used to learn and correct the blur kernel of the optical system.
(3) The present invention adds an additional optical-system constraint loss to the training optimization of the end-to-end single-lens imaging system, which can constrain the edge thickness and energy distribution of the designed single lens.
(4) The present invention proposes an initialization method for the single-lens imaging system framework, giving the system a good initial structure and good initial values for training, which greatly reduces the difficulty of training and optimizing the framework.
Brief Description of the Drawings
The drawings forming a part of this application are provided for a further understanding of the application; the illustrative embodiments of the application and their descriptions serve to explain the application and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flow chart of the end-to-end single-lens imaging system design method in an embodiment of the present invention;
Fig. 2 is a structural diagram of the ResDNN3 network in an embodiment of the present invention;
Fig. 3 is the optical path diagram of the single lens in an embodiment of the present invention;
Fig. 4 compares the point spread functions of the single lens at each field of view before and after optimization in an embodiment of the present invention, where (a) shows the point spread functions of the unoptimized optical system and (b) shows those of the optical system obtained by learned optimization;
Fig. 5 shows the learning optimization effect on the blur kernel in an embodiment of the present invention, where (a) is the unoptimized estimated blur kernel and (b) is the blur kernel after learned optimization.
Detailed Description of the Embodiments
It should be noted that, provided there is no conflict, the embodiments of the present application and the features of the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
It should also be noted that the steps shown in the flowcharts of the accompanying drawings may be executed in a computer system such as a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that given here.
Embodiment 1
This embodiment proposes an end-to-end single-lens imaging system design method that can be applied effectively in fields that urgently need miniaturized imaging systems, such as smartphone cameras, UAV platform camera systems and remote sensing cameras; its flow chart is shown in Fig. 1. A single-lens imaging system with a focal length of 43.5 mm, a clear aperture of 23.4 mm and a full field of view of 47° is taken as an example to describe a specific implementation of the present invention.
Step 1: The end-to-end single-lens imaging system framework contains three modules: an optical image blur simulation module, a blur kernel learning module and an inverse-filtering image restoration module. These three modules are built as follows.
Step 1-1: build the optical image blur simulation module. First, the mapping equations from the surface-shape parameters of the single lens to its point spread functions at each field of view and each waveband are established; these point spread functions are then convolved and interpolated with the original image to obtain the single-lens blur simulation image, realizing the optical image blur simulation.
The aspheric parameters of the single lens are taken as the optimizable surface-shape parameters, and in the resulting image simulation model the blurred image is differentiable with respect to the aspheric parameters. The single lens used in the present invention is a plano-convex even-order aspheric lens: the front surface is flat and the rear surface is a curved surface described by 4th-, 6th-, 8th- and 10th-order even aspheric coefficients.
Based on geometric optics, the present invention first uses ray tracing to establish the mapping from the incident position and direction of a ray to its position on the image plane of the single-lens optical system, which gives the functional relationship between the surface-shape parameters and the point spread function:

$$G\left(x_{0},y_{0},\theta,\{a_{4},a_{6},a_{8},a_{10}\}\right)=\left(x_{1},y_{1}\right)$$

where x0, y0, θ are the incident position and direction, {an} is the set of surface-shape parameters to be optimized, x1, y1 is the image-plane position, and G(·) is the mapping from a ray's incident position and direction to its image-plane position. Establishing this mapping requires the surface-shape function F(x, y, z, an) = 0 of the optical lens, the refractive index of the lens material and the position of each optical surface. The general surface-shape equation of an even-order aspheric surface is:

$$z(r)=\frac{cr^{2}}{1+\sqrt{1-(1+k)c^{2}r^{2}}}+a_{2}r^{2}+a_{4}r^{4}+a_{6}r^{6}+a_{8}r^{8}+a_{10}r^{10}+\cdots$$
where r is the radial position, z is the sag of the surface at that position, c is the curvature at the vertex (the reciprocal of the vertex radius of curvature), k is the conic constant, and a2, a4, a6, … are the aspheric coefficients. In this example the lens thickness d is 6 mm, the vertex radius of curvature is −21.4 mm, the conic constant k is 0, and a4, a6, a8 and a10 are the aspheric coefficients to be optimized.
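For illustration, a minimal numerical sketch of this even-order aspheric sag is given below, using the vertex radius quoted here and, as placeholders for the optimization result, the coefficient values reported later in this embodiment; the function and variable names are chosen for the sketch and do not come from the patent.

```python
import numpy as np

def asphere_sag(r, c, k, coeffs):
    """Sag z(r) of an even-order aspheric surface.

    r      : radial position(s) in mm
    c      : vertex curvature (1 / vertex radius of curvature)
    k      : conic constant
    coeffs : {order: coefficient}, e.g. {4: a4, 6: a6, 8: a8, 10: a10}
    """
    r = np.asarray(r, dtype=float)
    z = c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    for order, a in coeffs.items():
        z = z + a * r**order
    return z

# Example values from this embodiment (a4..a10 are the optimized coefficients reported later).
c = 1.0 / -21.4          # vertex curvature, mm^-1
k = 0.0                  # conic constant
coeffs = {4: 1.766e-5, 6: -9.100e-9, 8: -4.052e-11, 10: 9.894e-12}
print(asphere_sag(np.linspace(0.0, 11.7, 5), c, k, coeffs))  # half of the 23.4 mm aperture
```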
The lens material used in the example is PMMA, whose refractive index at the center wavelength of the visible band is n1 = 1.49. The direction of a ray after refraction is obtained from the vector form of the law of refraction; combining the law of refraction with the rectilinear propagation of light, the spatial position and direction of the ray at every refraction on a lens surface can be solved surface by surface, so that x1, y1 is obtained from x0, y0, θ.
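The surface-by-surface tracing relies on the vector form of the law of refraction; the sketch below, with names invented for this illustration, refracts a ray direction at a surface normal for the PMMA index quoted above.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law.

    d  : unit incident direction
    n  : unit surface normal, oriented against the incident ray
    n1 : refractive index before the surface
    n2 : refractive index after the surface
    Returns the unit refracted direction (total internal reflection not handled).
    """
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

# Air -> PMMA at the flat front surface of the plano-convex lens.
d_in = np.array([0.0, np.sin(np.deg2rad(10.0)), np.cos(np.deg2rad(10.0))])
normal = np.array([0.0, 0.0, -1.0])           # flat surface, normal facing the incoming ray
print(refract(d_in, normal, 1.0, 1.49))
```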
On the image plane, 7×7 point-spread sampling regions are selected uniformly, each of size 27×27 pixels, and the incident angle of the chief ray corresponding to the center of each sampling region is computed. For each incident angle, a 64×64 array of parallel sampling rays is created on the entrance plane and the distribution of their landing points within the point-spread sampling region is computed; the amount of light falling in each pixel block of the sampling region gives the value of the corresponding element of the point spread function.
Let x2, y2 be the spatial position of the center of each pixel within a point-spread sampling region on the image plane; then the distance between a pixel center and a ray landing point is:

$$d=\sqrt{\left(x_{2}-x_{1}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}}$$

A Gaussian function is then used to spread each ray landing point over the image plane, assigning each pixel a weight according to the distance between the landing point and the pixel center:

$$w\left(x_{2},y_{2}\right)=\exp\!\left(-\frac{d^{2}}{2\sigma^{2}}\right)$$

where σ is the standard deviation of the Gaussian function and can be adjusted according to the pixel size of the detector. Summing the intensity contributions of all sampling rays with the same incident angle and normalizing the result yields the point spread function at the corresponding position:

$$psf\left(x_{2},y_{2}\right)=\frac{\sum_{p}w_{p}\left(x_{2},y_{2}\right)}{\sum_{x_{2},y_{2}}\sum_{p}w_{p}\left(x_{2},y_{2}\right)}$$

where p indexes the sampling rays of that incident angle.
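A minimal sketch of this binning step is shown below, assuming landing points already produced by the ray trace (random points stand in for them here), a 27×27 sampling grid and a Gaussian spread σ; none of the names are from the patent.

```python
import numpy as np

def psf_from_landing_points(hits, grid_xy, sigma):
    """Accumulate Gaussian-spread ray landing points into a normalized PSF.

    hits    : (N, 2) array of ray landing points (x1, y1) on the image plane
    grid_xy : (H, W, 2) array of pixel-center positions (x2, y2) of the sampling region
    sigma   : standard deviation of the Gaussian spread (detector-pixel scale)
    """
    psf = np.zeros(grid_xy.shape[:2])
    for x1, y1 in hits:
        d2 = (grid_xy[..., 0] - x1) ** 2 + (grid_xy[..., 1] - y1) ** 2
        psf += np.exp(-d2 / (2.0 * sigma**2))
    return psf / psf.sum()

# 27x27 sampling region with 0.01 mm pixels; 64x64 = 4096 random points stand in for traced rays.
px = 0.01
coords = (np.arange(27) - 13) * px
grid = np.stack(np.meshgrid(coords, coords, indexing="xy"), axis=-1)
hits = np.random.default_rng(0).normal(scale=2 * px, size=(64 * 64, 2))
print(psf_from_landing_points(hits, grid, sigma=0.5 * px).sum())  # -> 1.0
```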
Each of these point spread functions is convolved with the original image, and an interpolation function is then used to fuse the degraded images obtained from the individual point spread functions into the blur simulation image, to which a noise term is added. This process is described as:

$$I_{1}=\sum_{i,j}\mathrm{SINC}_{ij}\odot\left(I_{0}*psf_{ij}\right)+\eta$$

where I0 is the original image, I1 is the single-lens blur simulation image, η is the noise, psf_ij is the point spread function of the sampling region centered at (i, j), and SINC_ij denotes the SINC-function weight map whose center is at position (i, j).
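The fusion can be pictured with the sketch below: per-region convolution followed by blending with normalized weight maps. It is only an illustration under assumptions; in particular, Gaussian-shaped blending maps are used here in place of the SINC weight maps described above.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatially_varying_blur(img, psfs, centers, sigma, noise_std=0.0):
    """Blend per-region convolutions of `img` with the PSFs into one blurred image.

    img     : (H, W) image
    psfs    : list of (k, k) PSFs, one per sampling-region center
    centers : list of (row, col) centers of the sampling regions
    sigma   : width of the blending weight maps (Gaussian here for simplicity)
    """
    h, w = img.shape
    rows, cols = np.mgrid[0:h, 0:w]
    weights = np.stack([np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma**2))
                        for r, c in centers])
    weights /= weights.sum(axis=0, keepdims=True)        # weights sum to 1 at every pixel
    blurred = sum(wmap * fftconvolve(img, psf, mode="same")
                  for wmap, psf in zip(weights, psfs))
    return blurred + np.random.default_rng(0).normal(scale=noise_std, size=img.shape)

img = np.random.default_rng(1).random((256, 256))
psf = np.ones((27, 27)) / 27**2
print(spatially_varying_blur(img, [psf, psf], [(64, 64), (192, 192)], sigma=80.0).shape)
```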
Step 1-2: build the blur kernel learning module. The point spread function of the central field of view obtained in Step 1-1 is used as the estimated blur kernel, and a neural network is built to transform this estimated blur kernel into a blur kernel that characterizes the whole optical system, thereby realizing the learned blur kernel.
The estimated blur kernel psf00 is the point spread function of the central waveband of the central field of view obtained in Step 1; it is reshaped into a one-dimensional vector H0 = [h0, h1, …, h27×27] and fed into the ResDNN3 network established by the present invention.
The structure of the ResDNN3 network is shown in Fig. 2. It is a three-layer fully connected neural network with skip connections: every neuron is connected to all neurons of the adjacent layers, and each fully connected layer has 27×27 neurons. A skip connection means that the output of each layer of the network is added to its input, which enriches the information carried through the network.
Skip connections introduce the information of the previous layer in a simple way and add almost no extra computation. They are widely used in convolutional neural networks and are equally effective in fully connected networks. Let $n_{j}^{x}$ denote the j-th neuron of the x-th fully connected layer, $h_{i}^{x-1}$ the i-th element of the previous layer's input, $n_{j}^{x-1}$ the j-th neuron of the previous layer, and $w_{ij}^{x}$ and $b_{j}^{x}$ the weight and bias of the corresponding connection; then each neuron of the network is computed as:

$$n_{j}^{x}=\sum_{i}w_{ij}^{x}\,h_{i}^{x-1}+b_{j}^{x}+n_{j}^{x-1}$$

The weights and biases are the optimization variables of the neural network, and the outputs of all neurons of a fully connected layer form the output of that layer. The outputs of the three layers of the network are therefore, in turn:

$$H_{1}=f_{1}\!\left(H_{0}\right)+H_{0},\qquad H_{2}=f_{2}\!\left(H_{1}\right)+H_{1},\qquad H_{3}=f_{3}\!\left(H_{2}\right)+H_{2}$$

where $f_{x}(\cdot)$ denotes the x-th fully connected layer. The output H3 of the third layer is reshaped into an image matrix, forming the corrected blur kernel psf_H3. The corrected blur kernel is zero-padded to the same size as the blurred image I1; the zero-padded point spread function is denoted $\widetilde{psf}$.
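A minimal PyTorch sketch of a network with this structure (three fully connected layers of 27×27 units with additive skip connections, flattened kernel in, reshaped and zero-padded kernel out) is given below. The ReLU activation and the corner-anchored zero padding are choices made for this sketch, not details stated in the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResDNN3(nn.Module):
    """Three fully connected layers of 27*27 units, each with an additive skip connection."""

    def __init__(self, kernel_size=27):
        super().__init__()
        n = kernel_size * kernel_size
        self.kernel_size = kernel_size
        self.layers = nn.ModuleList([nn.Linear(n, n) for _ in range(3)])

    def forward(self, psf_estimate, image_size):
        # Flatten the estimated blur kernel into a vector H0.
        h = psf_estimate.reshape(1, -1)
        for layer in self.layers:
            h = F.relu(layer(h)) + h          # skip connection: layer output plus its input
        kernel = h.reshape(self.kernel_size, self.kernel_size)
        # Zero-pad the corrected kernel to the size of the blurred image I1.
        pad_h, pad_w = image_size[0] - self.kernel_size, image_size[1] - self.kernel_size
        return F.pad(kernel, (0, pad_w, 0, pad_h))

net = ResDNN3()
psf00 = torch.rand(27, 27)
print(net(psf00, (256, 256)).shape)   # torch.Size([256, 256])
```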
Step 1-3: build the inverse-filtering image restoration module. The adaptive Wiener filtering method is used as the inverse-filtering algorithm of the image restoration module, and the blur kernel used by the restoration module is the blur kernel output by the blur kernel learning module.
Wiener filtering, also called minimum mean-square-error filtering, is an inverse-filtering restoration algorithm that takes noise into account. Let $\hat{I}$ denote the restored image, Sη the power spectrum of the noise, Sf the power spectrum of the degradation function, F(·) the Fourier transform, and $\widetilde{psf}$ the blur kernel output by the blur kernel learning module; then image restoration by Wiener filtering is expressed as:

$$\hat{I}=F^{-1}\!\left(\frac{\overline{F(\widetilde{psf})}}{\left|F(\widetilde{psf})\right|^{2}+S_{\eta}/S_{f}}\,F(I_{1})\right)$$

The ratio Sη/Sf is difficult to compute exactly and has traditionally been set to a constant. In the present invention it is treated as an optimizable parameter that is adjusted adaptively during training. Let F⁻¹(·) be the inverse Fourier transform and K the adaptive parameter; then the restored image $\hat{I}$ is:

$$\hat{I}=F^{-1}\!\left(\frac{\overline{F(\widetilde{psf})}}{\left|F(\widetilde{psf})\right|^{2}+K}\,F(I_{1})\right)$$
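The adaptive Wiener step can be sketched in PyTorch as a frequency-domain division with a single learnable scalar K, as below. The circular roll used to center the zero-padded kernel is an assumption of this sketch rather than something specified in the patent.

```python
import torch

def adaptive_wiener(blurred, psf_padded, K):
    """Wiener deconvolution with a learnable noise-to-signal parameter K.

    blurred    : (H, W) single-lens blur simulation image I1
    psf_padded : (H, W) blur kernel from the learning module, zero-padded to image size
    K          : scalar tensor, optimized jointly with the rest of the system
    """
    # Shift the 27x27 kernel so its center sits at the origin of the FFT grid.
    psf_centered = torch.roll(psf_padded, shifts=(-13, -13), dims=(0, 1))
    H = torch.fft.fft2(psf_centered)
    G = torch.fft.fft2(blurred)
    restored = torch.fft.ifft2(torch.conj(H) / (H.abs() ** 2 + K) * G)
    return restored.real

K = torch.tensor(0.01, requires_grad=True)        # initial value used in the embodiment
blurred = torch.rand(256, 256)
psf = torch.zeros(256, 256)
psf[:27, :27] = torch.ones(27, 27) / 27**2
print(adaptive_wiener(blurred, psf, K).shape)     # torch.Size([256, 256])
```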
Step 2: the loss function is constructed. Let m and n denote the image dimensions, i and j the pixel position, and mseloss the squared-difference loss between the original image and the restored image; then the squared-difference loss is:

$$mseloss=\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(\hat{I}(i,j)-I_{0}(i,j)\right)^{2}$$

The additional constraint loss of the optical system is introduced through the difference between the current value kn of a constrained quantity and its threshold ky, with the sigmoid function as the activation: the loss term is not activated when kn ≥ ky and is activated when kn < ky. In the present invention, additional constraints can be imposed on the lens edge thickness and on the energy distribution; the general form is:

$$loss_{0}\left(k_{n}\right)=\mathrm{sigmoid}\left(k_{y}-k_{n}\right)$$

For a single lens, the constrained quantity kn is mainly of two types: lens edge thickness and energy distribution. Let r be the radial position, R1(r) and R2(r) the coordinate functions of the front and rear surfaces along the optical axis, and r0 the radial semi-diameter of the lens; then the edge-thickness quantity kn1 is:
$$k_{n1}=R_{2}\left(r_{0}\right)-R_{1}\left(r_{0}\right)$$
The system energy value can be expressed as the sum of the landing-point values of the sampled parallel ray array on the image plane; it can be represented by the total energy within the point-spread sampling region, or by the energy falling within a smaller region or even within the central pixel. Let m and n define the extent of the energy-transfer region under consideration; then the energy-distribution quantity is:
$$k_{n2}=\sum_{x\le m,\;y\le n}psf\left(x,y\right)$$
The squared-difference loss and the additional constraint losses are summed with weights to obtain the loss function loss of the whole end-to-end design. With α the weight of the squared-difference loss and β1, β2 the weights of the additional constraint losses:

$$loss=\alpha\cdot mseloss+\beta_{1}\cdot loss_{0}\left(k_{n1}\right)+\beta_{2}\cdot loss_{0}\left(k_{n2}\right)$$
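Putting the pieces together, the training objective described above can be sketched as follows; the sigmoid form of the constraint term follows the loss0 description given earlier, while the threshold and weight values are placeholders invented for the sketch.

```python
import torch

def constraint_loss(k_current, k_threshold):
    # Near 1 when the constrained quantity falls below its threshold, near 0 when well above it.
    return torch.sigmoid(k_threshold - k_current)

def total_loss(restored, original, edge_thickness, center_energy,
               edge_min=1.0, energy_min=0.3, alpha=1.0, beta1=0.1, beta2=0.1):
    mse = torch.mean((restored - original) ** 2)                    # squared-difference loss
    return (alpha * mse
            + beta1 * constraint_loss(edge_thickness, torch.tensor(edge_min))
            + beta2 * constraint_loss(center_energy, torch.tensor(energy_min)))

restored = torch.rand(256, 256)
original = torch.rand(256, 256)
print(total_loss(restored, original, edge_thickness=torch.tensor(2.3),
                 center_energy=torch.tensor(0.6)))
```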
Step 3: Using deep learning, the trainable parameters of the end-to-end single-lens imaging system are iteratively optimized to obtain the optimal single-lens imaging system parameters.
The initial aspheric parameters and the ResDNN3 network parameters are all initialized to 0. In the first training pass the optical system is therefore a standard spherical lens, the estimated blur kernel is the point spread function of that spherical lens at the 0° field of view, and the ResDNN3 network output for the estimated blur kernel is still that same point spread function; the initial value of the Wiener-filter noise parameter is set to 0.01. Compared with random initialization, this initialization has a clear physical meaning, and a reasonable imaging result is obtained already in the first training pass. The single-lens imaging system of the present invention therefore starts from a good initial structure, which greatly reduces the difficulty of the deep-learning training.
Clear scene images of size 256×256 are used as the dataset, with 350 images in the training set and 50 images in the test set. In each training iteration, the loss on the training set is computed for gradient descent and the parameters are updated; the loss on the test set is computed to verify the performance of the single-lens imaging system, without updating the parameters. The learning rate is set to 1×10⁻⁴, 100 iterations are run, and the parameters of the iteration with the smallest test-set loss are taken as the final optimization result.
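The joint optimization of Step 3 follows the usual pattern of computing the training loss, stepping the optimizer, and keeping the parameters with the lowest test loss. The self-contained sketch below only illustrates that pattern: a toy differentiable blur-and-restore stand-in replaces the ray-traced simulation and ResDNN3, the Adam optimizer is an assumed choice, and the tiny random data sets stand in for the 350 training and 50 test images.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins: one "lens" parameter controls a Gaussian blur kernel, and K is the learnable
# Wiener constant. The real system uses the ray-traced PSFs, ResDNN3 and the adaptive Wiener
# filter described above; only the joint optimization pattern is shown here.
def make_kernel(width, size=27):
    x = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * width ** 2))
    return g / g.sum()

def render_and_restore(img, width, K):
    kernel = make_kernel(F.softplus(width))                       # blur simulation (stand-in)
    blurred = F.conv2d(img[None, None], kernel[None, None], padding=13)[0, 0]
    padded = F.pad(kernel, (0, img.shape[1] - 27, 0, img.shape[0] - 27))
    H = torch.fft.fft2(torch.roll(padded, (-13, -13), dims=(0, 1)))
    return torch.fft.ifft2(torch.conj(H) / (H.abs() ** 2 + K) * torch.fft.fft2(blurred)).real

width = torch.tensor(3.0, requires_grad=True)         # stands in for the aspheric parameters
K = torch.tensor(0.01, requires_grad=True)            # Wiener noise parameter, initial value 0.01
opt = torch.optim.Adam([width, K], lr=1e-4)           # learning rate used in the embodiment

train_set = [torch.rand(128, 128) for _ in range(4)]  # stands in for the 350 training images
test_set = [torch.rand(128, 128) for _ in range(2)]   # stands in for the 50 test images

best_loss, best_params = float("inf"), None
for epoch in range(100):                              # 100 iterations, as in the embodiment
    for img in train_set:                             # training pass: update the parameters
        loss = torch.mean((render_and_restore(img, width, K) - img) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():                             # test pass: evaluate only
        test_loss = sum(torch.mean((render_and_restore(t, width, K) - t) ** 2) for t in test_set)
        if test_loss < best_loss:                     # keep the iteration with the smallest test loss
            best_loss, best_params = test_loss.item(), (width.item(), K.item())
print(best_loss, best_params)
```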
After optimization, the aspheric parameters a4, a6, a8 and a10 of the single lens become 1.766×10⁻⁵, −9.100×10⁻⁹, −4.052×10⁻¹¹ and 9.894×10⁻¹² respectively, and the noise constant parameter obtained by optimization in the Wiener filter is K = 2×10⁻⁴. The optical path of the designed single lens is shown in Fig. 3. Comparing the unoptimized standard spherical lens (a) with the optical system whose aspheric parameters have been optimized (b) in Fig. 4 shows that the end-to-end optimization of the single-lens surface shape makes the point spread functions of the edge fields more concentrated and more consistent with the point spread function of the central field; a single learned blur kernel is therefore sufficient for inverse-filtering the single-lens blurred image with good restoration quality. Comparing the estimated blur kernel (a) with the learned blur kernel characterizing the full field of view of the single lens (b) in Fig. 5 shows that the learned kernel is more complex and its distribution depends on spatial position, indicating that the ResDNN3 network achieves a good learning effect.
The restoration algorithm of the single-lens imaging system can restore the blurred image acquired by the single lens to a clear image close to the original. Using peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as evaluation metrics for the similarity of two images, the SSIM between the blurred image acquired by the single lens and the original image is 0.63 and the PSNR is 20.93; after restoration, the SSIM rises to 0.93 and the PSNR to 25.47. This quantitative comparison likewise demonstrates the effectiveness of the design of the present invention.
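Metrics such as those quoted above can be computed, for example, with scikit-image; the sketch below assumes floating-point images in [0, 1] and is only an illustration of how such numbers are obtained, not the patent's evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
original = rng.random((256, 256))
restored = np.clip(original + rng.normal(scale=0.05, size=original.shape), 0.0, 1.0)

print("PSNR:", peak_signal_noise_ratio(original, restored, data_range=1.0))
print("SSIM:", structural_similarity(original, restored, data_range=1.0))
```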
The present invention establishes an end-to-end single-lens imaging system framework in which the surface-shape parameters of the optical system, the ResDNN3 neural network parameters and the noise constant parameter of the Wiener-filtering image restoration algorithm are optimized simultaneously according to the imaging performance of the system.
The present invention proposes a fully connected neural network with skip connections (ResDNN3), which takes the estimated blur kernel as input and can be used to learn and correct the blur kernel of the optical system.
The present invention adds an additional optical-system constraint loss to the training optimization of the end-to-end single-lens imaging system, which can constrain the edge thickness and energy distribution of the designed single lens.
The present invention proposes an initialization method for the designed single-lens imaging system framework, giving the framework good initial values for training and greatly reducing the difficulty of training and optimizing it.
The above is only a preferred embodiment of the present application, and the protection scope of the present application is not limited thereto. Any change or replacement that a person skilled in the art could readily conceive within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be determined by the protection scope of the claims.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210522840.9A CN114967121B (en) | 2022-05-13 | 2022-05-13 | An end-to-end single-lens imaging system design method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114967121A CN114967121A (en) | 2022-08-30 |
CN114967121B true CN114967121B (en) | 2023-02-03 |
Family
ID=82984078
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210522840.9A Active CN114967121B (en) | 2022-05-13 | 2022-05-13 | An end-to-end single-lens imaging system design method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114967121B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116862800B (en) * | 2023-07-11 | 2024-01-30 | 哈尔滨工业大学 | Large-view-field single-lens space-variant blurred image restoration method and device |
CN117233960B (en) * | 2023-11-15 | 2024-01-23 | 清华大学 | Optical system online design method and device based on intelligent optical computing |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5786582A (en) * | 1992-02-27 | 1998-07-28 | Symbol Technologies, Inc. | Optical scanner for reading and decoding one- and two-dimensional symbologies at variable depths of field |
CN102395917A (en) * | 2009-02-17 | 2012-03-28 | 领先角膜控股有限责任公司 | Ophthalmic lens with optical sectors |
WO2017134275A1 (en) * | 2016-02-05 | 2017-08-10 | Eidgenossische Technische Hochschule Zurich | Methods and systems for determining an optical axis and/or physical properties of a lens and use of the same in virtual imaging and head-mounted displays |
WO2018045602A1 (en) * | 2016-09-07 | 2018-03-15 | 华中科技大学 | Blur kernel size estimation method and system based on deep learning |
CN110009674A (en) * | 2019-04-01 | 2019-07-12 | 厦门大学 | A real-time calculation method of monocular image depth of field based on unsupervised deep learning |
CN110458901A (en) * | 2019-06-26 | 2019-11-15 | 西安电子科技大学 | A Global Optimal Design Method for Photoelectric Imaging System Based on Computational Imaging |
CN111709895A (en) * | 2020-06-17 | 2020-09-25 | 中国科学院微小卫星创新研究院 | Blind image deblurring method and system based on attention mechanism |
CN112036137A (en) * | 2020-08-27 | 2020-12-04 | 哈尔滨工业大学(深圳) | Deep learning-based multi-style calligraphy digital ink simulation method and system |
CN112329920A (en) * | 2020-11-06 | 2021-02-05 | 深圳先进技术研究院 | Unsupervised training method and unsupervised training device of magnetic resonance parametric imaging model |
CN113077540A (en) * | 2021-03-31 | 2021-07-06 | 点昀技术(南通)有限公司 | End-to-end imaging equipment design method and device |
CN113191983A (en) * | 2021-05-18 | 2021-07-30 | 陕西师范大学 | Image denoising method and device based on deep learning attention mechanism |
WO2021218119A1 (en) * | 2020-04-30 | 2021-11-04 | 中国科学院深圳先进技术研究院 | Image toning enhancement method and method for training image toning enhancement neural network |
CN114063282A (en) * | 2021-11-30 | 2022-02-18 | 哈尔滨工业大学 | A method and device for optimizing the surface shape of a single lens with a large field of view based on a point spread function |
CN114418883A (en) * | 2022-01-18 | 2022-04-29 | 北京工业大学 | A Blind Image Deblurring Method Based on Depth Prior |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7616841B2 (en) * | 2005-06-17 | 2009-11-10 | Ricoh Co., Ltd. | End-to-end design of electro-optic imaging systems |
KR20200094058A (en) * | 2019-01-29 | 2020-08-06 | 한국과학기술원 | Lensless Hyperspectral Imaging Method and Apparatus Therefore |
CN113296259B (en) * | 2021-05-25 | 2022-11-08 | 中国科学院国家天文台南京天文光学技术研究所 | Super-resolution imaging method and device based on aperture modulation subsystem and deep learning |
- 2022-05-13: Application CN202210522840.9A filed in China (CN); granted as CN114967121B, status Active
Non-Patent Citations (2)
Title |
---|
- Image restoration and reconstruction based on simple-lens computational imaging (基于简单透镜计算成像的图像复原重建); Wang Xinhua et al.; Journal of Jilin University (Engineering and Technology Edition); 2017-05-31 (No. 03); full text *
- Maximum-likelihood restoration algorithm for spatially variant images (最大似然空间变化图像恢复算法); Wang Zhile et al.; Infrared and Laser Engineering; 2012-07-25 (No. 07); full text *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119937159A (en) * | 2025-04-07 | 2025-05-06 | 同济大学 | Simple optical system design method based on consistency constraints of optical transfer function |
CN119937159B (en) * | 2025-04-07 | 2025-06-24 | 同济大学 | Simple optical system design method based on consistency constraints of optical transfer function |
Also Published As
Publication number | Publication date |
---|---|
CN114967121A (en) | 2022-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114967121B (en) | An end-to-end single-lens imaging system design method | |
Sun et al. | End-to-end complex lens design with differentiable ray tracing | |
Nikonorov et al. | Toward ultralightweight remote sensing with harmonic lenses and convolutional neural networks | |
US11721002B2 (en) | Imaging system and method for imaging objects with reduced image blur | |
Tseng et al. | Differentiable compound optics and processing pipeline optimization for end-to-end camera design | |
Elmalem et al. | Learned phase coded aperture for the benefit of depth of field extension | |
Akpinar et al. | Learning wavefront coding for extended depth of field imaging | |
US20090040330A1 (en) | End-to-End Design of Electro-Optic Imaging Systems | |
CN114897752B (en) | A single lens large depth of field computational imaging system and method based on deep learning | |
CN105046659B (en) | A kind of simple lens based on rarefaction representation is calculated as PSF evaluation methods | |
CN110533607A (en) | A kind of image processing method based on deep learning, device and electronic equipment | |
CN113077540B (en) | End-to-end imaging equipment design method and device | |
CN111415303B (en) | A zone plate coded aperture imaging method and device based on deep learning | |
CN102170526A (en) | Method for calculation of defocus fuzzy core and sharp processing of defocus fuzzy image of defocus fuzzy core | |
Jiang et al. | Annular computational imaging: Capture clear panoramic images through simple lens | |
CN114063282B (en) | Large-view-field single lens surface shape optimization method and device based on point spread function | |
Shi et al. | Rapid all-in-focus imaging via physical neural network optical encoding | |
Ji et al. | Learned large field-of-view imager with a simple spherical optical module | |
CN113191959A (en) | Digital imaging system limit image quality improving method based on degradation calibration | |
CN114859550B (en) | End-to-end design method for Fresnel single-lens calculation imaging system | |
Wang et al. | Simplified design method for optical imaging systems based on aberration characteristics of optical-digital joint optimization | |
Zhou et al. | DR-UNet: dynamic residual U-Net for blind correction of optical degradation | |
Mao | Image restoration methods for imaging through atmospheric turbulence | |
Zhong et al. | HDSR: Image super-resolution method for harmonic diffraction optical imaging system based on plug and play technology | |
CN118075590B (en) | Achromatic and extended depth-of-field imaging system and imaging method based on multiple virtual lenses |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
- PB01 | Publication | ||
- SE01 | Entry into force of request for substantive examination | ||
- GR01 | Patent grant | ||