CN118072097A - Correlation imaging target recognition method and device based on all-optical neural network - Google Patents
- Publication number: CN118072097A
- Application number: CN202410262746.3A
- Authority: CN (China)
- Prior art keywords: neural network, optical, target, diffraction, imaging
- Prior art date: 2024-03-07
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V10/764—Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
- G06F18/241—Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/04—Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
- G06N3/067—Neural networks; physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using optical means
- G06N3/084—Neural networks; learning methods; backpropagation, e.g. using gradient descent
- G06V10/82—Image or video recognition or understanding using pattern recognition or machine learning; using neural networks
Abstract
Description
Technical Field

The present invention belongs to the technical field of optical imaging, and in particular relates to a correlation imaging target recognition method and device based on an all-optical neural network.
Background

An optical neural network is a new type of artificial neural network built with optical technologies such as optical interconnection and optical devices, and offers massively parallel processing and information transfer at the speed of light. Photons serve as the physical carriers of the basic computing units of an artificial neural network algorithm, yielding a new high-performance computing architecture: sensors record the incident and outgoing light, the relevant parameters of the formulas describing the system's physical behaviour are inferred from these inputs and outputs, and the parameters are then adjusted so that the physical properties of light carry out complex computations, with the optical computation itself requiring no additional energy input. In recent years, with deepening research, all-optical neural network frameworks based on physical mechanisms have gradually come to perform a variety of complex tasks: all-optical image analysis, feature detection, object classification, and so on. The diffractive networks within all-optical neural networks use the diffraction of light through successive transmissive layers to compute a given task. Each diffractive layer typically consists of tens of thousands of diffractive neurons that modulate the phase and/or amplitude of the incident light; deep-learning tools such as error backpropagation optimize the modulation values of each layer, mapping a complex-valued input field that contains the optical information to be processed onto the desired output field, with the advantages of high speed, parallelism, and low power consumption.

Correlation imaging, also known as ghost imaging or quantum imaging, is an actively illuminated staring imaging method and a relatively new imaging technique. Whereas conventional imaging uses the intensity distribution of the light field directly, correlation imaging exploits the fluctuation characteristics of the light-field intensity, which enables lens-free imaging and strong robustness to disturbances. A classical correlation imaging system contains two optical paths. One beam illuminates the target and forms the signal path; the total reflected or transmitted intensity is collected and recorded by a bucket detector without spatial resolution. The other beam does not pass through the target and forms the reference path; after propagating freely over the same distance it is received by a CCD with spatial resolution. Correlating the information recorded in the signal and reference paths then recovers the target image. Correlation imaging separates the object from the image, can surpass the diffraction limit, and has strong anti-interference capability, giving it important application prospects in fields such as military defence, medical imaging, and remote sensing. However, typical CCD detection rates are on the order of hundreds of hertz, which limits the sampling rate of correlation imaging, so the speed at which target objects can be recognized is constrained by the hardware.
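For orientation, the correlation step described above is conventionally expressed as a second-order intensity correlation between the spatially resolved reference patterns and the bucket-detector signal. A standard form from the ghost-imaging literature (quoted here for context; the patent text itself does not state this formula) is

$$
O(x,y)\;\propto\;\frac{1}{N}\sum_{n=1}^{N}S_n\,I_n(x,y)\;-\;\Bigl(\frac{1}{N}\sum_{n=1}^{N}S_n\Bigr)\Bigl(\frac{1}{N}\sum_{n=1}^{N}I_n(x,y)\Bigr),
$$

where $I_n(x,y)$ is the $n$-th reference (or computed) speckle pattern, $S_n$ is the corresponding bucket-detector value, and $N$ is the number of samples.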
Summary of the Invention

The purpose of the present invention is to address the above problems in the prior art by providing a correlation imaging target recognition method and device based on an all-optical neural network which, combined with a novel all-optical diffractive deep neural network, recognizes and classifies target objects at the speed of light by directly processing the light-intensity information detected by a bucket-detector array.

To achieve the above object, the present invention adopts the following technical solutions:

In a first aspect, a correlation imaging target recognition method based on an all-optical neural network is provided, comprising:

physically creating an all-optical deep diffractive neural network using multiple transmissive or reflective layers;

training and validating the created all-optical diffractive deep neural network to determine the target all-optical diffractive deep neural network structure;

physically fabricating the target all-optical diffractive deep neural network structure and building a correlation imaging optical path system;

extracting the bucket-detector signals in the correlation imaging optical path system;

completing target recognition using the bucket-detector signals.
In a preferred scheme, in the step of physically creating the all-optical deep diffractive neural network using multiple transmissive or reflective layers, each point on each transmissive or reflective layer transmits or reflects the incoming light, and each point represents a neuron connected to the neurons of the next layer through optical diffraction. The transmission or reflection coefficient of each neuron acts as a multiplicative bias term, so that the physical mechanism of light propagation emulates the weights and biases of the neural network training process, which are then iteratively adjusted by error backpropagation.

In a preferred scheme, the step of training and validating the created all-optical diffractive deep neural network comprises: applying deep learning to the created network, feeding training data into the input layer, and iteratively adjusting the phase values of the neurons in each diffractive layer according to the output of the all-optical network so as to perform the recognition and classification of target objects; the network is trained and validated with the training set and the test set, respectively, of the target data set to be classified and recognized.

In a preferred scheme, after the created all-optical diffractive deep neural network has been trained and validated, the target network structure is fixed and the phase values of the neurons in each layer are determined. For a coherent diffractive network that applies pure phase modulation to the input light, each layer can then be approximated as an optical element, so the target all-optical diffractive deep neural network structure is materialized by physical fabrication.

In a preferred scheme, the correlation imaging optical path system comprises, along the direction of light propagation, a light source, the target all-optical diffractive deep neural network structure, an object to be recognized, a lens, and a bucket-detector array. Light emitted by the light source passes in turn through the target all-optical diffractive deep neural network structure, the object to be recognized, and the lens, and is finally received by the bucket-detector array containing multiple bucket detectors, yielding multiple bucket-detector signals.

In a preferred scheme, in the step of extracting the bucket-detector signals in the correlation imaging optical path system, the bucket-detector array contains 20 bucket detectors, and the light-intensity signals received by the 20 bucket detectors form the light-intensity data set s = [s0, s1, …, s19].

In a preferred scheme, the step of completing target recognition using the bucket-detector signals comprises: performing pairwise differential processing on the light-intensity data in the data set s = [s0, s1, …, s19] to obtain differential signal values, then applying the SoftMax function to the differential values; the maximum of the processed values indicates the classification category of the object to be recognized.

The differential calculation expression is as follows:

where i = 0, 1, …, 9.
In a second aspect, a correlation imaging target recognition system based on an all-optical neural network is provided, comprising:

a neural network physical creation module, configured to physically create an all-optical deep diffractive neural network using multiple transmissive or reflective layers;

a network structure training module, configured to train and validate the created all-optical diffractive deep neural network and determine the target all-optical diffractive deep neural network structure;

a correlation imaging optical path system construction module, configured to physically fabricate the target all-optical diffractive deep neural network structure and build the correlation imaging optical path system;

a bucket-detector signal extraction module, configured to extract the bucket-detector signals in the correlation imaging optical path system;

a bucket-detector signal processing module, configured to complete target recognition using the bucket-detector signals.

In a third aspect, an electronic device is provided, comprising:

a memory storing at least one instruction; and a processor executing the instructions stored in the memory to implement the correlation imaging target recognition method based on the all-optical neural network.

In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program; when the computer program is executed by a processor, the correlation imaging target recognition method based on the all-optical neural network is implemented.

Compared with the prior art, the present invention has at least the following beneficial effects:

Correlation imaging target recognition based on an all-optical diffractive deep neural network (D2NN) removes the limitation of traditional correlation imaging, in which the speed of target-object recognition is bounded by the detection hardware. A correlation imaging optical path system is built around the physically created all-optical deep diffractive neural network, and target recognition is completed simply by extracting the bucket-detector signals in that optical path and processing them. The method offers a new approach to object image recognition and classification in the correlation imaging field: after the network parameters are set, a pure-phase-modulation all-optical diffractive deep neural network is trained and validated, the target network structure is determined and physically fabricated by techniques such as 3D printing or lithography, and the materialized network is placed in the correlation imaging optical path; recognition and classification of target objects is then performed at the speed of light by directly processing the light-intensity information detected by the bucket-detector array. Experimental results show that the recognition and classification accuracy of the proposed method reaches 90.89%. The method maps a complex-valued input field containing the optical information to be processed onto the desired output field; it is fast, parallel, and low-power, compensates for the hardware-limited recognition speed of conventional correlation imaging, and provides a new idea and method for target recognition and classification in this field.
Brief Description of the Drawings

In order to describe the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present invention; other related drawings can be obtained from them by those of ordinary skill in the art without creative effort.

FIG. 1 is a flow chart of the correlation imaging target recognition method based on an all-optical neural network according to an embodiment of the present invention.

FIG. 2 is a schematic diagram of the working principle of correlation imaging.

FIG. 3 is a schematic diagram of the working principle of the correlation imaging optical path system built in an embodiment of the present invention.

FIG. 4 is a schematic diagram of the pure-phase-modulation diffractive deep neural network trained on the target data set to be classified and recognized.

FIG. 5(a) is a visualization of the input handwritten digit "2" and the corresponding output signal in an embodiment of the present invention.

FIG. 5(b) is a visualization of the input handwritten digit "3" and the corresponding output signal in an embodiment of the present invention.

FIG. 5(c) is a visualization of the input handwritten digit "9" and the corresponding output signal in an embodiment of the present invention.

FIG. 6(a) shows the loss curves of the target data set during training and validation with Layer number = 9 and Epoch = 30 in an embodiment of the present invention.

FIG. 6(b) shows the accuracy curves of the target data set during training and validation with Layer number = 9 and Epoch = 30 in an embodiment of the present invention.

FIG. 6(c) shows the loss curves of the target data set during training and validation with Layer number = 5 and Epoch = 50 in an embodiment of the present invention.

FIG. 6(d) shows the accuracy curves of the target data set during training and validation with Layer number = 5 and Epoch = 50 in an embodiment of the present invention.
Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, those of ordinary skill in the art can obtain other embodiments without creative effort.

In the existing correlation imaging field, typical CCD detection rates are on the order of hundreds of hertz, which limits the sampling rate of correlation imaging. If coherent light modulated by a phase and/or amplitude modulation device is used as the light source, the distribution of the coherent light after the modulation device is known, and the speckle field formed by diffraction can be calculated from the light-field propagation function. The reference path can therefore be omitted, turning the traditional correlation imaging system into a computational correlation imaging system with only a signal path. Because the intensity distribution, field of view, and transverse coherence length of the speckle field are diverse and controllable, computational correlation imaging greatly increases the imaging speed. With further research into computational correlation imaging, the introduction of neural networks has further improved the imaging speed and quality of computational correlation imaging systems, providing a new research perspective.

Referring to FIG. 1, the correlation imaging target recognition method based on an all-optical neural network according to an embodiment of the present invention mainly comprises:

(1) building the all-optical diffractive deep neural network;

(2) training and validating the all-optical diffractive deep neural network;

(3) 3D printing the trained mask plates and building the optical path;

(4) extracting the bucket-detector signals from the bucket-detector array;

(5) differentially processing the bucket-detector signals from the bucket-detector array to complete target recognition.
In a possible implementation, step (1), building the all-optical diffractive deep neural network, proceeds as follows:

The all-optical deep diffractive neural network is first created physically by using multiple transmissive or reflective layers. Each point on each layer either transmits or reflects the incoming light, and each point represents an artificial neuron connected to the neurons of the next layer by optical diffraction. As in a standard deep neural network, the transmission or reflection coefficient of each point, or neuron, can be regarded as a multiplicative bias term: the physical mechanism of light propagation emulates the weights and biases of the neural network training process, which are then iteratively adjusted by error backpropagation.
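As a rough illustration of this layer model, the sketch below propagates a complex field through a stack of pure-phase layers with the angular-spectrum method, treating each layer as a multiplicative phase mask, which is the standard numerical model for diffractive networks of this kind. The wavelength, layer spacing, and the choice of the angular-spectrum propagator are illustrative assumptions; only the 400 μm neuron pitch and the 200 × 200 layer size echo the embodiment described later.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Free-space propagation of a complex field over a distance z (angular spectrum)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    # keep propagating components only; evanescent components are clipped to zero
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / wavelength**2 - fxx**2 - fyy**2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def d2nn_forward(field, phase_masks, wavelength=700e-9, dx=400e-6, z=0.03):
    """Pass a complex input field through successive pure-phase diffractive layers."""
    for phase in phase_masks:
        field = field * np.exp(1j * phase)                    # multiplicative phase "neurons"
        field = angular_spectrum_propagate(field, wavelength, dx, z)
    return np.abs(field) ** 2                                 # detected intensity at the output plane

# illustrative call: five 200 x 200 layers with random (untrained) phase values
masks = [2 * np.pi * np.random.rand(200, 200) for _ in range(5)]
intensity = d2nn_forward(np.ones((200, 200), dtype=complex), masks)
```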
In a possible implementation, step (2), training and validating the all-optical diffractive deep neural network, proceeds as follows:

The all-optical diffractive deep neural network is trained by deep learning: training data are fed into the input layer, and the phase values of the neurons in each diffractive layer are iteratively adjusted according to the output of the all-optical network so as to perform the recognition and classification of target objects. The neural network is trained and validated with the training set and the test set, respectively, of the target data set to be classified and recognized.
In a possible implementation, step (3), 3D printing the trained mask plates and building the optical path, proceeds as follows:

After the training and validation stages are completed, the network design is fixed and the phase values of the neurons in each layer are determined. For a coherent diffractive network that applies pure phase modulation to the input light, each layer can then be approximated as a thin optical element, so the all-optical neural network can be physically fabricated by techniques such as 3D printing or lithography and will then execute the specific task for which it was trained at the speed of light. In the present invention the trained network with fixed parameters is materialized by 3D printing; after printing, the apparatus is assembled to complete the experimental part. As shown in FIG. 3, the correlation imaging optical path system comprises, along the direction of light propagation, a light source 1, the target all-optical diffractive deep neural network structure 2, an object to be recognized 3, a lens 4, and a bucket-detector array 5. Light emitted by the light source 1 passes in turn through the target all-optical diffractive deep neural network structure 2, the object to be recognized 3, and the lens 4, and is finally received by the bucket-detector array 5, which contains multiple bucket detectors 7, yielding multiple bucket-detector signals. Here the printed mask plates constitute the target all-optical diffractive deep neural network structure 2 and apply pure phase modulation to the input visible light; the light source 1 together with the target all-optical diffractive deep neural network structure 2 can be roughly regarded as a single light source capable of recognizing the target object.

In a possible implementation, step (4), extracting the bucket-detector signals, proceeds as follows:

In this embodiment the bucket-detector array 5 contains 20 bucket detectors 7, and the light-intensity signals received by the 20 bucket detectors 7 form the light-intensity data set s = [s0, s1, …, s19].
In a possible implementation, step (5), differential processing of the bucket-detector signals from the bucket-detector array to complete target recognition, proceeds as follows: the light-intensity data in the data set s = [s0, s1, …, s19] are differenced pairwise to obtain differential signal values, the SoftMax function is applied to the differential values, and the maximum of the processed values gives the classification category of the object to be recognized 3. The differential calculation expression is as follows:

where i = 0, 1, …, 9.
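The differential pairing itself is defined by the expression referenced above, which is not reproduced in this text. Purely as an illustration of the processing chain in this step, the sketch below assumes the pairing Δs_i = s_{2i} − s_{2i+1} (two detectors per class whose difference scores that class); the pairing rule and the function name are assumptions and may differ from the formula in the original filing.

```python
import numpy as np

def classify_from_bucket_signals(s):
    """Differential + SoftMax read-out of 20 bucket-detector intensities.

    Assumes the pairing ds_i = s[2i] - s[2i+1], i = 0..9 (one detector pair
    per class); the actual pairing in the patent's formula may differ.
    """
    s = np.asarray(s, dtype=float)        # s = [s0, s1, ..., s19]
    ds = s[0::2] - s[1::2]                # ten differential signal values
    e = np.exp(ds - ds.max())             # SoftMax, shifted for numerical stability
    probs = e / e.sum()
    return int(np.argmax(probs)), probs   # predicted class and per-class scores

# example: classify one measured signal vector
label, scores = classify_from_bucket_signals(np.random.rand(20))
```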
FIG. 2 is a schematic diagram of the principle of correlation imaging. In FIG. 2, the light source 1 emits the light; the spatial modulator 6 applies phase modulation to the light; the object to be recognized 3 is the target; the lens 4 allows the light that has passed the object to be collected more efficiently by the bucket detector 7; and the bucket detector 7 is the signal-receiving device. Coherent light modulated by a phase or amplitude modulation device serves as the light source 1 for correlation imaging. After leaving the light source 1, the light passes in turn through the spatial light modulator 6, the object to be recognized 3, and the lens 4, and is finally received by the bucket detector 7. The modulated coherent light forms a speckle field in the far field through Fraunhofer diffraction. Since the distribution of the coherent light after the phase or amplitude modulation device is known, the speckle field formed by diffraction can be calculated from the propagation function of the light field. Therefore, in this correlation imaging optical path the image of the object to be recognized 3 can be recovered by correlating the speckle patterns produced by the spatial modulator 6 with the signals received by the bucket detector 7.
Referring to FIG. 3, the present invention first builds the all-optical diffractive deep neural network and simulates a network that applies pure phase modulation to light in the visible band. Each point on each layer either transmits or reflects the incoming light; each point represents an artificial neuron connected to the neurons of the next layer by optical diffraction. As in a standard deep neural network, the transmission or reflection coefficient of each point, or neuron, can be regarded as a multiplicative bias term: the physical mechanism of light propagation emulates the weights and biases of the training process, which are then iteratively adjusted by error backpropagation. The network is set as a fully connected network of 8 cm × 8 cm with neurons of 400 μm × 400 μm, and is trained on the target data set to be classified and recognized. In this embodiment the batch size is set to 32; that is, the 60,000 images of the training set are used in groups of 32 for 1,875 iterations, and the 10,000 images of the test set are used in groups of 32 for 313 validation iterations. Different numbers of network layers and training epochs are then tried, and by analysing the changes in accuracy and loss on the training and validation data sets, the number of training epochs and network layers best suited to the model is sought. Based on this comparison, the embodiment selects a five-layer diffractive neural network, so the number of neurons is 200 × 200 × 5 = 200,000; because of the nature of light propagation the network is fully connected, giving (200 × 200)² × 5 = 8 billion connections. After the training and validation stages, a five-layer all-optical diffractive deep neural network dedicated to target-object recognition and classification is obtained, as shown in FIG. 4. Once training and validation are complete, the network design is fixed and the phase values of the neurons in each layer are determined. For a pure-phase coherent diffractive network each layer can be approximated as a thin optical element, so 3D printing is used to physically fabricate and materialize the trained network; after printing, the optical path is assembled according to the experimental setup of FIG. 3.
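For completeness, the sketch below re-expresses the propagation model of the earlier example in a differentiable framework so that the phase values can be trained by error backpropagation, as this paragraph describes. Only the five layers, the 200 × 200 neurons per layer, the 400 μm pixel pitch, and the batch size of 32 come from the text above; the detector read-out (mean intensity in ten bands of the output plane), the wavelength, the layer spacing, the optimizer, and the synthetic batch are assumptions made purely for illustration.

```python
import torch

class D2NN(torch.nn.Module):
    """Trainable pure-phase diffractive network (illustrative sketch)."""
    def __init__(self, layers=5, n=200, wavelength=700e-9, dx=400e-6, z=0.03):
        super().__init__()
        self.phases = torch.nn.ParameterList(
            [torch.nn.Parameter(2 * torch.pi * torch.rand(n, n)) for _ in range(layers)])
        fx = torch.fft.fftfreq(n, d=dx)
        fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
        kz = 2 * torch.pi * torch.sqrt((1.0 / wavelength**2 - fxx**2 - fyy**2).clamp(min=0.0))
        self.register_buffer("h", torch.polar(torch.ones_like(kz), kz * z))  # free-space transfer function

    def forward(self, field):                                        # field: (batch, n, n), complex
        for phase in self.phases:
            field = field * torch.polar(torch.ones_like(phase), phase)  # exp(i*phase) mask
            field = torch.fft.ifft2(torch.fft.fft2(field) * self.h)     # propagate to the next layer
        intensity = field.abs() ** 2
        # crude read-out: mean intensity in ten bands of the output plane -> class scores
        return intensity.reshape(field.shape[0], 10, -1).mean(dim=2)

model = D2NN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

# one optimisation step on a synthetic batch of 32 (stand-in for the 60,000-image
# training set and 10,000-image test set described above)
images = torch.rand(32, 200, 200).to(torch.complex64)
labels = torch.randint(0, 10, (32,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()                                                      # error backpropagation over the phase values
optimizer.step()
```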
In FIG. 3, the light source 1 emits the light; the target all-optical diffractive deep neural network structure 2 applies phase modulation to the light; the object to be recognized 3 is the target; the lens 4 allows the light that has passed the object to be recognized 3 to be collected more efficiently by the bucket-detector array 5; and the bucket-detector array 5 is the signal-receiving device. After leaving the light source 1, the light passes in turn through the target all-optical diffractive deep neural network structure 2, the object to be recognized 3, and the lens 4; the light-intensity information is finally received by the detector array 5, which contains 20 bucket detectors 7, yielding 20 bucket-detector signals in turn. The printed mask plates constitute the target all-optical diffractive deep neural network structure 2 and apply pure phase modulation to the input visible light; the light source 1 together with the mask plates can be roughly regarded as a single light source capable of recognizing the object to be recognized 3.

At the end of the experiment the bucket-detector signals are extracted: the light-intensity information received by the bucket-detector array 5 is measured, giving the signal values s = [s0, s1, …, s19]. These 20 values are differenced pairwise and the SoftMax function is applied to the differential values; according to the pre-trained network model, the maximum of the resulting signals indicates the classification category of the target object. Recognition and classification of target objects based on correlation imaging under an all-optical neural network is thus achieved, and both the pre-experiment simulations and the experiment itself demonstrate the inference capability of this framework, providing a new route to target-object recognition and classification in the correlation imaging field.

Referring to FIGS. 5(a), 5(b), and 5(c), these show visualizations of the experimental input signals (the handwritten digits "2", "3", and "9") and the output signals (the bucket-detector array signals after differential and SoftMax processing). The red boxes in the figures mark the maximum of the final light-intensity information, i.e. the predicted classification category. After the trained diffractive deep neural network is obtained and materialized, the bucket-detector array signal values are measured experimentally; after differential and SoftMax processing, the target object is recognized and classified from the maximum of the processed bucket signals.

FIGS. 6(a) and 6(b) and FIGS. 6(c) and 6(d) show the loss and accuracy curves of the target data set during training and validation when the all-optical diffractive deep neural network is trained with Layer number = 9, Epoch = 30 and Layer number = 5, Epoch = 50, respectively, where Layer number is the number of network layers, Epoch is the number of training epochs, Loss is the loss value, and Accuracy is the classification accuracy. The loss curves in FIGS. 6(a) and 6(c) show that the training fits well, with no overfitting or underfitting. The accuracy curves in FIGS. 6(b) and 6(d) show that after a certain number of passes over the data the classification accuracy stabilizes, reaching 90.89% and 88.99%, respectively.
Another embodiment of the present invention further provides a correlation imaging target recognition system based on an all-optical neural network, comprising:

a neural network physical creation module, configured to physically create an all-optical deep diffractive neural network using multiple transmissive or reflective layers;

a network structure training module, configured to train and validate the created all-optical diffractive deep neural network and determine the target all-optical diffractive deep neural network structure;

a correlation imaging optical path system construction module, configured to physically fabricate the target all-optical diffractive deep neural network structure and build the correlation imaging optical path system;

a bucket-detector signal extraction module, configured to extract the bucket-detector signals in the correlation imaging optical path system;

a bucket-detector signal processing module, configured to complete target recognition using the bucket-detector signals.

Another embodiment of the present invention further provides an electronic device, comprising: a memory storing at least one instruction; and a processor executing the instructions stored in the memory to implement the correlation imaging target recognition method based on the all-optical neural network.

Another embodiment of the present invention further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the correlation imaging target recognition method based on the all-optical neural network is implemented.
Exemplarily, the instructions stored in the memory may be divided into one or more modules/units, which are stored in the computer-readable storage medium and executed by the processor to complete the correlation imaging target recognition method based on the all-optical neural network of the present invention. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, and these segments describe the execution of the computer program in the server.

The electronic device may be a computing device such as a smartphone, a notebook computer, a handheld computer, or a cloud server. The electronic device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that the electronic device may also include more or fewer components, a combination of certain components, or different components; for example, it may also include input/output devices, network access devices, a bus, and so on.

The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. A general-purpose processor may be a microprocessor or any conventional processor.

The memory may be an internal storage unit of the server, such as the server's hard disk or internal memory. The memory may also be an external storage device of the server, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the server. Further, the memory may include both an internal storage unit and an external storage device of the server. The memory is used to store the computer-readable instructions and the other programs and data required by the server, and may also be used to temporarily store data that has been output or is to be output.

It should be noted that, since the information exchange and execution between the above module units are based on the same concept as the method embodiment, their specific functions and technical effects can be found in the method embodiment and are not repeated here.

Those skilled in the art will clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example; in practical applications the above functions may be assigned to different functional units or modules as needed, i.e. the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from one another and are not intended to limit the scope of protection of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiment, which is not repeated here.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes of the above method embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor it implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.

In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.

The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the scope of protection of the present application.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410262746.3A CN118072097A (en) | 2024-03-07 | 2024-03-07 | Correlation imaging target recognition method and device based on all-optical neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410262746.3A CN118072097A (en) | 2024-03-07 | 2024-03-07 | Correlation imaging target recognition method and device based on all-optical neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118072097A (en) | 2024-05-24 |
Family
ID=91110773
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410262746.3A (Pending) | Correlation imaging target recognition method and device based on all-optical neural network | 2024-03-07 | 2024-03-07 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118072097A (en) |
2024-03-07: CN application CN202410262746.3A filed; published as CN118072097A (en); status: active, Pending
Legal Events

Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |