CN115220211A - Microscopic imaging system and method based on deep learning and light field imaging - Google Patents
Microscopic imaging system and method based on deep learning and light field imaging
- Publication number
- CN115220211A CN202210902191.5A
- Authority
- CN
- China
- Prior art keywords
- camera sensor
- resolution
- dimensional image
- vcd
- imaging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000003384 imaging method Methods 0.000 title claims abstract description 90
- 238000013135 deep learning Methods 0.000 title claims abstract description 26
- 238000000034 method Methods 0.000 title claims abstract description 16
- 238000012549 training Methods 0.000 claims description 10
- 230000006870 function Effects 0.000 claims description 9
- 230000008569 process Effects 0.000 claims description 9
- 238000000386 microscopy Methods 0.000 claims description 8
- 238000013528 artificial neural network Methods 0.000 claims description 7
- 230000002452 interceptive effect Effects 0.000 claims description 5
- 230000003068 static effect Effects 0.000 claims description 5
- 238000012360 testing method Methods 0.000 claims description 5
- 239000011159 matrix material Substances 0.000 claims description 3
- 238000004624 confocal microscopy Methods 0.000 claims description 2
- 238000013527 convolutional neural network Methods 0.000 abstract description 8
- 238000005516 engineering process Methods 0.000 description 7
- 230000003287 optical effect Effects 0.000 description 5
- 210000004027 cell Anatomy 0.000 description 3
- 238000001514 detection method Methods 0.000 description 3
- 238000011160 research Methods 0.000 description 3
- 238000011161 development Methods 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 238000000399 optical microscopy Methods 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 230000002159 abnormal effect Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000012292 cell migration Effects 0.000 description 1
- 239000002131 composite material Substances 0.000 description 1
- 238000013136 deep learning model Methods 0.000 description 1
- 230000001066 destructive effect Effects 0.000 description 1
- 238000009647 digital holographic microscopy Methods 0.000 description 1
- 210000003743 erythrocyte Anatomy 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 244000005700 microbiome Species 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 238000012805 post-processing Methods 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B21/00—Microscopes
- G02B21/36—Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
- G02B21/365—Control or image processing arrangements for digital or video microscopes
- G02B21/367—Control or image processing arrangements for digital or video microscopes providing an output produced by processing a plurality of individual source images, e.g. image tiling, montage, composite images, depth sectioning, image comparison
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- Analytical Chemistry (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Chemical & Material Sciences (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Optics & Photonics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Microscopes, Condenser (AREA)
Abstract
Description
Technical Field

The invention relates to a microscopic imaging system and method based on deep learning and light field imaging, and belongs to the technical field of light field microscopic imaging.

Background Art

A light field imaging system captures the spatially distributed four-dimensional light field with an optical device and then computes the corresponding images according to the requirements of different applications. Computational light field imaging is built on the four-dimensional light field; it aims to establish the relationships of light across dimensions such as the spatial domain, viewing angle, spectrum, and time domain, and to realize coupled perception, decoupled reconstruction, and intelligent processing for multi-dimensional, multi-scale imaging of large dynamic scenes. Light field imaging is gradually being applied in life sciences, industrial inspection, national security, unmanned systems, and virtual/augmented reality, and has important academic research value and industrial application prospects.

Light field microscopy (LFM) is realized by inserting a microlens array that captures light field information at the relay image plane of a conventional optical microscope. Multi-view images and images of multiple focal planes can be reconstructed by inverting the 4D light field data, and three-dimensional microscopic imaging can be achieved by introducing deconvolution and tomographic reconstruction algorithms. Because all of this post-processing can be performed from a single exposure, LFM has unique advantages for observing moving microorganisms and light-sensitive samples. Thanks to its non-contact, non-destructive character, optical microscopy has long been an important tool in biomedical research. Since 1873, however, the resolution limit of optical microscopy has been considered to be about 200 nm, so it cannot clearly resolve biological structures smaller than 200 nm.

Obtaining high-resolution images with a light field microscope is of great significance for both biological research and medicine. Digital microscopic imaging differs from conventional optical microscopy in that the biological parameters and morphology of cells can be obtained from a reconstructed hologram, making it an effective non-contact, non-destructive three-dimensional imaging technique. With the development of image sensors and the improvement of hardware computing power, digital holographic microscopy has made notable progress and breakthroughs in the detection of living biological cells, especially red blood cells. Today, digital microscopic imaging is widely used in cell migration analysis and the study of abnormal cell behavior, and is also used extensively in medical imaging instruments.

However, on the one hand, the images obtained by digital microscopic imaging cannot reach very high accuracy because of the limitations of equipment and imaging technology. On the other hand, limited by sensor resolution (the smallest change in the measured quantity that the sensor can perceive), light field cameras usually trade spatial resolution (the size of the smallest unit that can be resolved in detail) for angular resolution (the ability of an imaging system or its elements to distinguish two adjacent objects at the smallest separation), which results in unclear images. The limited spatial resolution is therefore the main obstacle to the development of light field cameras.

To address these problems, Yoon et al. first proposed super-resolution reconstruction of light field data based on convolutional neural networks (Yoon Y, Jeon H G, Yoo D, et al. Light Field Image Super-Resolution using Convolutional Neural Network [J]. IEEE Signal Processing Letters, 2017). Their network is divided into a spatial super-resolution CNN and an angular super-resolution CNN, but the model does not make full use of the information shared among the multi-view images, so it can neither produce high-resolution reconstructions nor quickly produce reconstructed images that are free of artifacts and have a uniform intensity distribution.
Summary of the Invention

To solve the problems of artifacts, non-uniform resolution, and slow reconstruction in current microscopic imaging, the present invention provides a microscopic imaging system and method based on deep learning and light field imaging. The technical solution is as follows.

A first object of the present invention is to provide a microscopic imaging system based on deep learning and light field imaging. The microscopic imaging system comprises, connected in sequence, a microscope system, a deep learning network module, and an image output module.

The microscope system is used to acquire multiple items of two-dimensional image data and comprises: a microscope objective 1, a first dichroic mirror 2, a lens 3, a second dichroic mirror 4, a band-pass filter 5, a first camera sensor 6, a microlens array 7, a relay lens 8, a second dichroic mirror 9, a second camera sensor 10, and a third camera sensor 11.

The microscope objective 1 collects image data, which is filtered by the first dichroic mirror 2 to remove interfering light and refracted and focused by the lens 3; the light then passes through the second dichroic mirror 4 so that part of the collected signal is directed to the first camera sensor 6 for wide-field imaging, while the other part passes through the microlens array 7, the relay lens 8, and the second dichroic mirror 9 in turn, and light field imaging is performed on the second camera sensor 10 and the third camera sensor 11, respectively.

The deep learning network module is used to reconstruct the two-dimensional image data captured by the first camera sensor 6, the second camera sensor 10, and the third camera sensor 11 into a high-resolution three-dimensional image.

The image output module is used to output the reconstructed high-resolution three-dimensional image.
Optionally, the deep learning network module uses a trained VCD deep network (VCD-Net) to reconstruct the high-resolution three-dimensional image.

Optionally, the first camera sensor 6, the second camera sensor 10, and the third camera sensor 11 are sCMOS cameras.

Optionally, the training process of the VCD deep network VCD-Net comprises:

Step 1: initialize VCD-Net, including the network parameters and the loss function;

Step 2: acquire high-resolution three-dimensional images of real static samples and of synthetic data with a confocal microscope;

Step 3: construct a wave optics model, input the high-resolution three-dimensional images obtained in Step 2 into the wave optics model, and output the corresponding two-dimensional images;

Step 4: construct a training set and a test set from the two-dimensional images obtained in Step 3 and the high-resolution three-dimensional images obtained in Step 2, take the two-dimensional images as input and the high-resolution three-dimensional images as output, and train the VCD-Net until convergence to obtain the optimal VCD-Net model.

Optionally, when light field imaging is performed on the sample, a 1:1 relay lens 8 is used to focus the second camera sensor 10 and the third camera sensor 11 on the back focal plane of the microlens array 7.
Optionally, the wave optics model is:

F = Hg

where the vector F denotes the captured raw light field image, the vector g denotes the reconstructed 3D discrete point cloud of the object, and H is the point spread function matrix of the imaging process.
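Purely for illustration, the discrete model above amounts to a single matrix-vector product; the array sizes in the following sketch are hypothetical and only show how F, g, and H relate.

```python
import numpy as np

# Hypothetical discretization: a volume of Nx*Ny*Nz voxels imaged onto a
# light-field sensor with Mx*My pixels. All sizes are illustrative only.
Nx, Ny, Nz = 16, 16, 8           # reconstructed volume grid
Mx, My = 32, 32                  # raw light-field image size

g = np.random.rand(Nx * Ny * Nz)             # vectorized 3D sample (discrete point cloud)
H = np.random.rand(Mx * My, Nx * Ny * Nz)    # PSF matrix of the imaging process

F = H @ g                                    # forward projection: raw light-field image
lf_image = F.reshape(Mx, My)                 # back to 2D sensor coordinates
print(lf_image.shape)                        # (32, 32)
```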
A second object of the present invention is to provide a microscopic imaging method based on deep learning and light field imaging, comprising:

Step 1: collect image data with a microscope objective, and feed the image data to the multiple camera sensors in the microscope system and to the sensor formed by the microlens array and a CCD to perform wide-field imaging and light field imaging, respectively, obtaining multiple two-dimensional images;

Step 2: input the two-dimensional images obtained in Step 1 into a trained deep neural network, and obtain a reconstructed high-resolution three-dimensional image from the trained deep neural network.

Optionally, the microscope system comprises: a microscope objective 1, a first dichroic mirror 2, a lens 3, a beam splitter 4, a band-pass filter 5, a first camera sensor 6, a microlens array 7, a relay lens 8, a second dichroic mirror 9, a second camera sensor 10, and a third camera sensor 11.

The microscope objective 1 collects image data, which is filtered by the first dichroic mirror 2 to remove interfering light and refracted and focused by the lens 3; the light then passes through the beam splitter 4 so that part of the collected signal is directed to the first camera sensor 6 for wide-field imaging, while the other part passes through the microlens array 7, the relay lens 8, and the second dichroic mirror 9 in turn, and light field imaging is performed on the second camera sensor 10 and the third camera sensor 11, respectively.

Optionally, the deep neural network is a VCD-Net, which reconstructs the two-dimensional image data captured by the first camera sensor 6, the second camera sensor 10, and the third camera sensor 11 into a high-resolution three-dimensional image.

Optionally, the training process of the VCD-Net comprises:

Step 1: build and initialize VCD-Net;

Step 2: acquire high-resolution three-dimensional images of real static samples and their synthetic data with a confocal microscope;

Step 3: construct a wave optics model, input the high-resolution three-dimensional images obtained in Step 2 into the wave optics model, and output the corresponding two-dimensional images;

Step 4: construct a training set and a test set from the two-dimensional images obtained in Step 3 and the high-resolution three-dimensional images obtained in Step 2, take the two-dimensional images as input and the high-resolution three-dimensional images as output, and train the VCD-Net until convergence to obtain the optimal VCD-Net model.
The beneficial effects of the present invention are as follows.

The present invention builds a light field microscopic imaging system based on light field imaging and a deep learning network. Multiple two-dimensional images are acquired by a microscope system composed of a microlens array and camera sensors, and the deep learning network then extracts multi-view information from these two-dimensional images to perform super-resolution reconstruction and obtain a high-resolution three-dimensional image. Compared with existing super-resolution reconstruction methods based on convolutional neural networks, the images produced by the present invention have higher spatial resolution (1.0+0.15 μm), smaller reconstruction artifacts, and higher reconstruction throughput (200 Hz), effectively improving imaging clarity. Under the lens array system, the deep-neural-network-based super-resolution light field image generation model of the present invention also offers the advantages of low cost, low system complexity, and scan-free super-resolution imaging, effectively increasing reconstruction speed.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present invention more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic plan view of a microscopic imaging system according to an embodiment of the present invention.

In the figure: 1 - microscope objective; 2 - first dichroic mirror; 3 - reflecting mirror; 4 - beam splitter; 5 - band-pass filter; 6 - first camera sensor; 7 - microlens array; 8 - relay lens; 9 - second dichroic mirror; 10 - second camera sensor; 11 - third camera sensor.
Detailed Description of the Embodiments

To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.

First, the basic theory involved in the present invention is introduced as follows.
VCD-Net:

In a general convolutional neural network (CNN), the N-th convolutional layer receives feature maps from the preceding (N-1)-th layer and uses different convolution kernels to generate new feature maps, and the network finally produces a multi-channel output in which each channel is a nonlinear combination of the original input. This concept is similar to the digital refocusing algorithm in light field photography, where each synthetic plane of the reconstructed volume can be understood as a superposition of different views extracted from the light field. By cascading layers, the model is expected to gradually convert the raw angular information in the light field image into depth features, eventually forming a conventional 3D image stack and reconstructing the scene. In the implementation, the customized VCD-Net is based on a modified U-Net architecture (see https://cloud.tencent.com/developer/article/1520224). It contains a down-sampling path and a symmetric up-sampling path; along both paths, each layer has three parameters n, f, and s, which denote the number of output channels, the filter size of the convolution kernel, and the stride of the moving kernel, respectively.
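The patent does not disclose the exact layer configuration, so the following is only a schematic Keras-style sketch of such a down-/up-sampling network; every channel count (n), kernel size (f), and stride (s), as well as the rearrangement of the raw light field into view channels and the number of output depth planes, is a hypothetical choice made for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_vcd_like_net(lf_size=176, n_views=11, n_depths=61):
    """Hypothetical VCD-Net-style encoder/decoder.

    Input : raw light-field image rearranged into n_views*n_views view channels.
    Output: a stack of n_depths planes (the reconstructed 3D volume).
    All (n, f, s) values below are illustrative, not the patented configuration.
    """
    inp = layers.Input(shape=(lf_size, lf_size, n_views * n_views))

    # Down-sampling path: each block is Conv2D with (n=filters, f=kernel, s=stride)
    x = layers.Conv2D(128, 3, strides=1, padding="same", activation="relu")(inp)
    skip = x
    x = layers.Conv2D(256, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(512, 3, strides=2, padding="same", activation="relu")(x)

    # Symmetric up-sampling path with a skip connection (U-Net style)
    x = layers.Conv2DTranspose(256, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Concatenate()([x, skip])

    # Final layer: one output channel per reconstructed depth plane
    out = layers.Conv2D(n_depths, 3, strides=1, padding="same", activation="sigmoid")(x)
    return Model(inp, out, name="vcd_like_net")

model = build_vcd_like_net()
model.summary()
```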
Embodiment 1:

This embodiment provides a microscopic imaging system based on deep learning and light field imaging. The microscopic imaging system comprises, connected in sequence, a microscope system, a deep learning network module, and an image output module.

The microscope system is used to acquire multiple items of two-dimensional image data and, referring to Fig. 1, comprises: a microscope objective 1, a first dichroic mirror 2, a reflecting mirror 3, a beam splitter 4, a band-pass filter 5, a first camera sensor 6, a microlens array 7, a relay lens 8, a second dichroic mirror 9, a second camera sensor 10, and a third camera sensor 11.

The microscope objective 1 collects image data, which is filtered by the first dichroic mirror 2 to remove interfering light and redirected from the vertical to the horizontal direction by the reflecting mirror 3; the light then passes through the beam splitter 4 so that part of the collected signal is directed to the first camera sensor 6 for wide-field imaging, while the other part passes through the microlens array 7, the relay lens 8, and the second dichroic mirror 9 in turn, and light field imaging is performed on the second camera sensor 10 and the third camera sensor 11, respectively.

The deep learning network module is used to reconstruct the two-dimensional image data captured by the first camera sensor 6, the second camera sensor 10, and the third camera sensor 11 into a high-resolution three-dimensional image.

The image output module is used to output the reconstructed high-resolution three-dimensional image.
Embodiment 2:

This embodiment provides a microscopic imaging system based on deep learning and light field imaging. The microscopic imaging system comprises, connected in sequence, a microscope system, a deep learning network module, and an image output module.

The microscope system is used to acquire multiple items of two-dimensional image data and, referring to Fig. 1, comprises: a microscope objective 1, a first dichroic mirror 2, a reflecting mirror 3, a beam splitter 4, a band-pass filter 5, a first camera sensor 6, a microlens array 7, a relay lens 8, a second dichroic mirror 9, a second camera sensor 10, and a third camera sensor 11. All three camera sensors are sCMOS cameras.

The microscope objective 1 collects image data, which is filtered by the first dichroic mirror 2 to remove interfering light and redirected from the vertical to the horizontal direction by the reflecting mirror 3; the light then passes through the beam splitter 4 so that part of the collected signal is directed to the first camera sensor 6 for wide-field imaging, while the other part passes through the microlens array 7, the relay lens 8, and the second dichroic mirror 9 in turn, and light field imaging is performed on the second camera sensor 10 and the third camera sensor 11, respectively. During light field imaging, a 1:1 relay lens 8 focuses the second camera sensor 10 and the third camera sensor 11 on the back focal plane of the microlens array 7.

The deep learning network module uses a trained VCD deep network (VCD-Net) to reconstruct a high-resolution three-dimensional image from the two-dimensional image data captured by the first camera sensor 6, the second camera sensor 10, and the third camera sensor 11.

The training process of VCD-Net comprises:

Step 1: build and initialize VCD-Net. In this embodiment, the VCD-Net deep learning model is built with TensorFlow 1.15.0, TensorLayer 1.8.5, and Python 3 in a Windows 10 environment.

Step 2: acquire high-resolution three-dimensional images of static samples or synthetic data with a confocal microscope.

Step 3: construct a wave optics model, input the high-resolution three-dimensional images obtained in Step 2 into the wave optics model, and output the corresponding two-dimensional images.
The wave optics model is F = Hg, where the vector F denotes the light field, the vector g denotes the discrete volume being reconstructed, and H is the measurement matrix that models the forward imaging process; H is determined mainly by the point spread function of the light field microscope.

Wave optics is used to model the spatially varying point spread function of the optical microscope. The light field point spread function maps the transformation from a three-dimensional object to a two-dimensional plane and varies across space, so a distinct point spread function is considered for every point in the region of interest. To generate the point spread functions, scalar diffraction theory is used to compute the wavefronts of multiple points in the volume imaged through the microlens array. A simplified sketch of this forward projection is given below.
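The sketch below is illustrative only: it assumes an array of depth-dependent PSFs has already been computed (for example, from the scalar diffraction model) and approximates the forward projection F = Hg by convolving each depth slice with its PSF and summing onto the sensor, ignoring the lenslet-by-lenslet spatial variation captured by the full model.

```python
import numpy as np
from scipy.signal import fftconvolve

def project_volume_to_light_field(volume, psfs):
    """Simplified forward model F = Hg realized as per-depth convolutions.

    volume : (Nz, Ny, Nx) high-resolution 3D stack (e.g. a confocal image).
    psfs   : (Nz, Ky, Kx) light-field PSF for each depth plane (precomputed
             elsewhere with scalar diffraction theory; assumed given here).
    Returns a single 2D synthetic light-field image.
    """
    lf_image = np.zeros(volume.shape[1:])
    for z in range(volume.shape[0]):
        # Each depth slice is blurred by its own PSF and accumulated on the sensor.
        lf_image += fftconvolve(volume[z], psfs[z], mode="same")
    return lf_image

# Toy example with random data (real PSFs would come from the wave optics model)
volume = np.random.rand(61, 176, 176)
psfs = np.random.rand(61, 31, 31)
synthetic_lf = project_volume_to_light_field(volume, psfs)
print(synthetic_lf.shape)  # (176, 176)
```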
Step 4: construct a training set and a test set from the two-dimensional images obtained in Step 3 and the high-resolution three-dimensional images obtained in Step 2, take the two-dimensional images as input and the high-resolution three-dimensional images as output, and train the VCD-Net by iteratively minimizing the difference between its intermediate output and the reference high-resolution images. By setting an appropriate loss, for example the mean squared error of the pixel intensities, optimized kernel parameters are obtained for each layer, and the network converges efficiently to the optimal VCD-Net model.
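A minimal training sketch under the same assumptions, reusing the hypothetical build_vcd_like_net defined in the VCD-Net sketch above; the file names, optimizer, learning rate, batch size, and epoch count are illustrative placeholders rather than the inventors' settings.

```python
import numpy as np
import tensorflow as tf

# Hypothetical paired data produced by Steps 2-3 (file names are placeholders):
# lf_train : (N, 176, 176, 121) synthetic 2D light-field inputs
# vol_train: (N, 176, 176, 61)  high-resolution 3D target stacks
lf_train = np.load("lf_train.npy")
vol_train = np.load("vol_train.npy")

# build_vcd_like_net is the illustrative model sketched in the VCD-Net section above;
# any network mapping a 2D light-field frame to a depth stack could be used here.
model = build_vcd_like_net(lf_size=176, n_views=11, n_depths=61)

# Pixel-intensity mean squared error, as described in Step 4
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")

model.fit(lf_train, vol_train, batch_size=1, epochs=100, validation_split=0.1)
model.save("vcd_like_net.h5")
```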
By feeding the multiple two-dimensional images acquired by the microscope system (in this embodiment, images of Caenorhabditis elegans) into the trained VCD-Net model, the final high-resolution reconstructed image is obtained.

The image output module is used to output the reconstructed high-resolution three-dimensional image.

Embodiment 3:

This embodiment provides a microscopic imaging method based on deep learning and light field imaging, implemented with the microscopic imaging system based on deep learning and a microlens array described in Embodiment 2, and comprising the following steps:

Step 1: collect image data with a microscope objective, and feed the image data to the multiple camera sensors and the microlens array in the microscope system to perform wide-field imaging and light field imaging, respectively, obtaining multiple two-dimensional images.
Step 2: input the two-dimensional images obtained in Step 1 into the trained VCD-Net model, and obtain a reconstructed high-resolution three-dimensional image from the trained VCD-Net model.
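A brief inference sketch under the same assumptions as the training sketch in Embodiment 2; the file names and array shapes are placeholders for illustration.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("vcd_like_net.h5")  # model trained as in Embodiment 2

# A single raw light-field frame, already rearranged into view channels
# (hypothetical shape (176, 176, 121)); the file name is a placeholder.
lf_frame = np.load("lf_frame.npy")

volume = model.predict(lf_frame[np.newaxis, ...])[0]   # (176, 176, 61) depth stack
np.save("reconstructed_volume.npy", volume)
```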
Some of the steps in the embodiments of the present invention may be implemented in software, and the corresponding software programs may be stored in a readable storage medium such as an optical disc or a hard disk.

The above are only preferred embodiments of the present invention and are not intended to limit it. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210902191.5A CN115220211B (en) | 2022-07-29 | 2022-07-29 | Microscopic imaging system and method based on deep learning and light field imaging |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210902191.5A CN115220211B (en) | 2022-07-29 | 2022-07-29 | Microscopic imaging system and method based on deep learning and light field imaging |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115220211A true CN115220211A (en) | 2022-10-21 |
CN115220211B CN115220211B (en) | 2024-03-08 |
Family
ID=83614063
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210902191.5A Active CN115220211B (en) | 2022-07-29 | 2022-07-29 | Microscopic imaging system and method based on deep learning and light field imaging |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115220211B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106846463A (en) * | 2017-01-13 | 2017-06-13 | 清华大学 | Micro-image three-dimensional rebuilding method and system based on deep learning neutral net |
JP2018194634A (en) * | 2017-05-16 | 2018-12-06 | オリンパス株式会社 | Light field microscope |
CN109596588A (en) * | 2018-12-16 | 2019-04-09 | 华中科技大学 | A kind of high-resolution four-dimension light field micro imaging system based on mating plate illumination |
CN113383225A (en) * | 2018-12-26 | 2021-09-10 | 加利福尼亚大学董事会 | System and method for propagating two-dimensional fluorescence waves onto a surface using deep learning |
CN110441271A (en) * | 2019-07-15 | 2019-11-12 | 清华大学 | Light field high-resolution deconvolution method and system based on convolutional neural networks |
CN111105415A (en) * | 2019-12-31 | 2020-05-05 | 北京理工大学重庆创新中心 | A large-field-of-view image detection system and method for white blood cells based on deep learning |
CN114549318A (en) * | 2022-02-23 | 2022-05-27 | 复旦大学 | Super-resolution fluorescence microscopy imaging method based on sub-voxel convolutional neural network |
Non-Patent Citations (1)
Title |
---|
Li Haoyu et al.: "Deep-learning-based fluorescence microscopy imaging technology and applications", Laser & Optoelectronics Progress, vol. 58, no. 18, pages 1811007-1 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024113316A1 (en) * | 2022-12-01 | 2024-06-06 | 骆远 | Portable quantitative differential phase contrast microscopic module and deep learning reconstruction method therefor |
CN116609942A (en) * | 2023-07-18 | 2023-08-18 | 长春理工大学 | A sub-aperture compressive sensing polarization super-resolution imaging system and method |
CN116609942B (en) * | 2023-07-18 | 2023-09-22 | 长春理工大学 | Sub-aperture compressed sensing polarization super-resolution imaging method |
CN117934285A (en) * | 2024-02-04 | 2024-04-26 | 浙江荷湖科技有限公司 | A scanning-free high-resolution four-dimensional light field microscopic imaging method and system |
CN117876377A (en) * | 2024-03-13 | 2024-04-12 | 浙江荷湖科技有限公司 | Microscopic imaging general nerve extraction method based on large model |
CN117876377B (en) * | 2024-03-13 | 2024-05-28 | 浙江荷湖科技有限公司 | Microscopic imaging general nerve extraction method based on large model |
Also Published As
Publication number | Publication date |
---|---|
CN115220211B (en) | 2024-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115220211B (en) | Microscopic imaging system and method based on deep learning and light field imaging | |
US9426429B2 (en) | Scanning projective lensless microscope system | |
CN109884018B (en) | A method and system for submicron lensless microscopic imaging based on neural network | |
US11169367B2 (en) | Three-dimensional microscopic imaging method and system | |
US9679360B2 (en) | High-resolution light-field imaging | |
WO2020087966A1 (en) | Three-dimensional diffraction tomography microscopic imaging method based on led array coding illumination | |
WO2013018024A1 (en) | Apparatus and method for quantitative phase tomography through linear scanning with coherent and non-coherent detection | |
CN101865673B (en) | Microcosmic optical field acquisition and three-dimensional reconstruction method and device | |
CN108508588A (en) | A kind of multiple constraint information without lens holographic microphotography phase recovery method and its device | |
CN110349237B (en) | Fast volume imaging method based on convolutional neural network | |
CN106872408A (en) | A kind of planktonic organism imaging detection device | |
Yang et al. | Wide-field, high-resolution reconstruction in computational multi-aperture miniscope using a Fourier neural network | |
US20230247276A1 (en) | Re-imaging microscopy with micro-camera array | |
CN114529476B (en) | Phase retrieval method for lensless holographic microscopy based on decoupling-fusion network | |
CN206710306U (en) | A kind of planktonic organism imaging detection device | |
CN117197386A (en) | A single-click lensless optical diffraction tomography method and system | |
CN116109768A (en) | Super-resolution imaging method and system for Fourier light field microscope | |
Tian et al. | DeepLeMiN: Deep-learning-empowered Physics-aware Lensless Miniscope | |
WO2022173848A1 (en) | Methods of holographic image reconstruction with phase recovery and autofocusing using recurrent neural networks | |
Kauvar et al. | Aperture interference and the volumetric resolution of light field fluorescence microscopy | |
Bai et al. | HoloFormer: Contrastive Regularization Based Transformer for Holographic Image Reconstruction | |
CN114943806B (en) | Wide-angle light field three-dimensional imaging method and system based on multi-angle light field imaging device | |
Feshki et al. | Deep Learning Empowered Fresnel-based Lensless Fluorescence Microscopy | |
Memmolo et al. | Single-cell phase-contrast tomograms data encoded by 3D Zernike descriptors | |
Gregory et al. | A gigapixel computational light-field camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |