
CN110363734B - Method and system for reconstructing microscopic fluorescence images of thick samples


Info

Publication number
CN110363734B
Authority
CN
China
Prior art keywords
grid
fluorescence
image
focus position
confidence
Prior art date
Legal status
Active
Application number
CN201910568163.2A
Other languages
Chinese (zh)
Other versions
CN110363734A (en)
Inventor
谷朝臣
龚靖渝
吴开杰
关新平
Current Assignee
Shanghai Pu Huasen Biotechnology Co ltd
Original Assignee
Shanghai Jiao Tong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiao Tong University filed Critical Shanghai Jiao Tong University
Priority to CN201910568163.2A priority Critical patent/CN110363734B/en
Publication of CN110363734A publication Critical patent/CN110363734A/en
Application granted granted Critical
Publication of CN110363734B publication Critical patent/CN110363734B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10056 Microscopic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Microscopes, Condenser (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)

Abstract

The invention provides a method and system for reconstructing microscopic fluorescence images of thick samples. Field-of-view images at different focal planes are collected at equal intervals with a step size smaller than the depth of field of the microscope, yielding a data cube. The field-of-view image is divided into grids by a rectangle of set length and width; a sharpness evaluation function is constructed from the brightness of the brightest point in each grid and the variance of the grid's pixel brightness values, and a grid confidence map is established. After the first focus position of each grid has been corrected, the focus position of each pixel is refined, and the grid edges are smoothed in both the focus-position map and the field-of-view image to obtain a preliminary reconstructed image. The pixels of the preliminary reconstructed image are clustered into fluorescence features, the size of the division grid is determined by the number of pixels each feature occupies, and each feature is reconstructed to obtain the final reconstructed image. The method solves the problem that different positions of a thick sample within the same field of view lie on different focal planes, and improves the quality and overall sharpness of the reconstructed fluorescence image.

Description

Thick sample microscopic fluorescence image reconstruction method and system
Technical Field
The invention relates to the field of microscopy and image processing, and in particular to a method and a system for reconstructing microscopic fluorescence images of thick samples.
Background
Microscopic fluorescence imaging is an important means of quantitative analysis in the life sciences, but it is limited by the extremely small depth of field of a high-magnification objective lens: in thick-sample fluorescence microscopy the fluorescence features within a field of view cannot all be acquired in focus, and especially when different positions of a thick sample within the same field of view lie on different focal planes, fluorescence images are difficult to acquire and lack overall sharpness.
Prior art relevant to the present application is patent document CN1071111118A, which discloses an epi-illumination Fourier ptychographic imaging system and method for high-resolution imaging of thick samples, comprising: a variable illumination source configured to sequentially provide radiation at a plurality of incident angles; a first polarizer system configured to polarize radiation from the variable illumination source to a first polarization state incident on the sample; a light collection device configured to receive radiation emitted from the sample; a second polarizer system configured to receive radiation transmitted through the light collection device; a radiation detector configured to receive radiation from the second polarizer system; and a processor configured to determine a sequence of de-scattered surface intensity images from the first sequence of intensity images and the second sequence of intensity images.
Disclosure of Invention
In view of the defects in the prior art, the present invention aims to provide a method and a system for reconstructing microscopic fluorescence images of thick samples.
The invention provides a thick-sample microscopic fluorescence image reconstruction method, which comprises the following steps:
Data cube acquisition step: within the same field of view, over the depth range in which the fluorescent slice undulates, collect field-of-view images at different focal planes at equal intervals with a step size smaller than the depth of field of the microscope, obtaining a data cube;
Local grid division step: based on the data cube, divide the field-of-view image into different grids using a rectangle of set length and width, with no overlap between grids;
Initial focus position measurement step: for each grid, construct a sharpness evaluation function from the brightness value of the brightest point in the grid and the variance of the brightness values of all pixels in the grid, determine a first focus position of each grid in the field of view, and build a grid confidence map from the brightest point of each grid at its first focus position;
Focus position correction step: correct the first focus position of each grid using the grid confidence map and the focus positions of the grids surrounding it, so as to reduce the influence of the halo of defocused fluorescence on the focus position estimate;
Information completion and correction step: refine the focus position of each pixel, and smooth the grid edges both in the focus-position map and in the field-of-view image, obtaining a preliminary reconstructed image;
Fluorescence feature reconstruction step: cluster the pixels of the preliminary reconstructed image into fluorescence features to obtain the pixels occupied by each feature, determine the size of the division grid from the number of pixels occupied by each feature, merge features having the same division grid size into one feature so that the grid size adapts to fluorescence features of different scales, then reconstruct each feature and obtain the final reconstructed image.
Preferably, the initial focus position measurement step comprises:
Sharpness evaluation step: for the field-of-view images of the same grid captured at different focal planes, compute for each image the brightness value of the brightest point within the grid and the variance of the brightness values of the pixels in the grid, and take the product of the two as the sharpness evaluation function;
Grid focusing step: evaluate the sharpness evaluation function for the field-of-view images acquired at different focal planes within the same grid, and take the focal-plane position at which the function is maximal as the first focus position of the grid;
Grid confidence step: for each grid, take the field-of-view image at its first focus position, obtain the brightness value of the brightest point of that image within the grid, compare it with a first set threshold to obtain the confidence of the grid, and thereby build a confidence map.
Preferably, the focus position correction step comprises:
High-confidence correction step: for a grid in a high-confidence region of the confidence map, compute the proportion of the surrounding high-confidence grids whose focus positions differ from the first focus position of the grid by more than a second set threshold; if this proportion exceeds a set value, compute the mean focus position of the surrounding grids and correct the first focus position of the grid with it;
Low-confidence correction step: correct a grid in a low-confidence region of the confidence map with the mean focus position of the surrounding high-confidence grids; if no high-confidence grid exists among the surrounding grids, leave the grid uncorrected.
Preferably, the information completion and correction step comprises:
Precise focusing step: for each pixel in a grid, if the value of the pixel in the image at the first focus position of the grid is greater than a third set threshold, do not correct the first focus position; otherwise, take a first image at the first focus position of the grid and a second image at the focus position of a surrounding grid, read the value of the pixel in both images, and if the value in the first image is larger than in the second do not correct the first focus position, otherwise correct it with the focus position of the surrounding grid, finally obtaining a precise focus position for the pixel;
Edge smoothing step: after smoothing the grid edges of the precise focus positions by interpolation, reconstruct an image from the precise focus positions and the data cube to obtain the preliminary reconstructed image.
Preferably, the fluorescence feature reconstruction step comprises:
Fluorescence clustering step: cluster the pixels of the fluorescent regions of the preliminary reconstructed image with the FOF algorithm, aggregating connected pixels into a fluorescence feature;
Fluorescence merging step: determine the length and width of the grid division from the number of pixels occupied by a fluorescence feature, and merge the features whose division grids have the same length and width into a new fluorescence feature, without changing the grid length and width;
Filter construction step: based on the new fluorescence features, assign each surrounding pixel that does not belong to any new feature to the nearest new feature, iterating until all pixels have been assigned, and construct an information filter from the pixels of each new feature;
Fluorescence reconstruction step: reconstruct an image for each new fluorescence feature, using the feature and the length and width of its grid division, to obtain a reconstructed image of each new feature;
Image integration step: filter the reconstructed image of each new fluorescence feature with its information filter, and integrate the reconstructed images of all new features into the final reconstructed image.
The invention provides a thick-sample microscopic fluorescence image reconstruction system, which comprises the following modules:
Data cube acquisition module: within the same field of view, over the depth range in which the fluorescent slice undulates, collects field-of-view images at different focal planes at equal intervals with a step size smaller than the depth of field of the microscope, obtaining a data cube;
Local grid division module: based on the data cube, divides the field-of-view image into different grids using a rectangle of set length and width, with no overlap between grids;
Initial focus position measurement module: for each grid, constructs a sharpness evaluation function from the brightness value of the brightest point in the grid and the variance of the brightness values of all pixels in the grid, determines a first focus position of each grid in the field of view, and builds a grid confidence map from the brightest point of each grid at its first focus position;
Focus position correction module: corrects the first focus position of each grid using the grid confidence map and the focus positions of the grids surrounding it, so as to reduce the influence of the halo of defocused fluorescence on the focus position estimate;
Information completion and correction module: refines the focus position of each pixel, and smooths the grid edges both in the focus-position map and in the field-of-view image, obtaining a preliminary reconstructed image;
Fluorescence feature reconstruction module: clusters the pixels of the preliminary reconstructed image into fluorescence features to obtain the pixels occupied by each feature, determines the size of the division grid from the number of pixels occupied by each feature, merges features having the same division grid size into one feature so that the grid size adapts to fluorescence features of different scales, then reconstructs each feature and obtains the final reconstructed image.
Preferably, the initial focus position measurement module comprises:
Sharpness evaluation module: for the field-of-view images of the same grid captured at different focal planes, computes for each image the brightness value of the brightest point within the grid and the variance of the brightness values of the pixels in the grid, and takes the product of the two as the sharpness evaluation function;
Grid focusing module: evaluates the sharpness evaluation function for the field-of-view images acquired at different focal planes within the same grid, and takes the focal-plane position at which the function is maximal as the first focus position of the grid;
Grid confidence module: for each grid, takes the field-of-view image at its first focus position, obtains the brightness value of the brightest point of that image within the grid, compares it with a first set threshold to obtain the confidence of the grid, and thereby builds a confidence map.
Preferably, the focus position correction module comprises:
High-confidence correction module: for a grid in a high-confidence region of the confidence map, computes the proportion of the surrounding high-confidence grids whose focus positions differ from the first focus position of the grid by more than a second set threshold; if this proportion exceeds a set value, computes the mean focus position of the surrounding grids and corrects the first focus position of the grid with it;
Low-confidence correction module: corrects a grid in a low-confidence region of the confidence map with the mean focus position of the surrounding high-confidence grids; if no high-confidence grid exists among the surrounding grids, leaves the grid uncorrected.
Preferably, the information completion and correction module comprises:
Precise focusing module: for each pixel in a grid, if the value of the pixel in the image at the first focus position of the grid is greater than a third set threshold, does not correct the first focus position; otherwise, takes a first image at the first focus position of the grid and a second image at the focus position of a surrounding grid, reads the value of the pixel in both images, and if the value in the first image is larger than in the second does not correct the first focus position, otherwise corrects it with the focus position of the surrounding grid, finally obtaining a precise focus position for the pixel;
Edge smoothing module: after smoothing the grid edges of the precise focus positions by interpolation, reconstructs an image from the precise focus positions and the data cube to obtain the preliminary reconstructed image.
Preferably, the fluorescence feature reconstruction module comprises:
Fluorescence clustering module: clusters the pixels of the fluorescent regions of the preliminary reconstructed image with the FOF algorithm, aggregating connected pixels into a fluorescence feature;
Fluorescence merging module: determines the length and width of the grid division from the number of pixels occupied by a fluorescence feature, and merges the features whose division grids have the same length and width into a new fluorescence feature, without changing the grid length and width;
Filter construction module: based on the new fluorescence features, assigns each surrounding pixel that does not belong to any new feature to the nearest new feature, iterating until all pixels have been assigned, and constructs an information filter from the pixels of each new feature;
Fluorescence reconstruction module: reconstructs an image for each new fluorescence feature, using the feature and the length and width of its grid division, to obtain a reconstructed image of each new feature;
Image integration module: filters the reconstructed image of each new fluorescence feature with its information filter, and integrates the reconstructed images of all new features into the final reconstructed image.
Compared with the prior art, the invention has the following beneficial effects:
1. The method solves the problem of reconstructing images of a thick sample whose different positions within the same field of view lie on different focal planes, and improves the quality and overall sharpness of the reconstructed fluorescence image;
2. The method divides the field of view into grids and determines the focus position with a sharpness evaluation function built from the brightness of the brightest point in each grid and the variance of the grid's pixel brightness values, solving the problem that the focus position otherwise cannot be determined.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications that would be obvious to those skilled in the art can be made without departing from the spirit of the invention, and all of these fall within the scope of the present invention.
The invention provides a thick-sample microscopic fluorescence image reconstruction method, which comprises the following steps:
Data cube acquisition step: within the same field of view, over the depth range in which the fluorescent slice undulates, collect field-of-view images at different focal planes at equal intervals with a step size smaller than the depth of field of the microscope, obtaining a data cube;
Local grid division step: based on the data cube, divide the field-of-view image into different grids using a rectangle of set length and width, with no overlap between grids;
Initial focus position measurement step: for each grid, construct a sharpness evaluation function from the brightness value of the brightest point in the grid and the variance of the brightness values of all pixels in the grid, determine a first focus position of each grid in the field of view, and build a grid confidence map from the brightest point of each grid at its first focus position;
Focus position correction step: correct the first focus position of each grid using the grid confidence map and the focus positions of the grids surrounding it, so as to reduce the influence of the halo of defocused fluorescence on the focus position estimate;
Information completion and correction step: refine the focus position of each pixel, and smooth the grid edges both in the focus-position map and in the field-of-view image, obtaining a preliminary reconstructed image;
Fluorescence feature reconstruction step: cluster the pixels of the preliminary reconstructed image into fluorescence features to obtain the pixels occupied by each feature, determine the size of the division grid from the number of pixels occupied by each feature, merge features having the same division grid size into one feature so that the grid size adapts to fluorescence features of different scales, then reconstruct each feature and obtain the final reconstructed image.
Specifically, the initial focus position measurement step comprises:
Sharpness evaluation step: for the field-of-view images of the same grid captured at different focal planes, compute for each image the brightness value of the brightest point within the grid and the variance of the brightness values of the pixels in the grid, and take the product of the two as the sharpness evaluation function;
Grid focusing step: evaluate the sharpness evaluation function for the field-of-view images acquired at different focal planes within the same grid, and take the focal-plane position at which the function is maximal as the first focus position of the grid;
Grid confidence step: for each grid, take the field-of-view image at its first focus position, obtain the brightness value of the brightest point of that image within the grid, compare it with a first set threshold to obtain the confidence of the grid, and thereby build a confidence map.
Specifically, the focus position correction step comprises:
High-confidence correction step: for a grid in a high-confidence region of the confidence map, compute the proportion of the surrounding high-confidence grids whose focus positions differ from the first focus position of the grid by more than a second set threshold; if this proportion exceeds a set value, compute the mean focus position of the surrounding grids and correct the first focus position of the grid with it;
Low-confidence correction step: correct a grid in a low-confidence region of the confidence map with the mean focus position of the surrounding high-confidence grids; if no high-confidence grid exists among the surrounding grids, leave the grid uncorrected.
Specifically, the information completion and correction step comprises:
Precise focusing step: for each pixel in a grid, if the value of the pixel in the image at the first focus position of the grid is greater than a third set threshold, do not correct the first focus position; otherwise, take a first image at the first focus position of the grid and a second image at the focus position of a surrounding grid, read the value of the pixel in both images, and if the value in the first image is larger than in the second do not correct the first focus position, otherwise correct it with the focus position of the surrounding grid, finally obtaining a precise focus position for the pixel;
Edge smoothing step: after smoothing the grid edges of the precise focus positions by interpolation, reconstruct an image from the precise focus positions and the data cube to obtain the preliminary reconstructed image.
Specifically, the fluorescence feature reconstruction step comprises:
Fluorescence clustering step: cluster the pixels of the fluorescent regions of the preliminary reconstructed image with the FOF algorithm, aggregating connected pixels into a fluorescence feature;
Fluorescence merging step: determine the length and width of the grid division from the number of pixels occupied by a fluorescence feature, and merge the features whose division grids have the same length and width into a new fluorescence feature, without changing the grid length and width;
Filter construction step: based on the new fluorescence features, assign each surrounding pixel that does not belong to any new feature to the nearest new feature, iterating until all pixels have been assigned, and construct an information filter from the pixels of each new feature;
Fluorescence reconstruction step: reconstruct an image for each new fluorescence feature, using the feature and the length and width of its grid division, to obtain a reconstructed image of each new feature;
Image integration step: filter the reconstructed image of each new fluorescence feature with its information filter, and integrate the reconstructed images of all new features into the final reconstructed image.
The invention provides a thick-sample microscopic fluorescence image reconstruction system, which comprises the following modules:
Data cube acquisition module: within the same field of view, over the depth range in which the fluorescent slice undulates, collects field-of-view images at different focal planes at equal intervals with a step size smaller than the depth of field of the microscope, obtaining a data cube;
Local grid division module: based on the data cube, divides the field-of-view image into different grids using a rectangle of set length and width, with no overlap between grids;
Initial focus position measurement module: for each grid, constructs a sharpness evaluation function from the brightness value of the brightest point in the grid and the variance of the brightness values of all pixels in the grid, determines a first focus position of each grid in the field of view, and builds a grid confidence map from the brightest point of each grid at its first focus position;
Focus position correction module: corrects the first focus position of each grid using the grid confidence map and the focus positions of the grids surrounding it, so as to reduce the influence of the halo of defocused fluorescence on the focus position estimate;
Information completion and correction module: refines the focus position of each pixel, and smooths the grid edges both in the focus-position map and in the field-of-view image, obtaining a preliminary reconstructed image;
Fluorescence feature reconstruction module: clusters the pixels of the preliminary reconstructed image into fluorescence features to obtain the pixels occupied by each feature, determines the size of the division grid from the number of pixels occupied by each feature, merges features having the same division grid size into one feature so that the grid size adapts to fluorescence features of different scales, then reconstructs each feature and obtains the final reconstructed image.
Specifically, the initial focus position measurement module comprises:
Sharpness evaluation module: for the field-of-view images of the same grid captured at different focal planes, computes for each image the brightness value of the brightest point within the grid and the variance of the brightness values of the pixels in the grid, and takes the product of the two as the sharpness evaluation function;
Grid focusing module: evaluates the sharpness evaluation function for the field-of-view images acquired at different focal planes within the same grid, and takes the focal-plane position at which the function is maximal as the first focus position of the grid;
Grid confidence module: for each grid, takes the field-of-view image at its first focus position, obtains the brightness value of the brightest point of that image within the grid, compares it with a first set threshold to obtain the confidence of the grid, and thereby builds a confidence map.
Specifically, the focus position correction module comprises:
High-confidence correction module: for a grid in a high-confidence region of the confidence map, computes the proportion of the surrounding high-confidence grids whose focus positions differ from the first focus position of the grid by more than a second set threshold; if this proportion exceeds a set value, computes the mean focus position of the surrounding grids and corrects the first focus position of the grid with it;
Low-confidence correction module: corrects a grid in a low-confidence region of the confidence map with the mean focus position of the surrounding high-confidence grids; if no high-confidence grid exists among the surrounding grids, leaves the grid uncorrected.
Specifically, the information completion and correction module comprises:
Precise focusing module: for each pixel in a grid, if the value of the pixel in the image at the first focus position of the grid is greater than a third set threshold, does not correct the first focus position; otherwise, takes a first image at the first focus position of the grid and a second image at the focus position of a surrounding grid, reads the value of the pixel in both images, and if the value in the first image is larger than in the second does not correct the first focus position, otherwise corrects it with the focus position of the surrounding grid, finally obtaining a precise focus position for the pixel;
Edge smoothing module: after smoothing the grid edges of the precise focus positions by interpolation, reconstructs an image from the precise focus positions and the data cube to obtain the preliminary reconstructed image.
Specifically, the fluorescence feature reconstruction module comprises:
Fluorescence clustering module: clusters the pixels of the fluorescent regions of the preliminary reconstructed image with the FOF algorithm, aggregating connected pixels into a fluorescence feature;
Fluorescence merging module: determines the length and width of the grid division from the number of pixels occupied by a fluorescence feature, and merges the features whose division grids have the same length and width into a new fluorescence feature, without changing the grid length and width;
Filter construction module: based on the new fluorescence features, assigns each surrounding pixel that does not belong to any new feature to the nearest new feature, iterating until all pixels have been assigned, and constructs an information filter from the pixels of each new feature;
Fluorescence reconstruction module: reconstructs an image for each new fluorescence feature, using the feature and the length and width of its grid division, to obtain a reconstructed image of each new feature;
Image integration module: filters the reconstructed image of each new fluorescence feature with its information filter, and integrates the reconstructed images of all new features into the final reconstructed image.
The thick-sample microscopic fluorescence image reconstruction system provided by the invention can be realized through the steps of the thick-sample microscopic fluorescence image reconstruction method; those skilled in the art can understand the method as a preferred example of the system.
In a specific implementation, the invention proceeds by the following steps:
Data cube acquisition step: in the same field of view, within the depth range over which the fluorescent slice undulates, images at different focal planes are collected at equal intervals with a step size smaller than the depth of field of the microscope, yielding a data cube.
Local grid division step: the field of view is divided into different grids using a rectangle of set length and width.
Initial focus position measurement step: for each grid, a sharpness evaluation function is constructed from the brightness of the brightest point in the grid and the variance of the grid's pixel brightness values, the focus position of each grid in the field of view is determined preliminarily, and a grid confidence map is built from the brightest point of each grid in its focus-position image.
Focus position correction step: the focus position of each grid is corrected using the grid confidence map and the focus positions of the surrounding grids, reducing the influence of the halo of defocused fluorescence on the focus position estimate.
Information completion and correction step: an information competition mechanism refines the focus position of each pixel, and the grid edges are smoothed in the focus-position map and in the image, reducing the edge effects introduced by grid division.
Fluorescence feature reconstruction step: the pixels of the preliminary reconstructed image are clustered with the FOF algorithm to obtain the pixels occupied by each fluorescence feature, the size of the division grid is determined from the number of pixels occupied by each feature, features with the same division grid size are merged into one feature so that the grid size adapts to features of different scales, and each feature is reconstructed to obtain the final reconstructed image.
Here, the range over which the fluorescent slice undulates refers to the interval between the focal plane corresponding to the deepest position of the fluorescent slice within the field of view and the focal plane corresponding to its shallowest position.
Collecting images at different focal planes at equal intervals with a step size smaller than the depth of field of the microscope to obtain the data cube means: the acquisition step size is set smaller than the depth of field of the fluorescence microscope, and images are acquired at different focal-plane positions with this constant step size, yielding a data cube of the same field of view at different focal planes.
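To make the later steps concrete, the sketches added throughout this description use a common data-cube convention. The following Python fragment is a minimal illustration and not part of the patent: capture_image is a hypothetical routine that returns one field-of-view image at focal position z, and the resulting cube has shape (Z, H, W) with Z focal planes.

```python
import numpy as np

def acquire_data_cube(capture_image, z_min, z_max, step):
    """Collect field-of-view images at equally spaced focal positions
    (step assumed smaller than the microscope depth of field) and stack
    them into a data cube of shape (Z, H, W)."""
    z_positions = np.arange(z_min, z_max + step, step)
    frames = [capture_image(z) for z in z_positions]  # one 2-D image per focal plane
    return np.stack(frames, axis=0), z_positions
```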
Dividing the field of view into different grids using a rectangle of set length and width means: a rectangle of set length and width is defined, and the field of view is divided seamlessly into different grids with no overlapping regions between grids.
Constructing the sharpness evaluation function from the brightest point in the grid and the variance of the grid's pixel brightness values means: for the images of the same grid captured at different focal planes, the value of the brightest point of each image within the grid and the variance of the brightness of the pixels in the grid are computed, and their product is taken as the sharpness evaluation function.
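A minimal sketch of this sharpness measure, assuming the grid is passed in as a 2-D array of pixel brightness values (the function name is illustrative, not from the patent):

```python
import numpy as np

def grid_sharpness(grid_pixels: np.ndarray) -> float:
    """Sharpness of one grid in one focal-plane image: the brightness of the
    brightest pixel multiplied by the variance of all pixel brightness values."""
    return float(grid_pixels.max()) * float(grid_pixels.var())
```

Multiplying the peak brightness by the brightness variance favours focal planes in which the grid contains fluorescence that is both bright and high-contrast, which is what allows the maximum of this score over the focal planes to serve as the grid's first focus position.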
Preliminarily determining the focus position of each grid in the field of view means: the sharpness evaluation function is computed for the images acquired at different focal planes within the same grid, and the focal-plane position that maximizes the function is taken as the initial focus position of the grid.
Building the grid confidence map from the brightest point of each grid at its focus position means: for each grid, the image acquired at the initial focus position obtained above is taken, the value of the brightest point of that image within the grid is obtained and compared with a set threshold to give the confidence of the grid, and a confidence map is thereby obtained.
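Building on the two sketches above, the first focus position of every grid and the confidence map could be computed as follows; the grid size (gh, gw), the threshold t1 and the representation of confidence as a boolean are assumptions:

```python
import numpy as np

def first_focus_and_confidence(cube: np.ndarray, gh: int, gw: int, t1: float):
    """For every grid, pick the focal plane that maximizes the sharpness
    evaluation function (first focus position) and mark the grid as
    high-confidence when the brightest pixel at that plane exceeds t1."""
    Z, H, W = cube.shape
    ny, nx = H // gh, W // gw                    # number of grids; edge remainders ignored
    focus = np.zeros((ny, nx), dtype=int)
    confident = np.zeros((ny, nx), dtype=bool)
    for i in range(ny):
        for j in range(nx):
            block = cube[:, i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            scores = [grid_sharpness(block[z]) for z in range(Z)]  # from the sketch above
            focus[i, j] = int(np.argmax(scores))
            confident[i, j] = block[focus[i, j]].max() > t1
    return focus, confident
```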
Correcting the focus position of each grid using the grid confidence map and the focus positions of the surrounding grids means: using the confidence map, for each high-confidence grid the surrounding high-confidence grids are examined and the proportion of them whose focus positions lie more than a set threshold away from the focus position of the grid is computed; if this proportion is high, the mean of the initial focus positions of those grids is computed and used to correct the focus position of the grid. If the confidence of the grid is low, it is corrected directly with the mean focus position of the surrounding high-confidence grids; if there is no high-confidence grid around it, it is not corrected.
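One possible reading of this correction rule, reusing the arrays produced by the previous sketch and assuming 8-connected neighbouring grids; t2 (focus-distance threshold, in focal steps) and ratio (the "set value" for the proportion) are illustrative parameters:

```python
import numpy as np

def correct_focus(focus: np.ndarray, confident: np.ndarray,
                  t2: int, ratio: float) -> np.ndarray:
    """Correct each grid's first focus position using the confidence map
    and the focus positions of its (up to 8) surrounding grids."""
    ny, nx = focus.shape
    corrected = focus.copy()
    for i in range(ny):
        for j in range(nx):
            # collect the surrounding high-confidence grids
            nbrs = [focus[a, b]
                    for a in range(max(i - 1, 0), min(i + 2, ny))
                    for b in range(max(j - 1, 0), min(j + 2, nx))
                    if (a, b) != (i, j) and confident[a, b]]
            if not nbrs:
                continue                      # no high-confidence neighbour: keep as is
            nbrs = np.array(nbrs)
            if confident[i, j]:
                # high confidence: correct only if many neighbours disagree strongly
                far = np.mean(np.abs(nbrs - focus[i, j]) > t2)
                if far > ratio:
                    corrected[i, j] = int(round(nbrs.mean()))
            else:
                # low confidence: take the mean of the high-confidence neighbours
                corrected[i, j] = int(round(nbrs.mean()))
    return corrected
```

Whether the corrected position should be rounded to the nearest focal index or kept fractional is not specified by the patent; rounding is used here for simplicity.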
The influence of the defocus halo on the focus position estimate is the following: the halo produced on the focal plane by defocused fluorescent features spreads into regions without fluorescent features, and because the halo is much brighter than those regions, it can be misidentified as an in-focus position.
Refining the focus position of each pixel with an information competition mechanism means: for each pixel in a grid, if the value of the pixel in the image at the grid's focus position is greater than a set threshold, the focus position of the pixel is the same as that of the grid. Otherwise, the image at the grid's focus position and the images at the focus positions of the surrounding grids are taken, and the value of the pixel is read in each. If the former is larger, the pixel keeps the focus position of its grid; if the latter is larger, the pixel takes the focus position of the surrounding grid. If several focus positions satisfy the condition, the one that maximizes the value of the pixel is selected. This resolves the situation in which different pixels within the same grid have different focus positions.
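A sketch of the information-competition rule at the pixel level; t3 stands in for the threshold mentioned above, and treating the 8 surrounding grids as the competing candidates is an assumption about what "surrounding grids" means:

```python
import numpy as np

def per_pixel_focus(cube: np.ndarray, grid_focus: np.ndarray,
                    gh: int, gw: int, t3: float) -> np.ndarray:
    """Refine the focus position of every pixel.  A pixel keeps its grid's
    focus position if it is bright enough there; otherwise the candidate
    focus position (own grid or a surrounding grid) giving the largest
    pixel value wins."""
    Z, H, W = cube.shape
    ny, nx = grid_focus.shape
    pixel_focus = np.zeros((H, W), dtype=int)   # edge remainders keep plane 0 in this sketch
    for y in range(ny * gh):
        for x in range(nx * gw):
            i, j = y // gh, x // gw
            own = grid_focus[i, j]
            if cube[own, y, x] > t3:
                pixel_focus[y, x] = own
                continue
            # candidates: own grid plus the surrounding grids' focus positions
            cands = {grid_focus[a, b]
                     for a in range(max(i - 1, 0), min(i + 2, ny))
                     for b in range(max(j - 1, 0), min(j + 2, nx))}
            pixel_focus[y, x] = max(cands, key=lambda z: cube[z, y, x])
    return pixel_focus
```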
Smoothing the grid edges in the focus-position map and in the image means: the per-pixel focus positions at the grid boundaries are interpolated so that the focus-position map is smooth there, and the pixel values at the grid boundaries are likewise interpolated so that the image is smooth across the boundaries introduced by grid division.
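The patent only states that interpolation is used; as one simple stand-in, the following sketch smooths the per-pixel focus map with a uniform filter (suppressing jumps at grid boundaries) and then assembles the preliminary reconstructed image from the data cube. Smoothing of the pixel values themselves at the grid boundaries is omitted here for brevity:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_and_reconstruct(cube: np.ndarray, pixel_focus: np.ndarray,
                           smooth_size: int = 5) -> np.ndarray:
    """Smooth the per-pixel focus positions, then take each pixel from the
    focal plane nearest its smoothed focus position to form the
    preliminary reconstructed image."""
    Z, H, W = cube.shape
    smooth_focus = uniform_filter(pixel_focus.astype(float), size=smooth_size)
    zi = np.clip(np.rint(smooth_focus).astype(int), 0, Z - 1)
    ys, xs = np.indices((H, W))
    return cube[zi, ys, xs]              # pick each pixel from its focal plane
```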
Clustering the pixels of the preliminary reconstructed image with the FOF algorithm to obtain the pixels occupied by each fluorescence feature means: after the reconstructed image has been obtained by the above procedure, the FOF algorithm clusters the pixels of the fluorescent regions of the image according to their positions in the field of view, and connected pixels are aggregated into one fluorescence feature.
Determining the size of the division grid from the number of pixels occupied by a fluorescence feature, and merging features with the same division grid size into one feature, means: the length and width of the grid division are determined from the number of pixels occupied by the feature, and the features whose division grids have the same length and width are merged into a new fluorescence feature, without changing the size of the grid division.
Reconstructing each fluorescence feature and obtaining the final reconstructed image means: each fluorescence feature is reconstructed on the original image with the above method and its corresponding grid size, the pixels of each feature are used to filter out the information of the other features, and the images obtained for all features are integrated into the final reconstructed image.
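A sketch of the feature-level stage. The FOF clustering is approximated by connected-component labelling of the thresholded fluorescent regions, and the mapping from a feature's pixel count to a grid edge length is a placeholder rule; both are assumptions, since the patent does not give these details:

```python
import numpy as np
from scipy.ndimage import label

def cluster_features(recon: np.ndarray, fg_threshold: float):
    """Group connected fluorescent pixels of the preliminary reconstruction
    into features (stand-in for the FOF clustering of the patent)."""
    labels, n = label(recon > fg_threshold)   # 1..n feature labels, 0 = background
    return labels, n

def grid_size_for(pixel_count: int) -> int:
    """Illustrative rule: grid edge length grows with the square root of the
    feature's pixel count (the exact mapping is not given by the patent)."""
    return max(8, int(np.sqrt(pixel_count)) // 2)

def group_by_grid_size(labels: np.ndarray, n: int):
    """Merge features whose chosen grid size is identical; each group is
    re-run through the grid-based reconstruction with its own grid size."""
    sizes = {k: grid_size_for(int((labels == k).sum())) for k in range(1, n + 1)}
    groups = {}
    for k, g in sizes.items():
        groups.setdefault(g, []).append(k)
    return groups          # {grid size: [feature labels]}
```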
The technical solution of the present invention will be described in more detail with reference to the following embodiments.
As shown in FIG. 1, the thick-sample microscopic fluorescence image reconstruction method is implemented as follows:
Step 100: within the depth range over which the fluorescent slice undulates in the field of view, collect images at different focal planes at equal intervals with a step size smaller than the depth of field of the microscope to obtain a data cube, comprising step 110;
Step 110: within the field of view, obtain the focal-plane positions corresponding to the deepest and shallowest positions of the fluorescent slice, set the step of the imaging position to be smaller than the depth of field of the camera, and collect images of the field of view at the different focal planes at equal intervals;
Step 200: divide the field of view into different grids using a rectangle of set length and width, that is, define a rectangle of set length and width and divide the field of view seamlessly into grids with no overlapping regions between them;
Step 300: construct a sharpness evaluation function from the brightness of the brightest point in the grid and the variance of the grid's pixel brightness values, preliminarily determine the focus position of each grid in the field of view, and build a grid confidence map from the brightest point of each grid's focus-position image, comprising steps 310, 320 and 330;
Step 310: for the images of the same grid captured at different focal planes, compute the value of the brightest point of each image within the grid and the variance of the brightness of the pixels in the grid, and take their product as the sharpness evaluation function;
Step 320: using the sharpness evaluation function of step 310, evaluate the images acquired at different focal planes within the same grid and take the focal-plane position that maximizes the function as the initial focus position of the grid;
Step 330: for each grid, take the image acquired at the initial focus position obtained in step 320, obtain the value of the brightest point of that image within the grid, and compare it with a set threshold to obtain the confidence of the grid, thereby obtaining a confidence map;
Step 400: correct the focus position of each grid using the confidence map established in step 330 and the focus positions of the surrounding grids, comprising steps 410 and 420:
Step 410: for a high-confidence region, examine the surrounding high-confidence grids and compute the proportion of them whose focus positions lie more than a set threshold away from the focus position of the grid; if this proportion is high, compute the mean of their initial focus positions and correct the focus position of the grid;
Step 420: for a low-confidence region, correct it directly with the mean focus position of the surrounding high-confidence grids, and do not correct it if there is no high-confidence grid around it.
Step 500: refine the focus position of each pixel with the information competition mechanism and smooth the grid edges in the focus-position map and in the image, comprising steps 510, 520 and 530:
Step 510: after the correction of step 400, for each pixel in a grid, if the value of the pixel in the image at the grid's focus position is greater than a set threshold, the focus position of the pixel is the same as that of the grid; otherwise, take the image at the grid's focus position and the images at the focus positions of the surrounding grids and read the value of the pixel in each; if the former is larger, the pixel keeps the focus position of its grid, and if the latter is larger, the pixel takes the focus position of the surrounding grid; if several focus positions satisfy the condition, select the one that maximizes the value of the pixel;
Step 520: smooth the refined focus-position map of step 510 at the grid edges by interpolation from the surrounding focus positions;
Step 530: reconstruct an image from the focus-position map obtained in step 520 and the data cube, and then smooth the grid edges of the image by interpolation;
Step 600: perform fluorescence feature clustering on the preliminary reconstructed image obtained in step 500, determine the size of the division grid from the number of pixels occupied by each fluorescence feature, merge features with the same division grid size into one feature, reconstruct each feature, and obtain the final reconstructed image, comprising steps 610, 620, 630, 640 and 650;
Step 610: with the reconstructed image obtained from steps 100 to 500, cluster the pixels of the fluorescent regions according to their positions in the field of view using the FOF algorithm, aggregating connected pixels into one fluorescence feature;
Step 620: determine the length and width of the grid division from the number of pixels occupied by the fluorescence features obtained in step 610, and merge the features whose division grids have the same length and width into a new fluorescence feature, without changing the size of the grid division;
Step 630: with the features obtained in step 620, assign each surrounding pixel that does not belong to any feature to the nearest feature, iterate until all pixels have been assigned, and construct an information filter from the pixels of each feature;
Step 640: for each fluorescence feature obtained in step 630, repeat steps 100 to 500 with the corresponding grid size obtained in step 620, yielding a reconstructed image of each feature;
Step 650: filter the reconstructed image obtained in step 640 with the filter of each fluorescence feature obtained in step 630, and integrate the images obtained for all features into the final reconstructed image.
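Tying steps 100 to 650 together, a top-level driver might look roughly as follows. It reuses the helper functions from the earlier sketches; all thresholds and default grid sizes are illustrative, and the nearest-feature assignment and information filtering of steps 630 to 650 are simplified to a per-group pixel mask:

```python
import numpy as np

def reconstruct_thick_sample(cube, gh=64, gw=64, t1=50, t2=2, ratio=0.5,
                             t3=30, fg_threshold=20):
    """End-to-end sketch of the reconstruction pipeline described above
    (illustrative parameters; helpers come from the earlier sketches)."""
    focus, confident = first_focus_and_confidence(cube, gh, gw, t1)  # step 300
    focus = correct_focus(focus, confident, t2, ratio)               # step 400
    pix_focus = per_pixel_focus(cube, focus, gh, gw, t3)             # step 510
    prelim = smooth_and_reconstruct(cube, pix_focus)                 # steps 520-530
    labels, n = cluster_features(prelim, fg_threshold)               # step 610
    final = prelim.copy()
    for gsize, feats in group_by_grid_size(labels, n).items():       # steps 620-650
        f2, c2 = first_focus_and_confidence(cube, gsize, gsize, t1)
        f2 = correct_focus(f2, c2, t2, ratio)
        pf2 = per_pixel_focus(cube, f2, gsize, gsize, t3)
        recon_g = smooth_and_reconstruct(cube, pf2)
        mask = np.isin(labels, feats)   # keep only this group's feature pixels
        final[mask] = recon_g[mask]
    return final
```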
By collecting images at different focal planes at equal intervals with a step size smaller than the depth of field of the microscope, within the depth range over which the fluorescent slice undulates in the field of view, the invention solves the problem of data cube acquisition; by dividing the field of view into grids and determining the focus position with a sharpness evaluation function built from the brightness of the brightest point in each grid and the variance of the grid's pixel brightness values, it solves the problem that the focus position otherwise cannot be determined; the information competition method removes the artificial texture introduced by rasterization; and segmenting fluorescence features on the preliminary reconstructed image resolves the mismatch between the grid size and the size of the fluorescence features. Through the decomposition into data cube acquisition, local grid division, initial focus position measurement, focus position correction, information completion and correction, and fluorescence feature reconstruction, the reconstruction quality of fluorescence images of thick tissue slices is improved.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (8)

1. A method for reconstructing a microscopic fluorescence image of a thick sample, characterized in that it comprises the following steps:
a data cube acquisition step: within one field of view, and within the depth range over which the fluorescent section undulates, acquiring field-of-view images at different focal planes at equal intervals with a step size smaller than the depth of field of the microscope, to obtain a data cube;
a local grid division step: based on the data cube, dividing the field-of-view image into different grids using rectangles of a set length and width, with no overlap between grids;
an initial focus position measurement step: for each grid, constructing a sharpness evaluation function from the brightness value of the brightest point in the grid and the variance of the brightness values of the pixels in the grid, determining a first focus position for each grid in the field of view, and building a grid confidence map from the brightest point of each grid at its first focus position;
a focus position correction step: correcting the first focus position of each grid using the grid confidence map and the focus positions of the grids surrounding each grid, so as to reduce the influence of the halo of out-of-focus fluorescence on the determination of the focus position;
an information completion and correction step: refining the focus position of each pixel, and smoothing the grid edges in position space and in the field-of-view image, to obtain a preliminary reconstructed image;
a fluorescence feature reconstruction step: performing fluorescence feature clustering on the pixels of the preliminary reconstructed image to obtain the pixels occupied by each fluorescence feature, determining the size of the segmentation grid from the number of pixels occupied by the fluorescence feature, and merging fluorescence features having the same segmentation grid size into one fluorescence feature, so that the grid size adapts to fluorescence features of various scales; reconstructing each fluorescence feature to obtain the final reconstructed image;
wherein the information completion and correction step comprises:
a precise focusing step: for each pixel in a grid, if the value of the pixel at the first focus position of the grid is greater than a third set threshold, the first focus position is not corrected; otherwise, a first image at the first focus position of the grid and a second image at the focus position of a surrounding grid are taken, and the values of the pixel in the two images are obtained; if the value of the pixel in the first image is greater than the value of the pixel in the second image, the first focus position is not corrected; otherwise, the first focus position is corrected with the focus position of the surrounding grid, finally obtaining a precise focus position;
an edge smoothing step: after interpolating and smoothing the grid edges based on the precise focus positions, reconstructing from the precise focus positions and the data cube to obtain the preliminary reconstructed image; an information competition mechanism is used to refine the focus position of each pixel, and the grid edges are smoothed in position space and in the image, reducing the edge effect caused by grid division.

2. The method for reconstructing a microscopic fluorescence image of a thick sample according to claim 1, characterized in that the initial focus position measurement step comprises:
a sharpness evaluation step: for the field-of-view images of the same grid captured at different focal planes, calculating for each image the brightness value of the brightest point in the grid and the variance of the brightness values of the pixels in the grid, and taking the product of the brightest-point brightness value and the variance of the pixel brightness values as the sharpness evaluation function;
a grid focusing step: calculating the sharpness evaluation function of the field-of-view images acquired at different focal planes within the same grid, and taking the focal plane position at which the sharpness evaluation function is maximal as the first focus position of the grid;
a grid confidence step: for each grid, obtaining the field-of-view image at the first focus position, obtaining the brightness value of the brightest point of the field-of-view image in the grid, comparing the brightness value of the brightest point with a first set threshold to obtain the confidence of the grid, and further building the confidence map.

3. The method for reconstructing a microscopic fluorescence image of a thick sample according to claim 1, characterized in that the focus position correction step comprises:
a high-confidence correction step: for a grid in a high-confidence region of the confidence map, calculating the proportion of the surrounding high-confidence grids whose distance from the first focus position of the grid is greater than a second set threshold; if the proportion is higher than a set value, calculating the mean focus position of the surrounding grids and correcting the first focus position of the grid;
a low-confidence correction step: for a grid in a low-confidence region of the confidence map, correcting with the mean focus position of the surrounding high-confidence grids; if no high-confidence grid exists among the surrounding grids, no correction is made.

4. The method for reconstructing a microscopic fluorescence image of a thick sample according to claim 1, characterized in that the fluorescence feature reconstruction step comprises:
a fluorescence clustering step: clustering the pixels of the fluorescent regions of the preliminary reconstructed image with the FOF algorithm, and aggregating connected pixels into one fluorescence feature;
a fluorescence merging step: determining the length and width of the grid division according to the number of pixels occupied by a fluorescence feature, and merging fluorescence features whose divided grids have the same length and width into one new fluorescence feature, without changing the length and width of the grid division;
a filter construction step: based on the new fluorescence features, assigning surrounding pixels that do not belong to any new fluorescence feature to the nearest new fluorescence feature, iterating until all pixels have been assigned, and constructing an information filter from the pixels of each new fluorescence feature;
a fluorescence reconstruction step: performing image reconstruction for each new fluorescence feature using the new fluorescence feature and the length and width of its grid division, to obtain a reconstructed image of each new fluorescence feature;
an image integration step: after filtering the reconstructed image of each new fluorescence feature with the information filter, integrating the reconstructed images of all new fluorescence features into the final reconstructed image.

5. A system for reconstructing a microscopic fluorescence image of a thick sample, characterized in that it comprises the following modules:
a data cube acquisition module: within one field of view, and within the depth range over which the fluorescent section undulates, acquiring field-of-view images at different focal planes at equal intervals with a step size smaller than the depth of field of the microscope, to obtain a data cube;
a local grid division module: based on the data cube, dividing the field-of-view image into different grids using rectangles of a set length and width, with no overlap between grids;
an initial focus position measurement module: for each grid, constructing a sharpness evaluation function from the brightness value of the brightest point in the grid and the variance of the brightness values of the pixels in the grid, determining a first focus position for each grid in the field of view, and building a grid confidence map from the brightest point of each grid at its first focus position;
a focus position correction module: correcting the first focus position of each grid using the grid confidence map and the focus positions of the grids surrounding each grid, so as to reduce the influence of the halo of out-of-focus fluorescence on the determination of the focus position;
an information completion and correction module: refining the focus position of each pixel, and smoothing the grid edges in position space and in the field-of-view image, to obtain a preliminary reconstructed image;
a fluorescence feature reconstruction module: performing fluorescence feature clustering on the pixels of the preliminary reconstructed image to obtain the pixels occupied by each fluorescence feature, determining the size of the segmentation grid from the number of pixels occupied by the fluorescence feature, and merging fluorescence features having the same segmentation grid size into one fluorescence feature, so that the grid size adapts to fluorescence features of various scales; reconstructing each fluorescence feature to obtain the final reconstructed image;
wherein the information completion and correction module comprises:
a precise focusing module: for each pixel in a grid, if the value of the pixel at the first focus position of the grid is greater than a third set threshold, the first focus position is not corrected; otherwise, a first image at the first focus position of the grid and a second image at the focus position of a surrounding grid are taken, and the values of the pixel in the two images are obtained; if the value of the pixel in the first image is greater than the value of the pixel in the second image, the first focus position is not corrected; otherwise, the first focus position is corrected with the focus position of the surrounding grid, finally obtaining a precise focus position;
an edge smoothing module: after interpolating and smoothing the grid edges based on the precise focus positions, reconstructing from the precise focus positions and the data cube to obtain the preliminary reconstructed image; an information competition mechanism is used to refine the focus position of each pixel, and the grid edges are smoothed in position space and in the image, reducing the edge effect caused by grid division.

6. The system for reconstructing a microscopic fluorescence image of a thick sample according to claim 5, characterized in that the initial focus position measurement module comprises:
a sharpness evaluation module: for the field-of-view images of the same grid captured at different focal planes, calculating for each image the brightness value of the brightest point in the grid and the variance of the brightness values of the pixels in the grid, and taking the product of the brightest-point brightness value and the variance of the pixel brightness values as the sharpness evaluation function;
a grid focusing module: calculating the sharpness evaluation function of the field-of-view images acquired at different focal planes within the same grid, and taking the focal plane position at which the sharpness evaluation function is maximal as the first focus position of the grid;
a grid confidence module: for each grid, obtaining the field-of-view image at the first focus position, obtaining the brightness value of the brightest point of the field-of-view image in the grid, comparing the brightness value of the brightest point with a first set threshold to obtain the confidence of the grid, and further building the confidence map.

7. The system for reconstructing a microscopic fluorescence image of a thick sample according to claim 5, characterized in that the focus position correction module comprises:
a high-confidence correction module: for a grid in a high-confidence region of the confidence map, calculating the proportion of the surrounding high-confidence grids whose distance from the first focus position of the grid is greater than a second set threshold; if the proportion is higher than a set value, calculating the mean focus position of the surrounding grids and correcting the first focus position of the grid;
a low-confidence correction module: for a grid in a low-confidence region of the confidence map, correcting with the mean focus position of the surrounding high-confidence grids; if no high-confidence grid exists among the surrounding grids, no correction is made.

8. The system for reconstructing a microscopic fluorescence image of a thick sample according to claim 5, characterized in that the fluorescence feature reconstruction module comprises:
a fluorescence clustering module: clustering the pixels of the fluorescent regions of the preliminary reconstructed image with the FOF algorithm, and aggregating connected pixels into one fluorescence feature;
a fluorescence merging module: determining the length and width of the grid division according to the number of pixels occupied by a fluorescence feature, and merging fluorescence features whose divided grids have the same length and width into one new fluorescence feature, without changing the length and width of the grid division;
a filter construction module: based on the new fluorescence features, assigning surrounding pixels that do not belong to any new fluorescence feature to the nearest new fluorescence feature, iterating until all pixels have been assigned, and constructing an information filter from the pixels of each new fluorescence feature;
a fluorescence reconstruction module: performing image reconstruction for each new fluorescence feature using the new fluorescence feature and the length and width of its grid division, to obtain a reconstructed image of each new fluorescence feature;
an image integration module: after filtering the reconstructed image of each new fluorescence feature with the information filter, integrating the reconstructed images of all new fluorescence features into the final reconstructed image.
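The fluorescence feature reconstruction recited in claims 4 and 8 clusters connected fluorescent pixels and derives a grid size for re-division from each feature's pixel count. The sketch below illustrates that idea under stated assumptions: plain connected-component labelling stands in for the FOF algorithm named in the claims, and the intensity threshold and the pixel-count-to-grid-size mapping are hypothetical.

```python
import numpy as np
from scipy import ndimage

def fluorescence_features(preliminary, intensity_thresh=20.0):
    """Cluster fluorescent pixels of the preliminary reconstruction into
    features and derive a per-feature grid size from the feature's pixel count.
    """
    mask = preliminary > intensity_thresh
    labels, n = ndimage.label(mask)          # aggregate connected pixels
    features = []
    for k in range(1, n + 1):
        npix = int((labels == k).sum())
        # Grid edge grows with the square root of the feature area, rounded to
        # a power of two, so that the grid roughly tracks the feature's scale.
        grid = int(np.clip(2 ** int(np.ceil(np.log2(max(4.0, np.sqrt(npix))))),
                           8, 128))
        features.append({"label": k, "pixels": npix, "grid": grid})
    # Features that end up with the same grid size are grouped so they can be
    # reconstructed together with a common grid, as the claims describe.
    merged = {}
    for f in features:
        merged.setdefault(f["grid"], []).append(f["label"])
    return labels, merged
```

Grouping by grid size keeps small punctate features and large diffuse structures from being forced through the same fixed grid, which is the mismatch the feature-adaptive re-division is meant to remove.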
CN201910568163.2A 2019-06-27 2019-06-27 Method and system for reconstructing microscopic fluorescence images of thick samples Active CN110363734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910568163.2A CN110363734B (en) 2019-06-27 2019-06-27 Method and system for reconstructing microscopic fluorescence images of thick samples

Publications (2)

Publication Number Publication Date
CN110363734A CN110363734A (en) 2019-10-22
CN110363734B (en) 2021-07-13

Family

ID=68216180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910568163.2A Active CN110363734B (en) 2019-06-27 2019-06-27 Method and system for reconstructing microscopic fluorescence images of thick samples

Country Status (1)

Country Link
CN (1) CN110363734B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111212237B (en) * 2020-02-13 2021-10-22 中国科学院苏州生物医学工程技术研究所 Autofocus method for bioluminescence chips
CN111787233B (en) * 2020-08-05 2021-10-29 湖南莱博赛医用机器人有限公司 Image acquisition method and device and electronic equipment
CN115953344B (en) * 2023-03-08 2023-05-30 上海聚跃检测技术有限公司 Image processing method, device, electronic equipment and storage medium
CN119027597B (en) * 2024-10-29 2024-12-24 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Structured light microscopic image reconstruction method and system based on deep learning

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101562701A (en) * 2009-03-25 2009-10-21 北京航空航天大学 Digital focusing method and digital focusing device used for optical field imaging
CN101900536A (en) * 2010-07-28 2010-12-01 西安交通大学 Measuring Method of Object Surface Topography Based on Digital Image Method
CN101976436A (en) * 2010-10-14 2011-02-16 西北工业大学 Pixel-level multi-focus image fusion method based on correction of differential image
CN103854265A (en) * 2012-12-03 2014-06-11 西安元朔科技有限公司 Novel multi-focus image fusion technology
CN103075960A (en) * 2012-12-30 2013-05-01 北京工业大学 Multi-visual-angle great-depth micro stereo visual-features fusion-measuring method
EP3122268B1 (en) * 2014-03-24 2019-01-02 Koninklijke Philips N.V. Quality assurance and data coordination for electromagnetic tracking systems
CN107392946A (en) * 2017-07-18 2017-11-24 宁波永新光学股份有限公司 A kind of micro- multiple focal length images series processing method rebuild towards 3D shape
CN109239900A (en) * 2018-11-07 2019-01-18 华东师范大学 A kind of full-automatic quick focusing method for the big visual field acquisition of microscopic digital image

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Enhanced image reconstruction of three-dimensional fluorescent assays by subtractive structured-light illumination microscopy; Jong-ryul Choi et al.; Optical Society of America; 31 Oct. 2012; Vol. 29, No. 10; pp. 2165-2173 *
High-resolution image reconstruction in fluorescence microscopy with patterned excitation; Rainer Heintzmann et al.; Applied Optics; 10 Jul. 2006; Vol. 45, No. 20; pp. 5037-5045 *
Research on automatic positioning technology for atomic force microscope probes based on visual inspection; Liu Jingyi; China Master's Theses Full-text Database, Information Science and Technology; 15 May 2017; No. 5; I138-768 *
Research on the auto-focusing system of a metrological ultraviolet optical microscope; Yin Chuanxiang et al.; Acta Metrologica Sinica; Dec. 2014; Vol. 35, No. 6A; pp. 26-30 *

Also Published As

Publication number Publication date
CN110363734A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN110363734B (en) Method and system for reconstructing microscopic fluorescence images of thick samples
US10462351B2 (en) Fast auto-focus in imaging
EP3988985B1 (en) Fast auto-focus in microscopic imaging
KR102582261B1 (en) Method for determining a point spread function of an imaging system
CN104637064A (en) Defocus blurred image definition detection method based on edge intensity weight
US11449964B2 (en) Image reconstruction method, device and microscopic imaging device
JP2013117848A (en) Image processing apparatus and image processing method
CN110646933A (en) Autofocus system and method based on multi-depth plane microscope
KR102253320B1 (en) Method for displaying 3 dimension image in integral imaging microscope system, and integral imaging microscope system implementing the same
WO2023005671A1 (en) Correction method and apparatus for large-field-of-view high-resolution light field microscopic system
JP2023542619A (en) Computer-implemented method for quality control of digital images of specimens
CN105072330A (en) An automatic focusing method for a line scan camera
JP2015108837A (en) Image processing apparatus and image processing method
EP2926558B1 (en) A method and system for extended depth of field calculation for microscopic images
JP2016051167A (en) Image acquisition device and control method therefor
CN116660173A (en) Image scanning method, terminal and storage medium for hyperspectral imaging technology
Kwon et al. All-in-focus imaging using average filter-based relative focus measure
CN116645418A (en) Screen button detection method and device based on 2D and 3D cameras and relevant medium thereof
US20160162753A1 (en) Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
CN112363309A (en) Automatic focusing method and system for pathological image under microscope
CN114730070A (en) Image processing method, image processing apparatus, and image processing system
CN118552551B (en) A method for automatically determining whether to remove noise
CN119863370B (en) Imaging focal length correction method for infrared imaging target simulation system
CN114067012B (en) Linear structure optical chromatography method, system and device based on strip intensity estimation
Tian et al. Computer Controlled Microscope Autofocus and Image Real-Time Fusion Technology.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220707

Address after: 201100 rooms 615 and 616, 6 / F, building 2, No. 322, Lane 953, Jianchuan Road, Minhang District, Shanghai

Patentee after: Shanghai Pu Huasen Biotechnology Co.,Ltd.

Address before: 200240 No. 800, Dongchuan Road, Shanghai, Minhang District

Patentee before: SHANGHAI JIAO TONG University