Disclosure of Invention
In view of the defects in the prior art, the present invention aims to provide a thick sample micro-fluorescence image reconstruction method and system.
The invention provides a thick sample micro-fluorescence image reconstruction method, which comprises the following steps:
data cube acquisition step: collecting the visual field images of different focal planes at equal intervals by the step length smaller than the depth of field of the microscope in the fluctuating range of the fluorescence slice in the same visual field to obtain a data cube;
a local grid dividing step: dividing the view image into different grids by utilizing a rectangle with set length and width based on the data cube, wherein the grids are not overlapped;
initial focusing position measurement: for one grid, constructing a definition evaluation function by using the brightness value of the brightest point in the grid and the variance of the brightness values of all pixels of the grid, determining a first focus position of each grid in a visual field, and establishing a grid confidence map by using the brightest point of each grid at the first focus position;
a focusing position correction step: correcting the first focusing position of each grid by using the grid confidence map and the information of the focusing positions of the grids around each grid so as to reduce the influence of the halo of the defocused fluorescence on the judgment of the focusing positions;
an information completion and correction step: refining the focusing position of each pixel point, and smoothing the edges of the grids in the position space and in the field-of-view image to obtain a preliminary reconstructed image;
fluorescence characteristic reconstruction: and carrying out fluorescence characteristic clustering on the pixel points in the preliminary reconstruction image to obtain pixel points occupied by each fluorescence characteristic, determining the size of the segmentation grid according to the number of the pixel points occupied by the fluorescence characteristics, combining the fluorescence characteristics with the same segmentation grid size into one fluorescence characteristic, enabling the size of the grid to be suitable for the fluorescence characteristics of various scales, reconstructing each fluorescence characteristic and obtaining a final reconstruction image.
Preferably, the focusing position initial measurement step includes:
a definition evaluation step: for the visual field images shot by the same grid in different focal planes, respectively calculating the brightness value of the brightest point of each visual field image in the grid and the variance of the brightness value of each pixel in the grid, and taking the product of the brightness value of the brightest point and the variance of the brightness value of each pixel as a definition evaluation function;
a grid focusing step: calculating the definition evaluation functions of the visual field images acquired by different focal planes in the same grid to obtain the position of the focal plane when the definition evaluation function is maximum, wherein the position is used as a first focusing position of the grid;
grid confidence step: and for each grid, acquiring a view field image at the first focus position, obtaining the brightness value of the brightest point of the view field image in the grid, comparing the brightness value of the brightest point with a first set threshold value, obtaining the confidence of the grid, and further establishing a confidence map.
Preferably, the in-focus position correction step includes:
a high confidence correction step: for a grid in a high-confidence region of the confidence map, calculating, among the surrounding high-confidence grids, the proportion of grids whose focusing positions differ from the first focusing position of the grid by more than a second set threshold; if the proportion is higher than a set value, calculating the average of the focusing positions of the surrounding grids and using it to correct the first focusing position of the grid;
a low confidence correction step: correcting a grid in a low-confidence region of the confidence map with the average focusing position of the surrounding high-confidence grids; if no high-confidence grid exists among the surrounding grids, the grid is not corrected.
Preferably, the information completion correcting step includes:
and (3) a precise focusing step: for each pixel in one grid, if the value of the pixel point at the first focusing position of the grid is greater than a third set threshold value, not correcting the first focusing position; if not, respectively taking out a first image at a first focusing position of the grid and a second image at a focusing position of surrounding grids of the grid to obtain values of the pixel points in the two images, if the values of the pixel points in the first image are larger than the values of the pixel points in the second image, not correcting the first focusing position, otherwise, correcting the first focusing position by using the focusing positions of the surrounding grids, and finally obtaining an accurate focusing position;
an edge smoothing step: and after the edge of the grid is subjected to interpolation smoothing based on the accurate focusing position, reconstructing the accurate focusing position and the data cube to obtain a preliminary reconstruction image.
Preferably, the fluorescence characteristic reconstruction step comprises:
fluorescence clustering step: clustering the pixel points of the fluorescence area in the preliminary reconstructed image by using an FOF algorithm, and aggregating the connected pixel points to form a fluorescence characteristic;
fluorescence merging step: determining the length and width of grid division according to the number of pixel points occupied by one fluorescence feature, combining the fluorescence features with the same length and width of the divided grids into a new fluorescence feature, and not changing the length and width of the grid division;
a filter construction step: based on the new fluorescence characteristics, assigning surrounding pixel points which do not belong to any new fluorescence characteristic to the nearest new fluorescence characteristic, iterating until all pixels are assigned, and constructing an information filter according to the pixel points of each new fluorescence characteristic;
fluorescence reconstruction step: carrying out image reconstruction on each new fluorescence feature by using the new fluorescence feature and the length and the width of the grid division to obtain a reconstructed image of each new fluorescence feature;
an image integration step: filtering the reconstructed image of each new fluorescence characteristic by using the information filter, and integrating the reconstructed images obtained from all the new fluorescence characteristics into a final reconstructed image.
The invention provides a thick sample micro-fluorescence image reconstruction system, which comprises the following modules:
the data cube acquisition module: collecting the visual field images of different focal planes at equal intervals by the step length smaller than the depth of field of the microscope in the fluctuating range of the fluorescence slice in the same visual field to obtain a data cube;
a local grid partitioning module: dividing the view image into different grids by utilizing a rectangle with set length and width based on the data cube, wherein the grids are not overlapped;
the focusing position initial measurement module: for one grid, constructing a definition evaluation function by using the brightness value of the brightest point in the grid and the variance of the brightness values of all pixels of the grid, determining a first focus position of each grid in a visual field, and establishing a grid confidence map by using the brightest point of each grid at the first focus position;
a focusing position correction module: correcting the first focusing position of each grid by using the grid confidence map and the information of the focusing positions of the grids around each grid so as to reduce the influence of the halo of the defocused fluorescence on the judgment of the focusing positions;
an information completion correction module: refining the focusing position of each pixel point, and smoothing the edges of the grids in the position space and in the field-of-view image to obtain a preliminary reconstructed image;
a fluorescence characteristic reconstruction module: and carrying out fluorescence characteristic clustering on the pixel points in the preliminary reconstruction image to obtain pixel points occupied by each fluorescence characteristic, determining the size of the segmentation grid according to the number of the pixel points occupied by the fluorescence characteristics, combining the fluorescence characteristics with the same segmentation grid size into one fluorescence characteristic, enabling the size of the grid to be suitable for the fluorescence characteristics of various scales, reconstructing each fluorescence characteristic and obtaining a final reconstruction image.
Preferably, the focusing position initial measurement module comprises:
a definition evaluation module: for the visual field images shot by the same grid in different focal planes, respectively calculating the brightness value of the brightest point of each visual field image in the grid and the variance of the brightness value of each pixel in the grid, and taking the product of the brightness value of the brightest point and the variance of the brightness value of each pixel as a definition evaluation function;
a grid focusing module: calculating the definition evaluation functions of the visual field images acquired by different focal planes in the same grid to obtain the position of the focal plane when the definition evaluation function is maximum, wherein the position is used as a first focusing position of the grid;
a grid confidence module: and for each grid, acquiring a view field image at the first focus position, obtaining the brightness value of the brightest point of the view field image in the grid, comparing the brightness value of the brightest point with a first set threshold value, obtaining the confidence of the grid, and further establishing a confidence map.
Preferably, the in-focus position correction module includes:
a high confidence correction module: for a grid in a high-confidence region of the confidence map, calculating, among the surrounding high-confidence grids, the proportion of grids whose focusing positions differ from the first focusing position of the grid by more than a second set threshold; if the proportion is higher than a set value, calculating the average of the focusing positions of the surrounding grids and using it to correct the first focusing position of the grid;
a low confidence correction module: correcting a grid in a low-confidence region of the confidence map with the average focusing position of the surrounding high-confidence grids; if no high-confidence grid exists among the surrounding grids, the grid is not corrected.
Preferably, the information completion correcting module includes:
a precision focusing module: for each pixel in one grid, if the value of the pixel point at the first focusing position of the grid is greater than a third set threshold value, not correcting the first focusing position; if not, respectively taking out a first image at a first focusing position of the grid and a second image at a focusing position of surrounding grids of the grid to obtain values of the pixel points in the two images, if the values of the pixel points in the first image are larger than the values of the pixel points in the second image, not correcting the first focusing position, otherwise, correcting the first focusing position by using the focusing positions of the surrounding grids, and finally obtaining an accurate focusing position;
an edge smoothing module: and after the edge of the grid is subjected to interpolation smoothing based on the accurate focusing position, reconstructing the accurate focusing position and the data cube to obtain a preliminary reconstruction image.
Preferably, the fluorescence characteristic reconstruction module comprises:
a fluorescence clustering module: clustering the pixel points of the fluorescence area in the preliminary reconstructed image by using an FOF algorithm, and aggregating the connected pixel points to form a fluorescence characteristic;
a fluorescence combining module: determining the length and width of grid division according to the number of pixel points occupied by one fluorescence feature, combining the fluorescence features with the same length and width of the divided grids into a new fluorescence feature, and not changing the length and width of the grid division;
a filter construction module: based on the new fluorescence characteristics, assigning surrounding pixel points which do not belong to any new fluorescence characteristic to the nearest new fluorescence characteristic, iterating until all pixels are assigned, and constructing an information filter according to the pixel points of each new fluorescence characteristic;
a fluorescence reconstruction module: carrying out image reconstruction on each new fluorescence feature by using the new fluorescence feature and the length and the width of the grid division to obtain a reconstructed image of each new fluorescence feature;
an image integration module: filtering the reconstructed image of each new fluorescence characteristic by using the information filter, and integrating the reconstructed images obtained from all the new fluorescence characteristics into a final reconstructed image.
Compared with the prior art, the invention has the following beneficial effects:
1. the method solves the problem of reconstructing images in which different positions of a thick sample within the same field of view lie on different focal planes, and improves the quality and overall sharpness of the reconstructed fluorescence image;
2. according to the method, the field of view is divided into grids, and a definition evaluation function is constructed from the brightness value of the brightest point in each grid and the variance of the brightness values of the pixels of the grid to determine the focusing position, so that the problem that the focusing position cannot be determined is solved.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications, which would be obvious to those skilled in the art, can be made without departing from the spirit of the invention, and all of these fall within the scope of the present invention.
The invention provides a thick sample micro-fluorescence image reconstruction method, which comprises the following steps:
data cube acquisition step: collecting the visual field images of different focal planes at equal intervals by the step length smaller than the depth of field of the microscope in the fluctuating range of the fluorescence slice in the same visual field to obtain a data cube;
a local grid dividing step: dividing the view image into different grids by utilizing a rectangle with set length and width based on the data cube, wherein the grids are not overlapped;
initial focusing position measurement: for one grid, constructing a definition evaluation function by using the brightness value of the brightest point in the grid and the variance of the brightness values of all pixels of the grid, determining a first focus position of each grid in a visual field, and establishing a grid confidence map by using the brightest point of each grid at the first focus position;
a focusing position correction step: correcting the first focusing position of each grid by using the grid confidence map and the information of the focusing positions of the grids around each grid so as to reduce the influence of the halo of the defocused fluorescence on the judgment of the focusing positions;
and (3) information completion and correction: the focusing position of each pixel point is accurate, and the edges of the grids are smoothed on the position space and the view field image to obtain a primary reconstructed image;
fluorescence characteristic reconstruction: and carrying out fluorescence characteristic clustering on the pixel points in the preliminary reconstruction image to obtain pixel points occupied by each fluorescence characteristic, determining the size of the segmentation grid according to the number of the pixel points occupied by the fluorescence characteristics, combining the fluorescence characteristics with the same segmentation grid size into one fluorescence characteristic, enabling the size of the grid to be suitable for the fluorescence characteristics of various scales, reconstructing each fluorescence characteristic and obtaining a final reconstruction image.
Specifically, the focusing position initial measurement step includes:
a definition evaluation step: for the visual field images shot by the same grid in different focal planes, respectively calculating the brightness value of the brightest point of each visual field image in the grid and the variance of the brightness value of each pixel in the grid, and taking the product of the brightness value of the brightest point and the variance of the brightness value of each pixel as a definition evaluation function;
a grid focusing step: calculating the definition evaluation functions of the visual field images acquired by different focal planes in the same grid to obtain the position of the focal plane when the definition evaluation function is maximum, wherein the position is used as a first focusing position of the grid;
grid confidence step: and for each grid, acquiring a view field image at the first focus position, obtaining the brightness value of the brightest point of the view field image in the grid, comparing the brightness value of the brightest point with a first set threshold value, obtaining the confidence of the grid, and further establishing a confidence map.
Specifically, the focus position correction step includes:
a high confidence correction step: for a grid in a high-confidence region of the confidence map, calculating, among the surrounding high-confidence grids, the proportion of grids whose focusing positions differ from the first focusing position of the grid by more than a second set threshold; if the proportion is higher than a set value, calculating the average of the focusing positions of the surrounding grids and using it to correct the first focusing position of the grid;
a low confidence correction step: correcting a grid in a low-confidence region of the confidence map with the average focusing position of the surrounding high-confidence grids; if no high-confidence grid exists among the surrounding grids, the grid is not corrected.
Specifically, the information completion correcting step includes:
and (3) a precise focusing step: for each pixel in one grid, if the value of the pixel point at the first focusing position of the grid is greater than a third set threshold value, not correcting the first focusing position; if not, respectively taking out a first image at a first focusing position of the grid and a second image at a focusing position of surrounding grids of the grid to obtain values of the pixel points in the two images, if the values of the pixel points in the first image are larger than the values of the pixel points in the second image, not correcting the first focusing position, otherwise, correcting the first focusing position by using the focusing positions of the surrounding grids, and finally obtaining an accurate focusing position;
an edge smoothing step: and after the edge of the grid is subjected to interpolation smoothing based on the accurate focusing position, reconstructing the accurate focusing position and the data cube to obtain a preliminary reconstruction image.
Specifically, the fluorescence characteristic reconstructing step includes:
fluorescence clustering step: clustering the pixel points of the fluorescence area in the preliminary reconstructed image by using an FOF algorithm, and aggregating the connected pixel points to form a fluorescence characteristic;
fluorescence merging step: determining the length and width of grid division according to the number of pixel points occupied by one fluorescence feature, combining the fluorescence features with the same length and width of the divided grids into a new fluorescence feature, and not changing the length and width of the grid division;
a filter construction step: based on the new fluorescence characteristics, assigning surrounding pixel points which do not belong to any new fluorescence characteristic to the nearest new fluorescence characteristic, iterating until all pixels are assigned, and constructing an information filter according to the pixel points of each new fluorescence characteristic;
fluorescence reconstruction step: carrying out image reconstruction on each new fluorescence feature by using the new fluorescence feature and the length and the width of the grid division to obtain a reconstructed image of each new fluorescence feature;
an image integration step: filtering the reconstructed image of each new fluorescence characteristic by using the information filter, and integrating the reconstructed images obtained from all the new fluorescence characteristics into a final reconstructed image.
The invention provides a thick sample micro-fluorescence image reconstruction system, which comprises the following modules:
the data cube acquisition module: collecting the visual field images of different focal planes at equal intervals by the step length smaller than the depth of field of the microscope in the fluctuating range of the fluorescence slice in the same visual field to obtain a data cube;
a local grid partitioning module: dividing the view image into different grids by utilizing a rectangle with set length and width based on the data cube, wherein the grids are not overlapped;
the focusing position initial measurement module: for one grid, constructing a definition evaluation function by using the brightness value of the brightest point in the grid and the variance of the brightness values of all pixels of the grid, determining a first focus position of each grid in a visual field, and establishing a grid confidence map by using the brightest point of each grid at the first focus position;
a focusing position correction module: correcting the first focusing position of each grid by using the grid confidence map and the information of the focusing positions of the grids around each grid so as to reduce the influence of the halo of the defocused fluorescence on the judgment of the focusing positions;
an information completion correction module: refining the focusing position of each pixel point, and smoothing the edges of the grids in the position space and in the field-of-view image to obtain a preliminary reconstructed image;
a fluorescence characteristic reconstruction module: and carrying out fluorescence characteristic clustering on the pixel points in the preliminary reconstruction image to obtain pixel points occupied by each fluorescence characteristic, determining the size of the segmentation grid according to the number of the pixel points occupied by the fluorescence characteristics, combining the fluorescence characteristics with the same segmentation grid size into one fluorescence characteristic, enabling the size of the grid to be suitable for the fluorescence characteristics of various scales, reconstructing each fluorescence characteristic and obtaining a final reconstruction image.
Specifically, the focusing position initial measurement module includes:
a definition evaluation module: for the visual field images shot by the same grid in different focal planes, respectively calculating the brightness value of the brightest point of each visual field image in the grid and the variance of the brightness value of each pixel in the grid, and taking the product of the brightness value of the brightest point and the variance of the brightness value of each pixel as a definition evaluation function;
a grid focusing module: calculating the definition evaluation functions of the visual field images acquired by different focal planes in the same grid to obtain the position of the focal plane when the definition evaluation function is maximum, wherein the position is used as a first focusing position of the grid;
a grid confidence module: and for each grid, acquiring a view field image at the first focus position, obtaining the brightness value of the brightest point of the view field image in the grid, comparing the brightness value of the brightest point with a first set threshold value, obtaining the confidence of the grid, and further establishing a confidence map.
Specifically, the focus position correction module includes:
a high confidence correction module: for a grid in a high-confidence region of the confidence map, calculating, among the surrounding high-confidence grids, the proportion of grids whose focusing positions differ from the first focusing position of the grid by more than a second set threshold; if the proportion is higher than a set value, calculating the average of the focusing positions of the surrounding grids and using it to correct the first focusing position of the grid;
a low confidence correction module: correcting a grid in a low-confidence region of the confidence map with the average focusing position of the surrounding high-confidence grids; if no high-confidence grid exists among the surrounding grids, the grid is not corrected.
Specifically, the information completion correcting module includes:
a precision focusing module: for each pixel in one grid, if the value of the pixel point at the first focusing position of the grid is greater than a third set threshold value, not correcting the first focusing position; if not, respectively taking out a first image at a first focusing position of the grid and a second image at a focusing position of surrounding grids of the grid to obtain values of the pixel points in the two images, if the values of the pixel points in the first image are larger than the values of the pixel points in the second image, not correcting the first focusing position, otherwise, correcting the first focusing position by using the focusing positions of the surrounding grids, and finally obtaining an accurate focusing position;
an edge smoothing module: and after the edge of the grid is subjected to interpolation smoothing based on the accurate focusing position, reconstructing the accurate focusing position and the data cube to obtain a preliminary reconstruction image.
Specifically, the fluorescence characteristic reconstruction module includes:
a fluorescence clustering module: clustering the pixel points of the fluorescence area in the preliminary reconstructed image by using an FOF algorithm, and aggregating the connected pixel points to form a fluorescence characteristic;
a fluorescence combining module: determining the length and width of grid division according to the number of pixel points occupied by one fluorescence feature, combining the fluorescence features with the same length and width of the divided grids into a new fluorescence feature, and not changing the length and width of the grid division;
a filter construction module: based on the new fluorescence characteristics, assigning surrounding pixel points which do not belong to any new fluorescence characteristic to the nearest new fluorescence characteristic, iterating until all pixels are assigned, and constructing an information filter according to the pixel points of each new fluorescence characteristic;
a fluorescence reconstruction module: carrying out image reconstruction on each new fluorescence feature by using the new fluorescence feature and the length and the width of the grid division to obtain a reconstructed image of each new fluorescence feature;
an image integration module: filtering the reconstructed image of each new fluorescence characteristic by using the information filter, and integrating the reconstructed images obtained from all the new fluorescence characteristics into a final reconstructed image.
The thick sample micro-fluorescence image reconstruction system provided by the invention can be realized through the step flow of the thick sample micro-fluorescence image reconstruction method. The thick sample micro-fluorescence image reconstruction method can be understood as a preferred example of the thick sample micro-fluorescence image reconstruction system by those skilled in the art.
In a specific implementation, the invention proceeds by the following steps:
data cube acquisition step: in the same visual field, in the fluctuating range of the fluorescence slice, the images of different focal planes are collected at equal intervals by the step length smaller than the depth of field of the microscope, and a data cube is obtained.
A local grid dividing step: the field of view is divided into different grids by means of rectangles of a certain length and width.
Initial focusing position measurement: for one grid, a definition evaluation function is constructed from the brightness value of the brightest point in the grid and the variance of the brightness values of the pixels of the grid, the focusing position of each grid in the visual field is preliminarily determined, and a grid confidence map is established by using the brightest point of each grid in its focusing-position image.
A focusing position correction step: and correcting the focusing position of each grid by using the grid confidence map and the information of the focusing position of the grid around each grid, so as to reduce the influence of the halo of the defocused fluorescence on the judgment of the focusing position.
An information completion and correction step: an information competition mechanism is adopted to accurately determine the focusing position of each pixel point, and the edges of the grids are smoothed in the position space and in the image, so that the edge effect caused by grid segmentation is reduced.
Fluorescence characteristic reconstruction: and clustering the pixels of the preliminary reconstructed image by adopting an FOF algorithm to obtain the pixels occupied by each fluorescence characteristic, determining the size of the segmentation grid according to the number of the pixels occupied by the fluorescence characteristics, combining the fluorescence characteristics with the same segmentation grid size into one fluorescence characteristic, enabling the size of the grid to be suitable for the fluorescence characteristics of various scales, reconstructing each fluorescence characteristic and obtaining a final reconstructed image.
Wherein, the fluctuation range of the fluorescence slice refers to: the range between the focal plane corresponding to the deepest position of the fluorescence slice within the field of view and the focal plane corresponding to its shallowest position.
Collecting images of different focal planes at equal intervals with a step length smaller than the depth of field of the microscope to obtain the data cube refers to: setting the acquisition step length to be smaller than the depth of field of the fluorescence microscope, and acquiring images at different focal-plane positions with this same step length, thereby obtaining a data cube of the same visual field at different focal planes.
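By way of illustration only, the acquisition of such an equally spaced z-stack can be sketched as follows in Python; `move_focus` and `capture_image` are purely hypothetical placeholders for the actual microscope-control interface, and the factor 0.8 is merely one admissible choice of a step smaller than the depth of field.

```python
import numpy as np

def acquire_data_cube(z_min, z_max, depth_of_field, move_focus, capture_image):
    """Collect field-of-view images at equally spaced focal planes.

    z_min / z_max bound the fluctuation range of the fluorescence slice;
    the step is chosen smaller than the microscope depth of field so that
    no in-focus plane is skipped.  `move_focus` and `capture_image` stand
    in for the real acquisition hardware interface (assumed, not given).
    """
    step = 0.8 * depth_of_field            # any value < depth_of_field works
    z_positions = np.arange(z_min, z_max + step, step)
    frames = []
    for z in z_positions:
        move_focus(z)                      # move objective/stage to focal plane z
        frames.append(capture_image())     # 2-D grayscale image (H, W)
    # data cube: axis 0 indexes focal planes, axes 1-2 are image rows/columns
    return np.stack(frames, axis=0), z_positions
```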
The dividing of the field of view into different grids by using a rectangle with a certain length and width refers to: a rectangle with a certain length and width is set, the visual field is divided into different grids seamlessly, and no overlapped area exists between the grids.
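The seamless, non-overlapping partition into rectangular grids can be expressed, for illustration, by the following sketch; the grid height and width are free parameters chosen by the user.

```python
import numpy as np

def divide_into_grids(height, width, gh, gw):
    """Return the (row, col) slices of non-overlapping gh x gw grids that
    tile a height x width field of view without gaps; the last row and
    column of grids simply absorb any remainder."""
    grids = []
    for r0 in range(0, height, gh):
        for c0 in range(0, width, gw):
            grids.append((slice(r0, min(r0 + gh, height)),
                          slice(c0, min(c0 + gw, width))))
    return grids
```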
The construction of the definition evaluation function from the brightness value of the brightest point in the grid and the variance of the brightness values of the pixels in the grid refers to: for the images shot by the same grid at different focal planes, the value of the brightest point of each image within the grid and the variance of the brightness of the pixels within the grid are respectively calculated, and the product of the two is used as the definition evaluation function.
The preliminary determination of the focusing position of each grid in the field of view refers to: calculating the definition evaluation function of the images acquired at different focal planes within the same grid, and taking the focal-plane position at which the evaluation function is maximal as the initial focusing position of the grid.
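A minimal sketch of the two preceding operations, assuming the data cube is a NumPy array of shape (Z, H, W): the definition evaluation value of a grid is the product of its brightest-pixel value and its pixel-brightness variance, and the initial focusing position is the focal-plane index that maximizes this value. Names such as `definition_score` are illustrative only.

```python
import numpy as np

def definition_score(patch):
    """Definition (sharpness) evaluation: brightest-point value times the
    variance of all pixel brightness values within the grid."""
    return float(patch.max()) * float(patch.var())

def first_focus_position(data_cube, grid):
    """data_cube: (Z, H, W); grid: (row_slice, col_slice).
    Returns the focal-plane index with the largest evaluation value."""
    rs, cs = grid
    scores = [definition_score(data_cube[z][rs, cs])
              for z in range(data_cube.shape[0])]
    return int(np.argmax(scores))
```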
The establishment of the grid confidence map by using the brightest point of each grid at the focusing position refers to: for each grid, the image acquired at the obtained initial focusing position is taken, the value of the brightest point of that image within the grid is obtained and compared with a set threshold to give the confidence of the grid, and the confidence map is thereby obtained.
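One possible reading of the confidence map is sketched below as a binary map: a grid is marked high-confidence if the brightest pixel of its focusing-position image exceeds the set threshold, and low-confidence otherwise. The binary encoding is an assumption; the text only requires a comparison with the threshold.

```python
import numpy as np

def build_confidence_map(data_cube, grids, first_focus, threshold):
    """Return one confidence value per grid: True (high) if the brightest
    pixel of the grid, taken from the image at its first focusing
    position, exceeds `threshold`; False (low) otherwise."""
    confidence = []
    for grid, z in zip(grids, first_focus):
        rs, cs = grid
        confidence.append(data_cube[z][rs, cs].max() > threshold)
    return np.asarray(confidence)
```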
The correction of the focusing position of each grid by using the grid confidence map and the focusing-position information of the grids around each grid refers to the following. Using the confidence map, for each grid with high confidence, the surrounding high-confidence grids are examined and the proportion of those grids whose focusing positions differ from the focusing position of the grid by more than a certain threshold is calculated; if this proportion is higher than a set value, the average of the initial focusing positions of the surrounding grids is calculated and used to correct the focusing position of the grid. If the confidence of the grid is low, the average focusing position of the surrounding high-confidence grids is used directly for correction; if there is no high-confidence grid around it, the grid is not corrected.
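A sketch of this correction is given below under two assumptions that the text leaves open: the surrounding grids are taken to be the eight neighbours of a grid, and the correcting average is taken over the high-confidence neighbours.

```python
import numpy as np

def correct_focus(focus, confident, dist_thresh, ratio_thresh):
    """focus: 2-D array of first focusing positions, one per grid;
    confident: boolean confidence map of the same shape.
    Returns the corrected focusing positions.

    High-confidence grid: if too many high-confidence neighbours lie
    farther than `dist_thresh` focal planes away, replace the grid's
    position by the neighbour mean.  Low-confidence grid: replace it by
    the mean of its high-confidence neighbours when any exist."""
    corrected = focus.astype(float).copy()
    rows, cols = focus.shape
    for r in range(rows):
        for c in range(cols):
            # 8-connected neighbouring grids inside the field of view (assumption)
            nbr = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols]
            hi_nbr = [focus[p] for p in nbr if confident[p]]
            if not hi_nbr:
                continue                               # nothing to correct with
            if confident[r, c]:
                far = np.mean(np.abs(np.array(hi_nbr) - focus[r, c]) > dist_thresh)
                if far > ratio_thresh:                 # outlier among its neighbours
                    corrected[r, c] = np.mean(hi_nbr)
            else:
                corrected[r, c] = np.mean(hi_nbr)      # low confidence: use neighbours
    return corrected
```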
The influence of the defocus halo on the judgment of the focusing position is as follows: the halo generated on the focal plane by a defocused fluorescent feature spreads into areas without fluorescent features; because the halo is much brighter than those areas, it may be misidentified as an in-focus position.
The method of accurately determining the focusing position of each pixel point by the information competition mechanism is as follows: for each pixel in a grid, if the value of the pixel point in the image at the grid's focusing position is greater than a certain threshold value, the focusing position of the pixel point is the same as the focusing position of the grid. Otherwise, the image at the focusing position of the grid and the images at the focusing positions of the surrounding grids are taken out respectively, and the values of the pixel point in these images are obtained. If the former is larger, the focusing position of the pixel point is the same as that of the grid; if the latter is larger, the focusing position of the pixel point is the same as that of the corresponding surrounding grid. If several focusing positions meet the condition, the position that gives the pixel point the largest value is selected as the focusing position, so as to handle the situation that different pixels in the same grid have different focusing positions.
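The information competition mechanism may be sketched as follows; `grid_of` (grid index of each pixel), `grid_focus_map` (corrected focusing position of each grid) and `neighbors` (neighbouring grid indices) are assumed data structures produced by the earlier steps, not values given in the text.

```python
import numpy as np

def refine_pixel_focus(data_cube, grid_of, grid_focus_map, neighbors, bright_thresh):
    """Per-pixel 'information competition' sketch.

    data_cube      : (Z, H, W) stack of focal-plane images
    grid_of        : (H, W) array giving the grid index of each pixel
    grid_focus_map : corrected focusing position of each grid
    neighbors      : dict grid index -> list of neighbouring grid indices
    Returns an (H, W) map of per-pixel focusing positions."""
    Z, H, W = data_cube.shape
    pixel_focus = np.empty((H, W), dtype=int)
    for y in range(H):
        for x in range(W):
            g = grid_of[y, x]
            z_own = grid_focus_map[g]
            if data_cube[z_own, y, x] > bright_thresh:
                pixel_focus[y, x] = z_own          # bright enough: keep grid focus
                continue
            # compete: own grid focus vs. focusing positions of neighbouring grids
            candidates = [z_own] + [grid_focus_map[n] for n in neighbors[g]]
            values = [data_cube[z, y, x] for z in candidates]
            pixel_focus[y, x] = candidates[int(np.argmax(values))]
    return pixel_focus
```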
The method of smoothing the edges of the grids in the position space and in the image is as follows: the pixel focusing positions at the grid edges are interpolated so that the focusing position varies smoothly across the edges, and the pixel values at the grid edges are likewise interpolated so that the brightness varies smoothly across the edges.
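As an illustrative sketch, the smoothing and the subsequent reconstruction from the data cube can be written as below; Gaussian filtering of the per-pixel focusing-position map is used here as one possible form of interpolation, whereas the text itself only requires interpolation at the grid edges.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_focus_and_reconstruct(data_cube, pixel_focus, sigma=2.0):
    """Interpolation-smooth the per-pixel focusing-position map so that
    focus varies continuously across grid edges, then resample the data
    cube at the smoothed positions to form the preliminary image.
    (Gaussian smoothing is an assumed choice; intensity interpolation at
    the grid edges is omitted here for brevity.)"""
    Z, H, W = data_cube.shape
    smooth_focus = gaussian_filter(pixel_focus.astype(float), sigma)
    zz = np.clip(np.rint(smooth_focus).astype(int), 0, Z - 1)
    yy, xx = np.mgrid[0:H, 0:W]
    preliminary = data_cube[zz, yy, xx]   # each pixel taken from its own focal plane
    return preliminary, smooth_focus
```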
The step of clustering the pixels of the preliminary reconstructed image by the FOF algorithm to obtain the pixels occupied by each fluorescence characteristic is as follows: the preliminary reconstructed image is obtained through the foregoing steps, the FOF algorithm is used to cluster the pixel points of the fluorescence area in the image according to their positions in the visual field, and the connected pixel points are clustered together to form a fluorescence characteristic.
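For illustration, connected-component labelling is used in the sketch below as a simple stand-in for the FOF (friends-of-friends) clustering named in the text; with a linking length of one pixel the two coincide, and the fluorescence area is assumed to be obtained by a simple intensity threshold.

```python
import numpy as np
from scipy.ndimage import label

def cluster_fluorescence(preliminary, intensity_thresh):
    """Cluster fluorescent pixels of the preliminary image into features.
    Returns a label image (0 = background) and the pixel count per feature."""
    mask = preliminary > intensity_thresh        # fluorescence area (assumed thresholding)
    labels, n_features = label(mask)             # connect adjacent fluorescent pixels
    sizes = np.bincount(labels.ravel())[1:]      # pixels occupied by each feature
    return labels, sizes
```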
The size of the segmentation grid is determined by the number of the pixel points occupied by the fluorescence features, and the combination of the fluorescence features with the same segmentation grid size into one fluorescence feature means that: and determining the length and width of the grid division according to the number of the pixel points occupied by the obtained fluorescent features, and combining the fluorescent features with the same length and width of the divided grids into a new fluorescent feature without changing the size of the grid division.
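The text does not fix the numerical rule that maps a feature's pixel count to a grid size; the sketch below therefore uses a square-root rule purely as an assumption, and groups features that receive identical grid dimensions into new fluorescence features.

```python
import numpy as np

def grid_size_for(pixel_count):
    """Illustrative rule only: make the grid side comparable to the
    feature's linear extent (the text does not specify this mapping)."""
    side = max(8, int(np.sqrt(pixel_count)))     # floor of 8 px is an assumption
    return side, side                            # (length, width)

def merge_by_grid_size(sizes):
    """Group feature labels that receive identical grid dimensions;
    each group is treated as one new fluorescence feature."""
    groups = {}
    for feat_label, count in enumerate(sizes, start=1):
        groups.setdefault(grid_size_for(count), []).append(feat_label)
    return groups                                # {(gh, gw): [feature labels]}
```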
The step of reconstructing each fluorescence characteristic and obtaining the final reconstructed image is as follows: each obtained fluorescent feature is reconstructed on the original image with its corresponding grid size by repeating the foregoing steps, the information of other fluorescent features is filtered out by using the pixel points belonging to each fluorescent feature, and the images obtained from all the fluorescent features are integrated into the final reconstructed image.
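The final stage can be sketched as follows, with `reconstruct_feature` standing in for a rerun of the earlier grid-division, focusing and reconstruction steps for one feature at its own grid size: remaining pixels are handed to the nearest feature, each per-feature reconstruction is masked to that feature's own pixels (the information filter), and the masked images are summed into the final image. The nearest-feature assignment via a distance transform is an assumed realization.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def integrate_features(labels, reconstruct_feature):
    """labels: merged feature-label image (0 = not yet assigned);
    reconstruct_feature(label) is assumed to return a full-size image
    reconstructed with that feature's own grid size."""
    # hand every unassigned pixel to the nearest feature
    _, idx = distance_transform_edt(labels == 0, return_indices=True)
    full_labels = labels[idx[0], idx[1]]

    final = np.zeros(labels.shape, dtype=float)
    for feat in np.unique(full_labels):
        recon = reconstruct_feature(feat)                       # per-feature reconstruction
        final += np.where(full_labels == feat, recon, 0.0)      # information filter
    return final
```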
The technical solution of the present invention will be described in more detail with reference to the following embodiments.
As shown in fig. 1, the thick sample micro-fluorescence image reconstruction method is implemented as follows:
step 100: collecting images of different focal planes at equal intervals by a step length smaller than the depth of field of the microscope within the fluctuating range of the fluorescence slice in a visual field to obtain a data cube, comprising a step 110;
step 110: within the field of view, obtaining the focal-plane positions corresponding to the deepest and shallowest positions of the fluorescence slice, setting the moving step length of the shooting position to be smaller than the depth of field of the microscope, and collecting images of the field of view at the different focal planes at equal intervals;
step 200: dividing the field of view into different grids by using a rectangle with a certain length and width means that: setting a rectangle with a certain length and width, and seamlessly dividing a visual field into different grids without overlapped areas among the grids;
step 300: constructing a definition evaluation function from the brightness value of the brightest point in the grid and the variance of the brightness values of the pixels of the grid, preliminarily determining the focusing position of each grid in the visual field, and establishing a grid confidence map by using the brightest point of each grid in its focusing-position image, which comprises steps 310, 320 and 330;
step 310: for images shot by the same grid in different focal planes, respectively calculating the value of the brightest point of each image in the grid and the variance of the brightness of each pixel in the grid, and taking the product of the two as a definition evaluation function;
step 320: calculating the definition evaluation functions of the images acquired by different focal planes in the same grid by using the definition evaluation function in the step 310 to obtain the position of the focal plane with the maximum definition evaluation function as the initial focusing position of the grid;
step 330: for each grid, acquiring an image acquired by a focusing position by using the initial focusing position obtained in the step 320, obtaining a brightest point value of the image in the grid, and comparing the brightest point value with a set threshold value to obtain a confidence coefficient of the grid, so as to obtain a confidence coefficient map;
step 400: using the confidence map established in step 330 and the information of the focused positions of the grids around each grid, each focused position of the grid is corrected, including steps 410 and 420:
step 410: for a grid in a high-confidence area, among the surrounding high-confidence grids, calculating the proportion of grids whose focusing positions differ from the focusing position of the grid by more than a certain threshold; if the proportion is higher than a set value, calculating the average value of the initial focusing positions of those grids and correcting the focusing position of the grid;
step 420: for a grid in a low-confidence area, directly correcting it with the average focusing position of the surrounding high-confidence grids; a grid without any high-confidence grid around it is not corrected.
Step 500: the information competition mechanism is adopted to accurately focus the position of each pixel point, and the edge of the grid is smoothed in the position space and the image, and the method comprises the following steps of 510, 520 and 530:
step 510: after the correction in step 400, for each pixel in a grid, if the value of the pixel in the image of the focusing position of the grid is greater than a certain threshold, the focusing position of the pixel is the same as the focusing position of the grid. And conversely, respectively taking out the two images of the grid focusing position and the grid focusing position around the grid to obtain the value of the pixel point in the two images. If the former is larger than the latter, the focusing position of the pixel point is the same as that of the grid, and if the latter is larger than the former, the focusing position of the pixel point is the same as that of the surrounding grid. And if a plurality of focusing positions meeting the conditions appear, selecting the position which enables the value of the pixel point to be maximum as the focusing position.
Step 520: performing interpolation smoothing on the refined focusing position map processed in the step 510 at the edge of the grid by using the surrounding focusing positions;
step 530: reconstructing the focused position map and the data cube obtained after the processing in the step 520 into an image, and then performing interpolation smoothing on the edge of the grid;
step 600: performing fluorescence feature clustering by using the preliminary reconstructed image obtained in the step 500, determining the size of the segmentation grid according to the number of the pixel points occupied by the fluorescence features, combining the fluorescence features with the same segmentation grid size into one fluorescence feature, reconstructing each fluorescence feature and obtaining a final reconstructed image, wherein the steps comprise steps 610, 620, 630, 640 and 650;
step 610: obtaining a reconstructed image through steps 100 to 500, clustering pixel points in a fluorescence area in the image according to the positions of the pixel points in a visual field by using a FOF algorithm, and clustering the connected pixel points together to form a fluorescence characteristic;
step 620: determining the length and width of the grid division according to the number of the pixel points occupied by the fluorescent features obtained in the step 610, and combining the fluorescent features with the same length and width of the divided grids into a new fluorescent feature without changing the size of the grid division;
step 630: obtaining different fluorescence characteristics through the step 620, dividing surrounding pixel points which do not belong to any fluorescence characteristics to the nearest fluorescence characteristics, iterating the step until all pixels are distributed, and constructing an information filter according to the pixel point of each fluorescence characteristic;
step 640: repeating steps 100 through 500 for each fluorescence feature using each fluorescence feature obtained in step 630 and the corresponding grid size obtained in step 620, resulting in a reconstructed image of each fluorescence feature.
Step 650: and finally, filtering the reconstructed image obtained in the step 640 by using the filter of each fluorescence characteristic obtained in the step 630, and integrating the images obtained by all the fluorescence characteristics into a final reconstructed image.
The invention solves the problem of data cube acquisition by acquiring images of different focal planes at equal intervals with a step length smaller than the depth of field of the microscope within the fluctuation range of the fluorescence slice in the field of view; the method of dividing the visual field into grids and constructing a definition evaluation function from the brightness value of the brightest point in each grid and the variance of the brightness values of the pixels of the grid solves the problem that the focusing position cannot be determined; the problem of artificial textures produced by rasterization is solved by the information competition method; the method of performing fluorescence feature segmentation on the preliminary reconstructed image solves the problem that the size of the grid division does not match the size of the fluorescence features; and through the decomposition into the steps of data cube acquisition, local grid division, focusing position initial measurement, focusing position correction, information completion correction and fluorescence feature reconstruction, the reconstruction quality of fluorescence images of thick tissue slices is improved.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.