
CN110458923B - Method for quickly and accurately acquiring neuron body position of tissue sample - Google Patents


Info

Publication number
CN110458923B
Authority
CN
China
Prior art keywords
image
pixels
surface layer
imaging
strip
Prior art date
Legal status
Active
Application number
CN201811295425.4A
Other languages
Chinese (zh)
Other versions
CN110458923A (en)
Inventor
袁菁
李安安
钟秋园
龚辉
骆清铭
Current Assignee
Hust-Suzhou Institute For Brainsmatics
Original Assignee
Hust-Suzhou Institute For Brainsmatics
Priority date
Filing date
Publication date
Application filed by Hust-Suzhou Institute For Brainsmatics filed Critical Hust-Suzhou Institute For Brainsmatics
Priority to CN201811295425.4A
Publication of CN110458923A
Application granted
Publication of CN110458923B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures

Landscapes

  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Processing (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The invention discloses a method for quickly and accurately acquiring the neuron cell body positions of a tissue sample. Surface layer images with clear cell morphology are formed, and the cell position information of the surface layer images of partial or full thickness that have already been imaged and cut is computed during the next round of cutting and imaging. Because the time for the next round of cutting and imaging is longer than the time needed to compute this cell position information, and shorter than 5 times that computation time, only the cell position information of the last batch of surface layer images remains to be computed after the whole sample has been cut and imaged, and the neuron cell body position information of the whole tissue sample follows shortly after; the time for acquiring the neuron cell body position information of the whole tissue sample is thereby greatly shortened.

Description

Method for quickly and accurately acquiring neuron body position of tissue sample
Technical Field
The present invention relates to image processing technology, and more particularly, to a method for rapidly and accurately obtaining the neuronal cell body position of a tissue sample.
Background
In the field of optical imaging of large samples, the whole tissue is cut to obtain all surface layer images, and all the surface layer images are then processed to obtain the target information. For obtaining the neuron cell body positions of a tissue sample, this image-first, process-later approach has certain defects: 1. if the quality of the original surface layer images is not high, the images must be preprocessed to improve their clarity, and because a large sample yields a very large number of surface layer images, the preprocessing, and therefore the whole image processing, takes a long time; 2. some methods can obtain clearer original surface layer images, but they image a thicker surface and their imaging time is longer, so there is little difference between computing the cell body position information of part of the surface layer images during imaging and computing it uniformly for all surface layer images after imaging is finished; that is, regardless of when the image processing is carried out, the total image processing time for obtaining the cell body position information is long.
In summary, current methods for obtaining the neuron cell body positions of a tissue sample all take a long time and place high demands on the image processing equipment, which increases its cost.
The problem to be solved is therefore to provide a method for rapidly and accurately acquiring the neuron cell body positions of a tissue sample.
Disclosure of Invention
The present invention aims to overcome the above technical deficiencies and to provide a method for rapidly and accurately obtaining the neuron cell body positions of a tissue sample, solving the technical problem that, in the prior art, obtaining the neuron cell body positions of a tissue sample from the original surface layer images takes a long time.
In order to achieve the above technical objective, the present invention provides a method for rapidly and accurately obtaining the neuron cell body positions of a tissue sample, comprising the following steps:
step a, imaging a surface layer of thickness h from one end of a tissue sample of height H to form a surface layer image with clear cell morphology, and then cutting the imaged h-thick surface layer;
step b, continuously repeating step a, wherein the time required to finish imaging and cutting a number of h-thick surface layers is t1, and h << H;
step c, sequentially finishing the imaging and cutting of another group of h-thick surface layers adjacent to the h-thick surface layers already imaged and cut in step b; during the imaging and cutting of step c, calculating the cell positions in the surface layer images of partial or whole thickness that have already been imaged and cut, so as to determine the position information of the cells in those surface layer images in three-dimensional space; the time required to calculate the cell positions is t2, where t2 ≤ t1 ≤ 5t2;
step d, repeating steps b and c, sequentially joining the adjacent surface layer images used for calculating the cell positions, until the position information in three-dimensional space of the cells in the last part of the surface layer images has been obtained, and fusing the cell position information of the several surface layer images to obtain the position information in three-dimensional space of the cells of the whole tissue sample.
Compared with the prior art, the invention forms surface layer images with clear cell morphology and computes the cell position information of the surface layer images of partial or whole thickness that have already been imaged and cut during the next round of cutting and imaging. Because the time for the next round of cutting and imaging is longer than the time needed to obtain this cell position information and shorter than 5 times that time, only the cell position information of the last part of the surface layer images, and then the neuron cell body position information of the whole tissue sample, remain to be completed after the whole sample has been cut and imaged, which takes very little time; the time for obtaining the neuron cell body position information of the whole tissue sample is therefore greatly shortened.
Drawings
FIG. 1 is a flow chart of a high throughput optical tomography method of the present invention;
FIG. 2 is one of the sub-flow diagrams of the high throughput optical tomography method of the present invention;
FIG. 3 is another sub-flow diagram of the high throughput optical tomography method of the present invention;
FIG. 4 is a schematic view of reconstruction of an optical tomographic image of embodiment 1 of the present invention;
FIG. 5 is a schematic view of reconstruction of an optical tomographic image of embodiment 2 of the present invention;
FIG. 6 shows images obtained by high-throughput optical tomography, where a is the original surface layer image and b is the neuron cell distribution in the surface layer image; the scale bar in a is 1 mm and the scale bar in b is 100 μm;
FIG. 7 shows the processed neuron cell distribution, where a is the down-sampled surface layer image and b is a matching diagram of the calculated cell positions against the cell positions shown in the original surface layer image; the scale bar in a is 1 mm and the scale in b is 200 × 100 μm³;
fig. 8 is a three-dimensional position diagram of a whole brain neuron cell body obtained by rendering.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a method for quickly and accurately acquiring the neuron cell body positions of a tissue sample; the tissue sample in this embodiment is a rat brain. The method comprises the following steps:
step a, imaging a surface layer with a thickness of 10 μm from one end of a tissue sample with a height of 1.5 cm to form a surface layer image with clear cell morphology, and then cutting the imaged 10 μm-thick surface layer;
step b, continuously repeating step a, wherein the time required to finish imaging and cutting ten 10 μm-thick surface layers is 30 minutes;
step c, sequentially finishing the imaging and cutting of another ten 10 μm-thick surface layers adjacent to the ten 10 μm-thick surface layers already imaged and cut in step b; during the imaging and cutting of step c, calculating the cell positions in the 100 μm-thick surface layer image that has already been imaged and cut, so as to determine the position information of the cells in that 100 μm surface layer image in three-dimensional space, the time required to calculate the cell positions being 10 minutes;
step d, repeating steps b and c, sequentially joining the adjacent surface layer images used for calculating the cell positions, until the position information in three-dimensional space of the cells in the last part of the surface layer images has been obtained, and fusing the cell position information of the several surface layer images to obtain the position information in three-dimensional space of the cells of the whole tissue sample.
Since the time for obtaining the cell position information in a 100 μm-thick surface layer image is 10 minutes, while the time required to finish imaging and cutting ten 10 μm-thick surface layers is 30 minutes, the cell position information of a 100 μm-thick surface layer image can be obtained entirely within the time it takes to image and cut the next ten 10 μm-thick surface layers.
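To make this timing relationship concrete, the following is a minimal Python sketch (not the patented implementation) of such a pipelined schedule, in which a worker thread computes the cell positions of the previously acquired 100 μm block while the next ten 10 μm layers are being imaged and cut; the function names, block count and sleep durations are illustrative placeholders only.

import threading
import queue
import time

N_BLOCKS = 3  # number of 100 um blocks in this toy run (the embodiment has about 150)

def image_and_cut_ten_layers(n):
    # stand-in for imaging and cutting ten 10 um surface layers (~30 min in the embodiment)
    time.sleep(0.3)
    return "block-%d" % n

def locate_somata(block):
    # stand-in for computing cell body positions in a 100 um block (~10 min in the embodiment)
    time.sleep(0.1)
    print("cell positions computed for", block)

def worker(q):
    while True:
        block = q.get()
        if block is None:          # sentinel: acquisition is finished
            break
        locate_somata(block)       # runs while the next block is being imaged and cut

q = queue.Queue()
t = threading.Thread(target=worker, args=(q,))
t.start()
for n in range(N_BLOCKS):
    q.put(image_and_cut_ten_layers(n))   # hand each finished block to the processing thread
q.put(None)
t.join()   # after the last cut, only about one block of processing remains, as described above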
Further, the method for imaging the surface layer in the step a is high-throughput optical tomography, and the specific steps of the high-throughput optical tomography are as follows:
as shown in fig. 1 to 3, a high throughput optical tomography method includes the steps of:
s1, modulating a light beam into a modulated light beam which can be focused on a focal plane of an objective lens and can be diverged on a defocusing surface of the objective lens, wherein the modulated light beam has incompletely same modulation intensity on the focal plane of the objective lens;
during specific modulation, the light beam is firstly shaped into a linear line light beam, and then the line light beam is modulated into a line illumination modulation light beam.
Specifically, the modulated light beam is modulated by a waveform having an incompletely identical modulation intensity, such as gaussian modulation, sinusoidal modulation, triangular modulation, etc., at the focal plane of the objective lens. Since the illumination beam of the present embodiment is gaussian beam, the illumination modulated beam formed in the present embodiment is gaussian modulated. In this embodiment, other waveform modulations with different modulation intensities may also be adopted as required.
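As a small numerical illustration (not taken from the patent), the following Python snippet builds a Gaussian modulation intensity profile f(i) across N rows of pixels; the beam centre and width values are assumed purely for illustration.

import numpy as np

N = 8                                  # rows of pixels in the imaging area (embodiment: N = 8)
rows = np.arange(N)
center, sigma = (N - 1) / 2.0, 2.0     # assumed beam centre and width, illustration only

# Gaussian modulation intensity f(i): the rows do not all see the same intensity (X direction),
# while along each individual row (Y direction) the intensity is treated as constant.
f = np.exp(-((rows - center) ** 2) / (2.0 * sigma ** 2))
print(np.round(f, 3))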
S2, imaging surface layers of the same thickness h of the tissue sample under the illumination of the modulated light beam with different rows of pixels; the surface layer image formed is calculated as:
I(i) = I_in·f(i) + I_out
where I(i) is the surface layer image formed under the i-th row of pixels, f(i) is the modulation intensity corresponding to the surface layer image I(i), I_in is the focal-plane (in-focus) image of the surface layer image, and I_out is the out-of-focus image of the surface layer image;
when in specific imaging, the method comprises the following steps:
s21, driving the modulated light beam and the tissue sample to relatively move continuously and uniformly in the X direction;
s22, the camera sequentially and continuously images the tissue samples along the relative motion direction of the tissue samples;
the modulated light beam of the present embodiment may be perpendicular to the sample moving direction, and the direction of imaging the tissue sample continuously is the same as the direction of the multi-row pixel arrangement, i.e. the continuously illuminated portion of the tissue sample is continuously imaged while the tissue sample is moving relative to the modulated light beam. The tissue sample can be driven to move continuously and uniformly in the direction perpendicular to the linear illumination modulated light beam, and the modulated light beam can also be driven to move continuously and uniformly in the direction parallel to the sample, as long as the modulated light beam and the tissue sample can generate relative continuous and uniform motion.
As shown in a in FIG. 4, the imaging area of this embodiment is N rows of pixels, N ≥ 2. Two perpendicular directions X and Y are defined on a plane parallel to the tissue sample imaging plane, and the modulated light beam has the following characteristics in the X and Y directions respectively: along the X direction the modulation intensities over the N rows of pixels are not all identical, and along the Y direction the modulation intensity over each of the N rows of pixels is the same. Moreover, the arrangement direction and width of the N rows of pixels are the same as those of the line-illumination modulated light beam, and the two are conjugate to each other, so that the imaging area corresponds to the line-illumination modulated light beam.
Correspondingly, the direction in which the tissue sample moves relative to the modulated light beam is also the X direction, ensuring that it is the same as the arrangement direction of the N rows of pixels. For convenience of operation, this embodiment preferably drives the tissue sample to move while the modulated light beam remains stationary; the direction in which the tissue sample moves relative to the modulated light beam is thus the same as the arrangement direction of the N rows of pixels, and the single-frame exposure time of the imaging equals the time taken by the sample to move by one row of pixels.
This embodiment can judge whether the continuous imaging is complete: if it is, the following steps are carried out; if not, the tissue sample continues to be driven. Because the continuous imaging of the tissue sample is achieved through its continuous uniform movement, which is equivalent to continuous scanning imaging of the tissue sample, it is necessary to judge after imaging whether the continuous scanning imaging of the whole tissue sample is finished, which helps ensure the integrity and continuity of the imaging.
S23, obtaining the strip image block I_t(i) of the i-th row of pixels in each frame image obtained in time order; the strip image block is calculated as:
I_t(i) = I_in^m(i)·f(i) + I_out^m(i)
where I_t(i) is the strip image block corresponding to the i-th row of pixels in the t-th frame image, I_in^m(i) is the focal-plane image of that strip image block, i.e. the focal-plane image of the m-th strip image block in the complete strip image, I_out^m(i) is the out-of-focus image of that strip image block, and f(i) is the modulation intensity corresponding to the i-th row of pixels;
as shown in fig. 4 (a), during imaging, the tissue sample moves along the arrangement direction of the imaging pixels, and since the exposure time of a single frame of imaging is the same as the time when the tissue sample moves by one row of pixels, each row of pixels sequentially forms a plurality of band image blocks along the length direction of the tissue sample, and the plurality of band image blocks are continuous imaging of the tissue sample.
S24, splicing the strip image blocks of the i-th row of pixels in the successive frame images in order to obtain the strip image of the i-th row of pixels; the strip image is calculated as:
I(i) = Σ_{m=1}^{M} [ I_in^m(i)·f(i) + I_out^m(i) ]
where M is the number of strip image blocks corresponding to a complete strip image, i.e. the strip image is spliced from M strip image blocks, and I_in^m(i), m ≤ M, is the focal-plane image corresponding to the m-th strip image block in the strip image (the M blocks occupy successive, non-overlapping positions along the scanning direction, so the sum amounts to splicing them in order).
It should be noted that a strip image is formed by shift-stitching the successive strip image blocks corresponding to one row of pixels, so the N rows of pixels can be stitched respectively to form N surface layer images.
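The following Python sketch illustrates the splicing of S23 and S24 on synthetic data. It is not the patented implementation: the block-index mapping m = t − i + 1 (frame i + m − 1 contributing the m-th block of row i's strip image) is an assumption inferred from Example 1 below, and the array sizes are chosen arbitrarily.

import numpy as np

N, M, W = 8, 9, 64          # rows of pixels, blocks per strip image, pixels per row (all assumed)
rng = np.random.default_rng(0)
# frames[t] is the t-th camera frame (N rows x W columns); N + M - 1 frames cover one strip
frames = rng.random((N + M - 1, N, W))

def strip_image(frames, i, M):
    # Splice the strip image of pixel row i (0-based) from M successive frames.
    # Assumes block index m corresponds to frame t = i + m (0-based), i.e. m = t - i + 1
    # in the 1-based notation of Example 1 below (frame 4 / row 4 -> m = 1).
    blocks = [frames[i + m][i] for m in range(M)]   # the blocks I_t(i)
    return np.concatenate(blocks)                   # splicing along the scan direction

I_row2 = strip_image(frames, 1, M)   # strip image under the 2nd row of pixels
I_row4 = strip_image(frames, 3, M)   # strip image under the 4th row of pixels
print(I_row2.shape, I_row4.shape)    # (M * W,) each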
S3, demodulating the multiple strip images obtained under different rows of pixels with a demodulation algorithm to obtain the focal-plane image of the strip image, the focal-plane image being the optical tomographic image; the demodulation formula of the demodulation algorithm is:
I_in = c × |β·I_1 − α·I_2|
where α and β are positive integers, c is a constant greater than 0, I_1 is the accumulated sum of the strip images acquired under α rows of pixels, and I_2 is the accumulated sum of the strip images acquired under β rows of pixels; the accumulated modulation intensity corresponding to the strip images under the α rows of pixels differs from the accumulated modulation intensity corresponding to the strip images under the β rows of pixels.
The method comprises the following specific steps:
s31, accumulating the strip images of at least one row of pixels to form a first strip image, and accumulating the strip images of at least one row of pixels to form a second strip image;
When the N strip images have been acquired, one, two or more of them may be selected arbitrarily and accumulated to form the first strip image, and the second strip image is formed by accumulation in the same way. To avoid the optical tomographic image obtained by the demodulation algorithm being zero, this embodiment sets the accumulated modulation intensity corresponding to the strip images under the α rows of pixels to differ from the accumulated modulation intensity corresponding to the strip images under the β rows of pixels.
S32, demodulating the first strip image and the second strip image into the optical tomographic image of the strip image according to the demodulation formula, i.e.
I_in = c × |β·I_1 − α·I_2|
where I_1 is the first strip image and I_2 is the second strip image.
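A minimal numerical sketch of this demodulation, assuming the imaging model I(i) = I_in·f(i) + I_out above, is given below in Python; the specific choice c = 1/|β·Σf(α rows) − α·Σf(β rows)|, which makes the output equal I_in exactly, is an inference from that model rather than a value stated in the patent.

import numpy as np

def demodulate(strips, f, alpha_rows, beta_rows):
    # Demodulation sketch: I_in = c * |beta * I1 - alpha * I2|.
    # strips: dict row -> spliced strip image of that row (arrays of equal shape)
    # f:      dict row -> modulation intensity of that row
    # alpha_rows / beta_rows: rows accumulated into I1 / I2 (alpha = len(alpha_rows), ...)
    alpha, beta = len(alpha_rows), len(beta_rows)
    I1 = sum(strips[r] for r in alpha_rows)
    I2 = sum(strips[r] for r in beta_rows)
    # the out-of-focus background cancels in beta*I1 - alpha*I2; this choice of c rescales
    # the remaining in-focus signal by the difference of the accumulated modulations
    c = 1.0 / abs(beta * sum(f[r] for r in alpha_rows) - alpha * sum(f[r] for r in beta_rows))
    return c * np.abs(beta * I1 - alpha * I2)

# tiny synthetic check under the model I(r) = I_in * f(r) + I_out
I_in_true = np.array([1.0, 2.0, 3.0])
I_out = np.array([0.5, 0.5, 0.5])
f = {1: 0.9, 2: 0.6, 3: 0.3, 4: 0.1}
strips = {r: I_in_true * f[r] + I_out for r in f}
print(demodulate(strips, f, alpha_rows=[1, 2, 3], beta_rows=[4]))   # ~ [1. 2. 3.]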
For convenience of explanation of the flow of acquiring a strip image in this embodiment, the following examples are given.
Example 1: as shown in FIG. 4 (a), when the tissue sample moves along the direction in which the N rows of pixels are arranged, N+M−1 frame images can be obtained from time t_1 to time t_{N+M−1} (M being the number of strip image blocks corresponding to a complete strip image; in this embodiment N = 8 and M = 9), and each row of pixels in each of the N+M−1 frame images corresponds to one strip image block. For example, one can take the strip image block I_1(1) of the 1st row of pixels of the 1st frame image, the strip image block I_2(1) of the 1st row of pixels of the 2nd frame image, ..., the strip image block I_N(1) of the 1st row of pixels of the N-th frame image, ..., and the strip image block I_{N+M−1}(1) of the 1st row of pixels of the (N+M−1)-th frame image; these strip image blocks of the 1st row of pixels, and likewise those of the 2nd to N-th rows of pixels, can be spliced to form the corresponding strip images.
As shown in (b) and (c) of fig. 4, to explain how clearer strip image blocks and strip images are obtained, the 2nd-row and 4th-row pixels are taken as an example. From the calculation formulas of the strip image block and the strip image, the strip image block of the 4th frame image under the 4th row of pixels is
I_4(4) = I_in^1(4)·f(4) + I_out^1(4)
(here m = 1: since the strip image is spliced from 9 strip image blocks, the strip image block of the 4th frame image under the 4th row of pixels is the first strip image block of its strip image, and I_in^1(4) is the focal-plane image corresponding to the 1st strip image block in the strip image); correspondingly, the strip image block of the 2nd frame image under the 2nd row of pixels is
I_2(2) = I_in^1(2)·f(2) + I_out^1(2).
Taking I_1 as the accumulated sum of the strip images acquired under the 4th row of pixels and I_2 as the accumulated sum of the strip images acquired under the 2nd row of pixels, and selecting the values of α and β as 1, the demodulation formula gives
I_in = c × |I_1 − I_2|,
which is the focal-plane (optical tomographic) image of the strip image.
example 2: as shown in FIG. 5, the stripe image formed by the 4 th row pixel under-stitching
Figure BDA00018510565900000611
Wherein
Figure BDA00018510565900000612
Stripe image formed by splicing under 1 st row of pixels
Figure BDA00018510565900000613
Wherein
Figure BDA00018510565900000614
Stripe image formed by splicing pixels on the 2 nd row
Figure BDA00018510565900000615
Wherein
Figure BDA00018510565900000616
Stripe image formed by splicing under 3 rd row pixels
Figure BDA00018510565900000617
Wherein
Figure BDA0001851056590000071
If I 1 For the cumulative sum of the surface images acquired at the pixels of lines 1, 2 and 3, I 2 The cumulative sum of the top layer images obtained under the pixel of the 4 th row is equivalent to the value of α being selected to be 3, and the value of β being selected to be 1, so the demodulation formula can show that:
Figure BDA0001851056590000072
Figure BDA0001851056590000073
therefore, the number of the first and second electrodes is increased,
Figure BDA0001851056590000074
Figure BDA0001851056590000075
by obtaining a surface image with a thickness of 10um by the high-throughput optical tomography, as shown in fig. 6, where a is an original surface image and b is a neuron cell distribution diagram in the surface image, it can be seen that the outer contour of the neuron is very clear. Since the original skin image obtained is a 16-bit grayscale image, the image size is generally 34000 × 25000 pixels, and the resolution is 0.32 × 0.32 × 2.00 μm3, about 1.6GB. To further reduce the image size, the 16-bit table layer image is converted into an 8-bit image, and the resolution is down-sampled from 0.32 × 0.32 × 2.00 μm3 to 2 × 2 × 2 μm 3. 16. Converting the bit image into an 8-bit image, performing linear mapping on an image value according to the signal intensity of the original image, and performing down-sampling by using a bilinear interpolation method. The size of a single surface layer image after 8 bits conversion and down sampling is only about 20MB, but the clear outer outline of the cell can be ensured to be used for calculating the position of the cell in the later period, and meanwhile, the memory consumption in the later counting process is reduced. The time for completing the imaging and cutting of the surface layer with the thickness of 100um is 30 minutes, the time for splicing, rotating 8 bits and down-sampling the image of the surface layer with the thickness of 100um is generally less than 12 minutes, and the time for completing the calculation of the cell position in the image of the surface layer with the thickness of 100um is 10 minutes, so that the splicing, rotating 8 bits, down-sampling and cell position calculation of the image of the surface layer with the thickness of 100um can be completed once in the process of performing the imaging and cutting of the surface layer with the thickness of 100um next time.
An 8-bit down-sampled data block 100 μm thick is obtained by the above process; as shown in fig. 7 a, the outer contours of the cells in the down-sampled surface layer image are still clearly visible. The data block is automatically segmented and the cell position information in three-dimensional space is calculated with the published NeuroGPS algorithm; as shown in fig. 7 b, the calculated cell positions match the cell positions shown in the original surface layer image well. After the calculation is finished, an SWC file is generated to store the three-dimensional position information of the cells. Image acquisition and image processing were performed on the same graphics workstation (T7910, dual CPU, 8 cores per CPU, 3.4 GHz, 128 GB memory). For each round of imaging and cutting of ten 10 μm-thick surface layers: the time to obtain the cell position information in a 100 μm-thick surface layer image is 10 minutes, and about 12 minutes are needed to finish the stitching, 8-bit conversion and down-sampling (i.e. the preceding image processing steps) of those surface layer images, giving a total of about 22 minutes to complete the cell position information of a 100 μm-thick surface layer image including the preceding image processing; since imaging and cutting 100 μm of surface layers takes about 30 minutes, the calculation can be finished before the imaging and cutting of the next 100 μm of surface layers is completed.
After the acquisition of the surface layer images of the whole brain is finished, the cell position information in the 150 surface layer image blocks of 100 μm thickness is fused, giving the complete three-dimensional position information file of the neurons of the whole brain. The fused SWC file can be rendered in the commercial software Amira to obtain a three-dimensional position map of the whole-brain neuron cell bodies, as shown in fig. 8. During fusion, the Z value in the position information of each SWC file is modified according to the layer number of the data block it corresponds to: for the SWC file of the N-th data block, (N−1) × 100 μm must be added to the recorded cell positions in the Z direction; for example, the cell positions in the SWC file of the third data block must be shifted by 200 μm in Z.
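The fusion step can be sketched as follows in Python (illustrative only); the (n − 1) × 100 μm Z offset follows the description above, while the SWC point layout and the re-numbering of point IDs across blocks are assumptions made for the example.

def fuse_swc_blocks(swc_blocks, block_thickness_um=100.0):
    # Shift each block's recorded positions by (n - 1) * 100 um in Z and concatenate
    # them into a single whole-brain point list. Each point is (id, type, x, y, z,
    # radius, parent) as in an SWC file; re-numbering the ids across blocks is an
    # assumption made here so that the merged ids stay unique.
    fused, id_offset = [], 0
    for n, points in enumerate(swc_blocks, start=1):
        dz = (n - 1) * block_thickness_um                 # e.g. 3rd block -> +200 um
        for pid, ptype, x, y, z, radius, parent in points:
            new_parent = parent if parent == -1 else parent + id_offset
            fused.append((pid + id_offset, ptype, x, y, z + dz, radius, new_parent))
        id_offset += len(points)
    return fused

# two-block toy example: one unconnected soma (parent = -1) per 100 um block
blocks = [[(1, 1, 10.0, 20.0, 5.0, 3.0, -1)], [(1, 1, 12.0, 22.0, 7.0, 3.0, -1)]]
print(fuse_swc_blocks(blocks))   # the second soma ends up at z = 107.0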
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention. Any other corresponding changes and modifications made according to the technical idea of the present invention should be included in the protection scope of the claims of the present invention.

Claims (9)

1. A method for rapidly and accurately obtaining the neuron cell body positions of a tissue sample, comprising the following steps:
step a, imaging a surface layer of thickness h from one end of a tissue sample of height H to form a surface layer image with clear cell morphology, and then cutting the imaged h-thick surface layer;
step b, continuously repeating step a, wherein the time required to finish imaging and cutting a number of h-thick surface layers is t1, and h << H;
step c, sequentially finishing the imaging and cutting of another group of h-thick surface layers adjacent to the h-thick surface layers already imaged and cut in step b; during the imaging and cutting of step c, calculating the cell positions in the surface layer images of partial or whole thickness that have already been imaged and cut, so as to determine the position information of the cells in those surface layer images in three-dimensional space; the time required to calculate the cell positions is t2, where t2 ≤ t1 ≤ 5t2;
step d, repeating steps b and c, sequentially joining the adjacent surface layer images used for calculating the cell positions, until the position information in three-dimensional space of the cells in the last part of the surface layer images has been obtained, and fusing the cell position information of the several surface layer images to obtain the position information in three-dimensional space of the cells of the whole tissue sample.
2. The method for rapidly and precisely obtaining the neuronal cell position of a tissue sample according to claim 1, wherein the clear cell morphology means that the outline of the cell is clear.
3. The method of claim 1, wherein h satisfies 0 < h ≤ 100 μm and H satisfies 100 μm ≤ H ≤ 5 cm.
4. The method for rapidly and accurately acquiring the neuron cell body positions of a tissue sample according to any one of claims 1 to 3, wherein the method for imaging the surface layer in step a is high-throughput optical tomography, and the specific steps of the high-throughput optical tomography are as follows:
S1, modulating a light beam into a modulated light beam that is focused on the focal plane of an objective lens and diverges on the defocused plane of the objective lens, the modulated light beam having modulation intensities on the focal plane of the objective lens that are not all identical;
S2, imaging the surface layers of the same thickness h with a camera under different pixels under the illumination of the modulated light beam; the surface layer image formed is calculated as:
I(i) = I_in·f(i) + I_out
where I(i) is the surface layer image formed under the i-th pixels, f(i) is the modulation intensity corresponding to the surface layer image I(i), I_in is the focal-plane image of the surface layer image, and I_out is the out-of-focus image of the surface layer image;
S3, demodulating the surface layer images obtained under different pixels with a demodulation algorithm to obtain the focal-plane image of the surface layer image, the focal-plane image being the optical tomographic image; the demodulation formula of the demodulation algorithm is:
I_in = c × |β·I_1 − α·I_2|
where α and β are positive integers, c is a constant greater than 0, I_1 is the accumulated sum of the surface layer images acquired under α pixels, and I_2 is the accumulated sum of the surface layer images acquired under β pixels; the accumulated modulation intensity corresponding to the surface layer images under the α pixels differs from the accumulated modulation intensity corresponding to the surface layer images under the β pixels.
5. The method for rapidly and accurately acquiring the neuron cell body positions of a tissue sample according to claim 4, wherein the imaging area of the camera is N rows of pixels, N ≥ 2; two mutually perpendicular directions X and Y are formed on a plane parallel to the surface layer imaging plane, and the modulated light beam has the following characteristics in the X and Y directions respectively: along the X direction the modulation intensities of the modulated light beam over the N rows of pixels are not all identical, and along the Y direction the modulation intensity of the modulated light beam over each of the N rows of pixels is the same; the pixels are rows of pixels, and the surface layer image is a strip image; and step S2 includes:
s21, driving the modulated light beam and the tissue sample to relatively move continuously and uniformly in the X direction;
s22, the camera sequentially and continuously images the tissue samples along the relative motion direction of the tissue samples;
s23, acquiring a strip image block of the ith row of pixels in each frame of image obtained according to the time sequence, wherein the calculation formula of the strip image block is as follows:
I_t(i) = I_in^m(i)·f(i) + I_out^m(i)
where I_t(i) is the strip image block corresponding to the i-th row of pixels in the t-th frame image, I_in^m(i) is the focal-plane image of that strip image block, i.e. the focal-plane image of the m-th strip image block in the complete strip image, I_out^m(i) is the out-of-focus image of that strip image block, and f(i) is the modulation intensity corresponding to the i-th row of pixels;
S24, sequentially splicing the strip image blocks of the i-th row of pixels in the successive frame images to obtain the strip image of the i-th row of pixels, calculated as:
I(i) = Σ_{m=1}^{M} [ I_in^m(i)·f(i) + I_out^m(i) ]
where M is the number of strip image blocks corresponding to the complete strip image and m ≤ M.
6. The method for rapidly and accurately obtaining the neuronal cell location of a tissue sample according to claim 5, wherein said step S3 comprises:
s31, accumulating the strip images of at least one row of pixels to form a first strip image, and accumulating the strip images of at least one row of pixels to form a second strip image;
s32, demodulating the first strip image and the second strip image into the optical tomography image of the strip image according to the demodulation formula, then
I_in = c × |β·I_1 − α·I_2|
where I_1 is the first strip image and I_2 is the second strip image.
7. The method for rapidly and accurately obtaining the neuronal cell location of a tissue sample according to claim 5 or 6, wherein the modulated light beam is linear.
8. The method according to claim 5 or 6, wherein, during the imaging and cutting of the next h-thick surface layers, the already acquired h-thick surface layer images are subjected to bit-depth reduction and down-sampling, the processing time being t3, t3 and t1 being expressed in the same time unit, and t2 + t3 ≤ t1 ≤ 5t2.
9. The method for rapidly and accurately obtaining the neuron cell body positions of a tissue sample according to claim 1, wherein the algorithm used in step c to calculate the cell positions in the surface layer images of h' thickness is the NeuroGPS algorithm.
CN201811295425.4A 2018-11-01 2018-11-01 Method for quickly and accurately acquiring neuron body position of tissue sample Active CN110458923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811295425.4A CN110458923B (en) 2018-11-01 2018-11-01 Method for quickly and accurately acquiring neuron body position of tissue sample

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811295425.4A CN110458923B (en) 2018-11-01 2018-11-01 Method for quickly and accurately acquiring neuron body position of tissue sample

Publications (2)

Publication Number Publication Date
CN110458923A CN110458923A (en) 2019-11-15
CN110458923B (en) 2022-11-04

Family

ID=68480441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811295425.4A Active CN110458923B (en) 2018-11-01 2018-11-01 Method for quickly and accurately acquiring neuron body position of tissue sample

Country Status (1)

Country Link
CN (1) CN110458923B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260747B (en) * 2020-01-19 2022-08-12 华中科技大学 Method and system for high-throughput optical tomography based on virtual digital modulation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023291B (en) * 2016-05-12 2019-05-17 华中科技大学 The imaging device and method of quick obtaining large sample three-dimensional structure information and molecular phenotype information
CN106501228B (en) * 2016-10-31 2020-06-26 华中科技大学 Chromatographic imaging method

Also Published As

Publication number Publication date
CN110458923A (en) 2019-11-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant