CN119477965A - Single particle diffusion quantitative characteristic prediction method, device, electronic device and storage medium - Google Patents
- Publication number: CN119477965A (application CN202411353615.2A)
- Authority: CN (China)
- Prior art keywords: particle, image, track, diffusion, target
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/20 — Image analysis; analysis of motion
- G06T7/215 — Motion-based segmentation
- G06N3/0455 — Auto-encoder networks; encoder-decoder networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/08 — Learning methods
- G06T2207/10056 — Microscopic image
- G06T2207/10064 — Fluorescence image
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The invention provides a single-particle diffusion quantization characteristic prediction method, a corresponding device, an electronic device and a storage medium. The method comprises: obtaining a target input image corresponding to a target single particle; predicting a particle pseudo-track image from the target input image based on a pre-trained particle track prediction model; and calculating the diffusion quantization characteristic of the target single particle based on the particle pseudo-track image, wherein the particle track prediction model is obtained by training and optimization on a first training sample set composed of single-particle motion-blur images and corresponding particle motion-track images. Because the particle track prediction model predicts the actual motion path of the target single particle from the target input image and the diffusion quantization characteristic is calculated from that path, the method can rapidly and accurately predict the diffusion coefficients of a large number of single particles in a high-labeling-density environment, markedly reduces phototoxicity, effectively distinguishes the diffusion characteristics of different biological particles, and provides a powerful tool for studying particle dynamics in diffusion systems and living cells.
Description
Technical Field
The present invention relates to the field of super-resolution imaging technologies, and in particular, to a single particle diffusion quantization feature prediction method, a single particle diffusion quantization feature prediction device, an electronic device, and a storage medium.
Background
Traditional imaging methods are limited by the optical diffraction limit and can typically study only population-averaged molecular dynamics at limited spatiotemporal resolution. Single-molecule localization microscopy (SMLM) is a super-resolution imaging technique, recognized by the 2014 Nobel Prize in Chemistry, that can image individual fluorescent particles or fluorescent biomolecules.
With the development of fluorescence microscopy and novel labeling techniques, SMLM has been used to image, localize and track single particles or single biomolecules in real time in extracellular diffusion systems and in living cells (single-particle tracking, SPT; single-molecule tracking, SMT; hereinafter referred to as SMT). The obtained molecular trajectories are analyzed to extract metrics such as the molecular diffusion coefficient, bound fraction and diffusion-angle distribution (anisotropy), from which the biological activity of the labeled protein and the strength of its interactions with other molecules can be further estimated. This technological innovation is becoming a powerful tool for understanding the single-molecule dynamics and function of biomolecules during the life of a cell.
In currently published single-molecule kinetic analyses of living cells, the general workflow is to perform single-molecule localization and frame-by-frame trajectory linking, compile statistics on the resulting molecular trajectories, and fit various physical models of molecular diffusion to obtain parameters such as the diffusion coefficient and bound percentage. After the raw single-molecule fluorescence images are acquired, a single-molecule localization algorithm fits the single-molecule fluorescence intensity distribution with a two-dimensional Gaussian function, an approximation of the point spread function (PSF), and takes the spatial coordinate of the function maximum as the position of the molecule. The closest and physically most plausible localizations between consecutive frames are then linked by statistical inference to form a trajectory.
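As a concrete illustration of this conventional localization step (a generic sketch, not the algorithm used by the invention; the function names, ROI handling and initial-guess heuristics are illustrative), a symmetric two-dimensional Gaussian can be fitted to a small region of interest around a candidate spot:

```python
# Generic sketch of 2-D Gaussian PSF fitting for single-molecule localization.
# All names and the initial-guess heuristics are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2)) + offset).ravel()

def localize_spot(roi):
    """Fit a symmetric 2-D Gaussian to a small ROI and return the sub-pixel (x, y) position."""
    ny, nx = roi.shape
    y, x = np.mgrid[0:ny, 0:nx]
    p0 = (roi.max() - roi.min(), nx / 2.0, ny / 2.0, 1.5, roi.min())  # initial guess
    popt, _ = curve_fit(gaussian_2d, (x, y), roi.astype(float).ravel(), p0=p0)
    return popt[1], popt[2]   # coordinates of the fitted maximum
```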
To ensure accurate trajectory linking, the output intensity of the photoactivation laser generally has to be controlled: the more molecules appear per frame, the less accurately the algorithm links trajectories. To acquire reliable trajectories, a low molecular density must therefore be maintained throughout SMT imaging to avoid mislinking. Analysis of single-molecule trajectories also depends on trajectory length: the longer the trajectory, the more accurate the kinetic parameters extracted from it. However, for subsequent kinetic analysis and model fitting to be statistically meaningful, sufficient single-molecule trajectory data must be collected in the experiment. This requires acquiring a large number of frames to capture enough single molecules, and the effects of phototoxicity and cell movement during this process are not negligible.
For two-dimensional SMT imaging, which is currently the most common application, the limited Z-axis resolution makes it difficult to obtain long trajectories when the bound percentage of molecules is low and diffusion is fast. On the other hand, most trajectory-based analyses require a large number of trajectories (typically over a hundred thousand) to obtain statistically reliable results. One common solution is to image many cells and pool the trajectories obtained from different cells. The kinetic parameters obtained by statistically modeling a large number of single-molecule trajectories are accurate, but the results are averaged and carry no spatial information, which limits the ability to study the spatial dynamics of proteins in single cells using SMT.
Another limitation of trajectory-based analysis is the difficulty of measuring the dynamics of protein clusters. For example, RNA polymerase II (RNA Pol II, hereinafter Pol II) forms dynamic aggregates in living cells with radii near or below the diffraction limit (about 200 nm). Since the spatial resolution of SMT currently performed with a total internal reflection fluorescence microscope in highly inclined and laminated optical sheet (HILO) mode is about 30 nm, sparsely labeling molecules within a cluster and tracking their trajectories to study the cluster's dynamic characteristics is inherently challenging.
At present, methods for describing the spatial distribution of molecular diffusion in living cells remain limited. One approach represents the molecular movement speed at each point in space by the average displacement of the trajectories passing through it, but it is limited by possible errors in trajectory tracking: random blinking and photobleaching of fluorescent molecules cause trajectory linking to fail, and the density of excited molecules per frame cannot be too high, otherwise mislinking readily occurs. This approach therefore requires long acquisitions, but long-term laser irradiation tends to displace the cells to some extent, so it is only suitable for imaging proteins whose overall movement is slow and whose structure does not change significantly over time, such as chromatin.
In addition, there are spatial-distribution analyses of molecular diffusion that use the tracked trajectories directly, such as sptPALM. This method excites molecules with high-intensity laser light and uses a high-numerical-aperture objective (Olympus APO100XO-HR-SP, 1.65 NA; the NA of a commonly used 100x objective is generally below 1.49) combined with total internal reflection illumination to track cell-membrane proteins in the phototoxicity-tolerant COS7 cell line, obtaining the spatial distribution of molecular diffusion on the membrane surface by fitting MSD versus Δt for each trajectory to extract a diffusion coefficient.
On the one hand, this method relies on trajectory tracking, so molecules may be mislinked between frames; on the other hand, it relies on MSD–Δt fitting, which requires relatively long molecular trajectories to obtain an accurate diffusion coefficient estimate. Cell-membrane proteins diffuse mainly in two dimensions, so long trajectories (>10 frames) can be obtained; in other environments such as the cell nucleus, however, molecules diffuse in three dimensions, and a two-dimensional imaging method using HILO illumination can only obtain trajectory lengths of 3-4 frames even for stably bound molecules such as H2B, while the average trajectories obtained for other, faster-diffusing molecules are even shorter, making MSD–Δt fitting difficult.
In addition, the uPAINT method (universal Point Accumulation for Imaging in Nanoscale Topography) uses the non-covalent binding between ATTO-647N dye-labeled ligands modified with nickel(II)-nitrilotriacetic acid (Ni-NTA) and membrane proteins carrying a hexahistidine tag to achieve high-density labeling and tracking of membrane-protein molecules, and finally represents the molecular motion speed by the median trajectory displacement length within a 200 nm region. Although this method increases the molecular imaging density compared with sptPALM, it reduces the accuracy of trajectory reconstruction, and it can only be used to label membrane proteins.
Similarly, single-molecule displacement mapping (SMdM) arranges paired stroboscopic exposures so that two 1 ms exposures straddle the boundary between consecutive frames, further shortening the effective time interval between the two measurements and allowing faster molecular motion to be tracked. This method takes the track displacement between the two frames and fits the distribution of all displacements within a fixed region (100 x 100 nm) to obtain an estimated diffusion coefficient.
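For reference, the displacement statistics underlying this kind of fit follow from free two-dimensional Brownian motion (standard textbook form, not quoted from the patent; localization error is neglected here):

$$P(r \mid D, \Delta t) = \frac{r}{2 D \Delta t}\, \exp\!\left(-\frac{r^{2}}{4 D \Delta t}\right), \qquad \langle r^{2} \rangle = 4 D \Delta t ,$$

where r is the magnitude of the in-plane displacement over the time interval Δt and D is the diffusion coefficient.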
This method measures the diffusion coefficient distribution of fast-moving molecules (diffusion coefficient above 10 μm²/s) well, for example fluorescent proteins that do not bind specifically to any target in the cell. However, simulation data show that, for transcription regulatory factors with diffusion coefficients below 10 μm²/s in the nucleus, the detection sensitivity in this speed range is degraded once the localization error of the imaged fluorescent molecules is taken into account: the average displacement of a molecule with a diffusion coefficient within 10 μm²/s is within 200 nm at a 1 ms time interval, and when each of the two localizations defining a displacement carries a 30 nm error, the combined error range of 60 nm strongly affects the detection result.
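The 200 nm figure quoted above can be checked from the same relation: for D = 10 μm²/s and Δt = 1 ms, the root-mean-square in-plane displacement is

$$\sqrt{\langle r^{2} \rangle} = \sqrt{4 D \Delta t} = \sqrt{4 \times 10\ \mu\mathrm{m}^{2}/\mathrm{s} \times 10^{-3}\ \mathrm{s}} = 0.2\ \mu\mathrm{m} = 200\ \mathrm{nm},$$

so a combined localization error of roughly 60 nm is indeed a substantial fraction of the measured displacement.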
Therefore, how to overcome the low prediction accuracy of diffusion quantization characteristics and the inability to capture the spatial distribution of particles that conventional particle trajectory tracking techniques exhibit in high-labeling-density environments is an important open problem in the field of super-resolution imaging.
Disclosure of Invention
The invention provides a single-particle diffusion quantization characteristic prediction method, a single-particle diffusion quantization characteristic prediction device, an electronic device and a storage medium, which address the shortcomings of conventional single-particle trajectory tracking techniques, namely low prediction accuracy of diffusion quantization characteristics and the inability to capture the spatial distribution of particles in high-labeling-density environments, and which can rapidly and accurately predict the diffusion quantization characteristics of a large number of single particles in a high-labeling-density environment while markedly reducing phototoxicity.
On one hand, the invention provides a single-particle diffusion quantization characteristic prediction method, which comprises the steps of obtaining a target input image corresponding to a target single particle, predicting to obtain a particle pseudo-track image according to the target input image based on a pre-trained particle track prediction model, and calculating to obtain the diffusion quantization characteristic of the target single particle based on the particle pseudo-track image, wherein the particle track prediction model is obtained by training and optimizing a first training sample set formed by a single-particle motion blur image and a corresponding particle motion track image.
The method comprises the steps of acquiring an original single-particle motion blur image of a target single particle, dividing the original single-particle motion blur image based on a pre-trained U-Net network to obtain a single-particle signal mask, pre-positioning single-particle signals in the original single-particle motion blur image, reserving a target single-particle signal mask only comprising one positioning, filling the target single-particle signal mask by using a background and noise level obtained by calculation of a median filter to obtain the target input image, wherein the U-Net network is obtained by training and optimizing a second training sample set formed by the single-particle motion blur image and a mask image corresponding to the single-particle motion blur image.
The method comprises the steps of obtaining a particle pseudo-track image, determining a pseudo-track area according to the particle pseudo-track image, obtaining a diffusion coefficient corresponding to the target single particle by fitting according to a quantitative relation between the pseudo-track area and the particle diffusion coefficient, and obtaining the diffusion direction corresponding to the target single particle by fitting according to the density space distribution of the particle pseudo-track image.
Further, training an optimized particle track prediction model, which specifically comprises simulating a single particle motion blur image, and acquiring a particle motion track image corresponding to the single particle motion blur image to construct and obtain a first training sample set; and taking the single particle motion blur image as a model input, taking a predicted pseudo-track image as a model output, taking the difference between the predicted pseudo-track image and the particle motion track image as a training loss, and performing iterative optimization on the particle track prediction model to obtain a particle track prediction model with training convergence.
Further, the single-particle motion blur image simulation method comprises the steps of simulating a two-dimensional particle track, superposing a Gaussian function on each point on the two-dimensional particle track to obtain a motion blur function, normalizing and pixelating the motion blur function, introducing Gaussian white noise and Poisson shooting noise, and obtaining the single-particle motion blur image under different signal-to-noise ratios and background levels.
Further, after the diffusion coefficient corresponding to the target single particle is obtained by fitting, the method further comprises: when the diffusion coefficient of the target single particle is larger than a set threshold, calculating the centroid of the particle pseudo-track image to obtain the localization of the target single particle; and when the diffusion coefficient of the target single particle is smaller than or equal to the set threshold, performing elliptical Gaussian fitting on the target input image to obtain the localization of the target single particle.
Further, the target single particle positioning is obtained, and then the target single particle positioning is meshed to obtain a first grid containing particle positioning and a second grid not containing particle positioning but adjacent to particle positioning, a particle density probability map is generated based on the target single particle positioning, interpolation processing is conducted on the second grid by using Gaussian weight sum based on the particle density probability map and the first grid to obtain a diffusion coefficient corresponding to the second grid, a diffusion coefficient matrix is obtained according to the diffusion coefficient corresponding to the first grid and the diffusion coefficient corresponding to the second grid, local smoothing processing is conducted on the diffusion coefficient matrix to obtain a mobility map, and HSV color map is used for displaying to complete image rendering.
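A minimal sketch of this rendering step is given below. It assumes a regular grid, uses a Gaussian-weighted average of nearby occupied bins as the interpolation (a simplification of the density-probability-map weighting described above), and all grid, bandwidth and colormap parameters are illustrative:

```python
# Minimal sketch of the mobility-map rendering step described above.
# Grid size, Gaussian bandwidth and colormap choice are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt

def mobility_map(xy, d_values, extent, n_bins=128, sigma_px=2.0):
    """xy: (N, 2) particle localizations; d_values: (N,) per-particle diffusion coefficients."""
    xmin, xmax, ymin, ymax = extent
    d_sum, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=n_bins,
                                 range=[[xmin, xmax], [ymin, ymax]], weights=d_values)
    counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=n_bins,
                                  range=[[xmin, xmax], [ymin, ymax]])
    # First grid: bins that contain localizations get the mean D of their particles.
    d_grid = np.divide(d_sum, counts, out=np.zeros_like(d_sum), where=counts > 0)

    # Second grid: empty bins adjacent to occupied ones are filled by a
    # Gaussian-weighted sum of the occupied bins, then the matrix is smoothed.
    weight = gaussian_filter(counts, sigma_px)
    value = gaussian_filter(d_sum, sigma_px)
    filled = np.where(counts > 0, d_grid,
                      np.divide(value, weight, out=np.zeros_like(value), where=weight > 1e-9))
    smoothed = gaussian_filter(filled, sigma_px / 2.0)

    # Display with an HSV colormap to complete the rendering.
    plt.imshow(smoothed.T, origin="lower", extent=(xmin, xmax, ymin, ymax), cmap="hsv")
    plt.colorbar(label="apparent diffusion coefficient (um^2/s)")
    return smoothed
```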
The invention further provides a single-particle diffusion quantization characteristic prediction device, which comprises a target input image acquisition module, a particle pseudo-track image prediction module and a particle diffusion coefficient acquisition module, wherein the target input image acquisition module is used for acquiring a target input image corresponding to a target single particle, the particle pseudo-track image prediction module is used for predicting a particle pseudo-track image from the target input image based on a pre-trained particle track prediction model, and the particle diffusion coefficient acquisition module is used for calculating the diffusion quantization characteristic of the target single particle based on the particle pseudo-track image, the particle track prediction model being obtained by training and optimization on a first training sample set formed by single-particle motion-blur images and corresponding particle motion-track images.
In a third aspect, the present invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a single particle diffusion quantization characteristic prediction method as described in any one of the preceding claims when executing the computer program.
In a fourth aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a single particle diffusion quantization characteristic prediction method as described in any one of the above.
According to the single-particle diffusion quantization characteristic prediction method of the invention, a target input image corresponding to a target single particle is obtained, a particle pseudo-track image is predicted from the target input image based on a pre-trained particle track prediction model, and the diffusion quantization characteristic of the target single particle is then calculated based on the particle pseudo-track image, wherein the particle track prediction model is obtained by training and optimization on a first training sample set formed by single-particle motion-blur images and corresponding particle motion-track images. Because the particle track prediction model predicts the actual motion path of the target single particle from the target input image and the diffusion quantization characteristic is calculated from that path, the method can rapidly and accurately predict the diffusion quantization characteristics of a large number of single particles in a high-labeling-density environment, markedly reduces phototoxicity, is suitable for live-cell imaging of at least 10 minutes, yields diffusion quantization characteristics with a high dynamic range, effectively distinguishes the diffusion characteristics of different biological particles, and provides a powerful tool for studying particle dynamics in diffusion systems and living cells.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a single particle diffusion quantization characteristic prediction method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a U-Net network according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a second training sample set construction of the single particle diffusion quantization characteristic prediction method according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of the construction of a first training sample set according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of training optimization of a particle trajectory prediction model according to an embodiment of the present invention.
Fig. 6 is a schematic view of obtaining a diffusion coefficient of a target single particle according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a fitting effect of a linear relationship between a pseudo track area and a particle diffusion coefficient according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of positioning accuracy of different particle positioning algorithms according to an embodiment of the present invention.
FIG. 9 is a schematic representation of the positioning of particles and the rendering of the particle diffusion coefficients according to an embodiment of the present invention.
Fig. 10 is an overall flowchart of a single particle diffusion quantization characteristic prediction method according to an embodiment of the present invention.
FIG. 11 is a schematic diagram of the inference results of the particle trajectory prediction model and the U-Net network provided by an embodiment of the invention.
Fig. 12 is a schematic diagram of an image registration process according to an embodiment of the present invention.
Fig. 13 is a schematic structural diagram of a single particle diffusion quantization characteristic prediction apparatus according to an embodiment of the present invention.
Fig. 14 is a schematic diagram of an entity structure of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It is easy to understand that, in order to solve the problems of conventional single-particle trajectory tracking techniques, namely low diffusion coefficient prediction accuracy and the inability to capture the spatial distribution of particles in high-labeling-density environments, the invention provides a new single-particle diffusion quantization characteristic prediction method. Specifically, fig. 1 shows a flow diagram of a method for predicting the diffusion quantization characteristic (e.g. diffusion coefficient) of a single particle (e.g. a biomolecule) provided by an embodiment of the invention.
As shown in FIG. 1, the method includes steps S110-S130, and the detailed description of steps S110-S130 and related steps is provided below.
S110, acquiring a target input image corresponding to the target single particle.
It will be appreciated that in order for a single particle (e.g. a single biomolecule) to be visible under a microscope, it is first necessary to attach fluorescent labels to the target single particle, which fluorescent labels can emit light when irradiated with light of a specific wavelength, thereby making the particle visible. Common fluorescent labeling methods include antibody labeling, genetically engineering fusion proteins with fluorescent proteins (e.g., GFP), and the like.
Since the fluorescence signal of a single particle (target single particle) is usually very weak, a microscope system with high sensitivity and high resolution is required. Common single-particle imaging microscopy systems include, but are not limited to, photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM).
In order to observe a single particle (target single particle), the intensity and wavelength of the excitation light need to be precisely controlled to avoid simultaneously exciting multiple adjacent fluorescent particles. In addition, a high-sensitivity detector such as an EMCCD camera or an APD (avalanche photodiode) is required to capture the extremely weak fluorescence signal.
After all parameters are set, the acquisition of images is started. A long exposure or a cumulative number of exposures are typically required to collect a sufficient signal.
It should be noted that the acquired image needs to be processed to obtain a clearer single-particle image, that is, the target input image in this embodiment. The processing herein includes, but is not limited to, background subtraction, signal enhancement, and the like.
In this step, the target single particle may be a biomolecule (for example, DNA, RNA, protein or other intracellular molecular structure, especially, protein), or a biomolecule such as a protein purified in a chemical solution, a chemically synthesized compound, a nanoparticle, or the like, and is not particularly limited herein.
The target input image is a processed single particle motion blur image/single particle fluorescence image. Single particle motion blurred images refer to blurring that occurs in images due to the rapid movement of single particles during single particle fluorescence imaging. Such blurring typically occurs when single particles are tracked in real time within living cells, as the particles within the cells tend to be in a constantly moving state.
Further, step S120 is performed on the basis of acquiring the target input image corresponding to the target single particle in step S110.
S120, predicting and obtaining a particle pseudo-track image according to the target input image based on a pre-trained particle track prediction model, wherein the particle track prediction model is obtained by training and optimizing a first training sample set formed by a single particle motion blur image and a corresponding particle motion track image.
The particle track prediction model is used for recovering or predicting or reconstructing the motion track (namely a particle pseudo track image) of the particle from the blurred single particle image (target input image) so as to more accurately detect the behavior of the single particle of the particle target.
The particle trajectory prediction model is constructed based on a deep neural network, where the network may be a convolutional neural network, a recurrent neural network, or a combination of the two, which is not specifically limited herein.
In a specific embodiment, the particle trajectory prediction model includes a convolution feature extraction layer, a max-pooling layer, a bilinear upsampling layer, a normalization layer, and an activation function layer.
After determining the architecture of the particle track prediction model, a first training sample set needs to be constructed to train and optimize it. Specifically, during training, the single-particle motion-blur images in the first training sample set are taken as the model input, the predicted pseudo-track image is taken as the model output, and the difference between the predicted pseudo-track image and the particle motion-track image is taken as the training loss; the particle track prediction model is iteratively optimized to obtain a trained particle track prediction model.
The single-particle motion blur image may be real data obtained through experiments or data generated through simulation, and is not particularly limited herein.
The particle motion trail image is a true motion trail image of biological particles, and can be obtained through a high-precision imaging technology or manual annotation to be used as a true value of model training.
After training the particle track prediction model, inputting the target input image obtained in the step S110 into the particle track prediction model, and obtaining the particle pseudo-track image of the output target single particle.
Further, step S130 is performed on the basis of the predicted particle-pseudo trajectory image in step S120.
S130, based on the particle pseudo-track image, calculating to obtain the diffusion quantization characteristic of the target single particle.
Specifically, to simplify the problem, it may be assumed that the particle trajectory overlaps itself only slightly, so that, given a particle pseudo-track image, the area covered by the particle track, i.e. the track area, can be estimated. The diffusion coefficient of the target single particle is then obtained by fitting according to the quantitative relation between the pseudo-track area and the particle diffusion coefficient, and the diffusion direction corresponding to the target single particle is obtained by fitting according to the density spatial distribution of the particle pseudo-track image.
The quantitative relation between the pseudo track area and the particle diffusion coefficient is predetermined, and when the quantitative relation is applied, the pseudo track area is substituted into an equation corresponding to the linear relation. The diffusion coefficient and the diffusion direction are both diffusion quantization characteristics.
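A minimal sketch of this conversion is shown below; the relative threshold used to delimit the track area and the calibration constants `slope`/`intercept` are placeholders for values obtained by fitting simulated data, as described above, and the 11 nm pseudo-track pixel size is taken from the embodiment described later:

```python
# Minimal sketch of converting a predicted pseudo-track image into a diffusion
# coefficient via the track area. Threshold and calibration constants are
# placeholders for values obtained from simulation-based calibration.
import numpy as np

PIXEL_SIZE_UM = 0.011          # 11 nm pseudo-track pixels, expressed in micrometres

def track_area_um2(pseudo_track, rel_threshold=0.1):
    """Area covered by the pseudo track: pixels above a relative intensity threshold."""
    mask = pseudo_track > rel_threshold * pseudo_track.max()
    return mask.sum() * PIXEL_SIZE_UM ** 2

def diffusion_coefficient(pseudo_track, slope, intercept):
    """Apply the pre-calibrated linear area-to-D relation."""
    return slope * track_area_um2(pseudo_track) + intercept
```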
The diffusion coefficient of a target single particle is a physical quantity describing the rate of random movement of the target single particle in a medium; it reflects the free diffusion capability of the target single particle in the absence of external forces and is denoted by the symbol D. The unit of the diffusion coefficient is an area per unit time, typically μm²/s, indicating the area covered by particle diffusion per unit time.
It is worth mentioning that the single particle diffusion quantization characteristic prediction method provided by the embodiment has a high dynamic range in the common diffusion coefficient range of biological large particles, and can effectively distinguish the diffusion characteristics of different particles, so that a powerful tool is provided for dynamic research of living cell particles.
In this embodiment, a target input image corresponding to a target single particle is acquired, a particle pseudo-track image is predicted from the target input image based on a pre-trained particle track prediction model, and the diffusion quantization characteristic of the target single particle is then calculated based on the particle pseudo-track image, wherein the particle track prediction model is obtained by training and optimization on a first training sample set formed by single-particle motion-blur images and corresponding particle motion-track images. Because the particle track prediction model predicts the actual motion path of the target single particle from the target input image and the diffusion quantization characteristic is calculated from that path, the method can rapidly and accurately predict the diffusion quantization characteristics of a large number of single particles in a high-labeling-density environment, markedly reduces phototoxicity, is suitable for live-cell imaging of at least 10 minutes, yields diffusion coefficients with a high dynamic range, effectively distinguishes the diffusion characteristics of different biological particles, and provides a powerful tool for studying particle dynamics in living cells.
Further, on the basis of the above-described embodiments, a detailed description will be given below regarding the acquisition process of the target input image.
The method comprises the steps of acquiring an original single-particle motion blur image of a target single particle, segmenting the original single-particle motion blur image based on a pre-trained U-Net network to obtain a single-particle signal mask, pre-locating single-particle signals in the original single-particle motion blur image, reserving a target single-particle signal mask which only comprises one location, filling the target single-particle signal mask by using a background and noise level obtained by calculating a median filter to obtain the target input image, wherein the U-Net network is obtained by training and optimizing a second training sample set formed by the single-particle motion blur image and a mask image corresponding to the single-particle motion blur image.
It will be appreciated that, unlike typical localization-based single-particle detection algorithms, here the single-particle motion-blur image is captured first and the pixels containing the single particle are then segmented. This embodiment uses a U-Net network to accomplish this task.
Fig. 2 shows a schematic diagram of a U-Net network according to an embodiment of the present invention. As shown in fig. 2, the U-Net network consists of a contracted path (left side) and an expanded path (right side).
The contracting path follows the typical architecture of a convolutional network: the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a ReLU activation function, and a 2x2 max-pooling operation with stride 2 for downsampling. At each downsampling step the number of feature channels is doubled.
Each step in the expansive path consists of upsampling the feature map followed by a 2x2 convolution ("up-convolution") that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU activation function. The cropping is necessary because every convolution loses boundary pixels.
At the final layer, a 1x1 convolution maps each 64-component feature vector to the required number of classes (here only 1 class, corresponding to the pixel extent of a single-particle motion-blur image). In total the U-Net network has 23 convolutional layers.
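As a concrete reference for the architecture just described, the following is a compact PyTorch sketch of a U-Net of this kind; depth and channel counts are illustrative assumptions, and padded convolutions are used here so no cropping is needed, unlike the unpadded variant described above:

```python
# Compact PyTorch sketch of a U-Net: contracting path of double 3x3 conv + ReLU
# with 2x2 max pooling, expansive path with up-convolutions and skip
# concatenations, and a final 1x1 convolution producing single-class mask logits.
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, base=64):
        super().__init__()
        self.d1, self.d2, self.d3 = double_conv(1, base), double_conv(base, base * 2), double_conv(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2, stride=2)
        self.bottom = double_conv(base * 4, base * 8)
        self.up3 = nn.ConvTranspose2d(base * 8, base * 4, 2, stride=2)   # "up-convolution"
        self.u3 = double_conv(base * 8, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.u2 = double_conv(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.u1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)            # 1x1 convolution -> single-class logits

    def forward(self, x):
        s1 = self.d1(x)
        s2 = self.d2(self.pool(s1))
        s3 = self.d3(self.pool(s2))
        b = self.bottom(self.pool(s3))
        x = self.u3(torch.cat([self.up3(b), s3], dim=1))   # skip connection
        x = self.u2(torch.cat([self.up2(x), s2], dim=1))
        x = self.u1(torch.cat([self.up1(x), s1], dim=1))
        return self.head(x)
```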
A second training sample set is required to train the optimized U-Net network prior to actual application. Each training sample in the constructed second training sample set comprises a single particle motion blur image and its corresponding mask image. Specifically, fig. 3 shows a second training sample set construction schematic diagram of the single particle diffusion quantization characteristic prediction method provided by the embodiment of the invention.
As shown in fig. 3, a two-dimensional particle trajectory (2D particle trajectory) is first simulated with the simSPT algorithm, and a Gaussian function is superimposed on each point of the two-dimensional particle trajectory to obtain the motion-blur function Traj_i,j.
The motion-blur function is then normalized and pixelated to match the actual camera pixel size (e.g. 110 nm), giving the pixelated Traj_i,j, after which Gaussian white noise and Poisson shot noise are introduced into the pixelated Traj_i,j to generate single-particle motion-blur images at different signal-to-noise ratios and background levels.
Meanwhile, the region containing more than 95% of the signal intensity is cropped from the pixelated Traj_i,j and used as the mask, giving the mask image. A second training sample set can therefore be constructed from the single-particle motion-blur images and their corresponding mask images.
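A minimal sketch of generating one such (motion-blur image, mask) training pair is given below. A plain Brownian-motion simulator stands in for simSPT, and all parameter values (pixel size, PSF width, noise levels, the 95% mask threshold) are illustrative:

```python
# Minimal sketch of generating one (motion-blur image, mask) training pair.
# A plain Brownian-motion simulator stands in for simSPT; all parameter values
# are illustrative assumptions.
import numpy as np

def simulate_pair(d_um2_s=1.0, exposure_s=0.03, n_sub=300,
                  img_px=32, px_um=0.11, psf_sigma_um=0.12,
                  signal=2000.0, background=1500.0, read_noise=30.0,
                  rng=np.random.default_rng()):
    dt = exposure_s / n_sub
    steps = rng.normal(0.0, np.sqrt(2.0 * d_um2_s * dt), size=(n_sub, 2))
    traj = np.cumsum(steps, axis=0) + img_px * px_um / 2.0     # start near the image centre

    # Superimpose a Gaussian on every trajectory point (motion-blur function),
    # evaluated directly on the camera pixel grid (pixelation).
    y, x = np.mgrid[0:img_px, 0:img_px] * px_um
    blur = np.zeros((img_px, img_px))
    for px_pos, py_pos in traj:
        blur += np.exp(-((x - px_pos) ** 2 + (y - py_pos) ** 2) / (2 * psf_sigma_um ** 2))
    blur /= blur.sum()                                          # normalisation

    # Poisson shot noise on signal + background, plus Gaussian (white) read noise.
    image = rng.poisson(blur * signal + background).astype(float)
    image += rng.normal(0.0, read_noise, image.shape)

    # Mask: smallest set of bright pixels containing >95 % of the blur signal.
    order = np.argsort(blur.ravel())[::-1]
    cumulative = np.cumsum(blur.ravel()[order])
    mask = np.zeros(blur.size, dtype=bool)
    mask[order[:np.searchsorted(cumulative, 0.95) + 1]] = True
    return image, mask.reshape(blur.shape)
```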
The second training sample set is then used to train and optimize the U-Net network. Specifically, during training, the single-particle motion-blur image is used as the model input, the predicted mask image is used as the model output, and the difference between the predicted mask image and the (ground-truth) mask image is used as the training loss; the U-Net network is iteratively optimized to obtain the trained U-Net network.
It is worth mentioning that, during training, in order to increase the density of segmented particles, this embodiment places four single-particle motion-blur signals in one 32x32-pixel image (pixel size 110 nm) and uses a weighted cross-entropy loss function, forcing the U-Net network to learn the separation boundaries between single-particle motion blurs.
Specifically, the weight map of the U-Net network can be calculated by the following equation (1):

$$w(\mathbf{x}) = w_c(\mathbf{x}) + w_0 \cdot \exp\!\left(-\frac{\left(d_1(\mathbf{x}) + d_2(\mathbf{x})\right)^2}{2\sigma^2}\right) \qquad (1)$$

In equation (1), $w_c(\mathbf{x})$ is the weight map that balances the class frequencies (i.e. the ratio of pixel counts between regions with and without detected motion blur), $d_1(\mathbf{x})$ is the distance to the nearest motion-blur boundary, and $d_2(\mathbf{x})$ is the distance to the second-nearest motion-blur boundary; in the experiments, $w_0$ and $\sigma$ (in pixels) are set to fixed values.
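A minimal NumPy sketch of the boundary term in equation (1) is shown below; it computes, for every pixel, the distances d1 and d2 to the two nearest motion-blur instances in a labeled mask, while w_c, w_0 and σ are treated as given constants:

```python
# Minimal sketch of the boundary-weight term in equation (1). The class-frequency
# term w_c and the constants w0 and sigma are treated as given.
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_weight(labels, w0, sigma):
    """labels: 2-D int array, 0 = background, 1..K = motion-blur instances."""
    instance_ids = [k for k in np.unique(labels) if k != 0]
    if len(instance_ids) < 2:
        return np.zeros(labels.shape)
    # Distance from every pixel to each instance (distance transform of its complement).
    dists = np.stack([distance_transform_edt(labels != k) for k in instance_ids])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]        # nearest and second-nearest instance distances
    return w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
```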
Thus, in this embodiment, the maximum detection density that the U-Net network can achieve is about 0.5 motion-blur particle signals per μm² per frame, or 16.5 motion-blur particle signals per μm² per second (equivalent to placing 4 motion blurs in a 32x32-pixel area with a pixel size of 110 nm), which is comparable to the localization density of classical live-cell PALM (photoactivated localization microscopy) and more than 12 times higher than the localization density of fast-blinking single-particle tracking.
In a specific embodiment, to train the U-Net network, 10,320 pairs of single-particle motion-blur images and mask images were simulated with an exposure time of 30 ms, covering 12 different diffusion coefficients D (ranging from 0.01 μm²/s to 10 μm²/s). Under each diffusion coefficient D, single-particle motion-blur images are randomly generated to simulate actual imaging data, with signal-to-noise ratios between 19 and 35 dB and background levels between 1000 and 3000. A dropout layer and random elastic deformations (rotation, translation and flipping) are included in the data augmentation to improve the generalization ability of the trained network.
After the U-Net network is trained, it is used to segment the pixels containing individual particles to ultimately acquire the target input image.
Specifically, an original single-particle motion-blur image of the target single particle is first acquired, for example by total internal reflection fluorescence (TIRF) microscopy or highly inclined and laminated optical sheet (HILO) illumination.
And then, inputting the acquired original single-particle motion blurred image into a trained U-Net network to generate a single-particle signal mask covering the single-particle signal.
Then, to remove masks covering multiple single-particle signals, ThunderSTORM is used to pre-localize the single-particle signals in the original single-particle motion-blur image, and only the single-particle signal masks containing exactly one localization, i.e. the target single-particle signal masks, are retained.
Next, the background and noise levels of each pixel in the target single particle signal mask within the previous and subsequent 250 frames are calculated using a median filter, and thus, the target single particle signal mask is filled with the calculated background and noise levels, thereby obtaining a target input image.
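A minimal sketch of this filling step is given below; the ±250-frame window comes from the description, while the robust noise estimate and the Gaussian fill model are simplifying assumptions:

```python
# Minimal sketch of the mask-filling step: estimate per-pixel background from a
# temporal median over +/-250 frames and fill masked pixels with that background
# plus noise at the locally estimated level. Names and the noise model are
# illustrative assumptions.
import numpy as np

def fill_masked_pixels(stack, frame_idx, mask, half_window=250,
                       rng=np.random.default_rng()):
    """stack: (T, H, W) image sequence; mask: boolean (H, W) pixels to replace."""
    lo = max(0, frame_idx - half_window)
    hi = min(stack.shape[0], frame_idx + half_window + 1)
    window = stack[lo:hi]
    background = np.median(window, axis=0)                                   # per-pixel background
    noise_level = np.median(np.abs(window - background), axis=0) * 1.4826    # robust sigma estimate
    filled = stack[frame_idx].astype(float).copy()
    filled[mask] = background[mask] + rng.normal(0.0, 1.0, mask.sum()) * noise_level[mask]
    return filled
```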
In this embodiment, by acquiring an original single-particle motion blur image of a target single particle, segmenting the original single-particle motion blur image based on a pre-trained U-Net network to obtain a single-particle signal mask, further pre-positioning single-particle signals in the original single-particle motion blur image, reserving a target single-particle signal mask only comprising one positioning, and filling the target single-particle signal mask with a background and a noise level calculated by a median filter to obtain a target input image, so that a later-stage particle track prediction model can predict a particle pseudo track image with higher accuracy.
Further, based on the above embodiments, a training optimization process for the particle trajectory prediction model will be described in detail below.
It will be appreciated that the optimized particle trajectory prediction model needs to be trained before the particle pseudo-trajectory image is predicted using the particle trajectory prediction model.
The particle track prediction model is trained and optimized by simulating a single particle motion blur image, acquiring a particle motion track image corresponding to the single particle motion blur image, constructing and obtaining a first training sample set, taking the single particle motion blur image as a model input, taking a prediction pseudo track image as a model output, taking the difference between the prediction pseudo track image and the particle motion track image as a training loss, and carrying out iterative optimization on the particle track prediction model to obtain a particle track prediction model with training convergence.
Fig. 4 shows a schematic construction diagram of the first training sample set according to an embodiment of the present invention. As shown in fig. 4, a two-dimensional particle trajectory (2D particle trajectory) is first simulated with the simSPT algorithm, and a Gaussian function is superimposed on each point of the two-dimensional particle trajectory to obtain the motion-blur function Traj_i,j.
The motion-blur function is then normalized and pixelated to match the actual camera pixel size (e.g. 110 nm), giving the pixelated Traj_i,j, after which Gaussian white noise and Poisson shot noise are introduced into the pixelated Traj_i,j to generate single-particle motion-blur images at different signal-to-noise ratios and background levels. The single-particle motion-blur image size is 32x32 pixels.
In order to pair the particle motion trajectory with the single-particle motion-blur image for subsequent training, the Bresenham line algorithm is first applied to the simulated two-dimensional particle trajectory to generate a pixelated image 320x320 pixels wide, and this pixelated image is then convolved with a Gaussian kernel of the same pixel width to generate the final track image, i.e. the particle motion-track image.
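A minimal sketch of this rasterization step follows; skimage/scipy stand in for whatever rasterizer was actually used, and the pixel size and blur width are illustrative:

```python
# Minimal sketch of rasterising a simulated 2-D trajectory into the 320x320
# ground-truth track image: Bresenham lines between consecutive points,
# followed by Gaussian blurring.
import numpy as np
from skimage.draw import line
from scipy.ndimage import gaussian_filter

def trajectory_image(traj_um, img_px=320, px_um=0.011, sigma_px=1.0):
    """traj_um: (N, 2) trajectory positions in micrometres."""
    canvas = np.zeros((img_px, img_px))
    pts = np.clip(np.round(traj_um / px_um).astype(int), 0, img_px - 1)
    for (r0, c0), (r1, c1) in zip(pts[:-1], pts[1:]):
        rr, cc = line(r0, c0, r1, c1)        # Bresenham line between consecutive points
        canvas[rr, cc] = 1.0
    return gaussian_filter(canvas, sigma_px)  # convolve with a Gaussian kernel
```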
Therefore, a first training sample set can be constructed according to the single-particle motion blur image and the particle motion trail image.
Then, the particle track prediction model is iteratively optimized by using the first training sample set, and specifically, fig. 5 shows a training optimization schematic diagram of the particle track prediction model provided by the embodiment of the present invention.
As can be seen from fig. 5, in training, a single-particle motion blurred image (pixel size 110 nm) of 32×32 pixels with noise is first enlarged to 320×320 pixels (pixel size 11 nm) using nearest neighbor interpolation as an input image in training.
The input image is encoded by three convolution feature extraction layers (each containing a3 x 3 convolution kernel, output channels 32, 64, 128 and 512 respectively), followed by batch normalization, reLU activation function and max pooling layer (2 x2, step size 2). Then, three bilinear upsampling layers (interpolation with 2-fold upsampling) and convolutional feature extraction layers (each layer with a3 x 3 convolutional kernel, output channels 128, 64, 32, followed by batch normalization and ReLU activation functions) are run to upsample and decode features layer by layer. Finally, the 32-channel feature map is converted into a single-channel density prediction image using a1×1 convolution layer without an activation function, whereby a predicted pseudo-trajectory image (image size 320×320, pixel size 11 nm) predicted by the particle trajectory prediction model is obtained from the single-particle motion blur image.
Then, the difference between the predicted pseudo-track image and the real track image (namely the particle motion track image in the first training sample set) is measured by the sum of the mean absolute error loss function (L1 loss) and the mean squared error loss function (MSE), and the weight parameters of the particle track prediction model are optimized by continuous iteration.
In a specific example, the exposure time was 30 milliseconds, covering a broad distribution of protein diffusion coefficients in living cells (a total of 27 diffusion coefficients, exponentially increasing from 0.001 μm²/s to 31.62 μm²/s). Under each diffusion coefficient D, 3000 single-particle motion blurred images were randomly generated to simulate actual imaging data, with signal-to-noise ratios between 19 and 35 dB and background levels between 1000 and 3000. The total number of image pairs (single-particle motion blur image and particle motion trajectory image) in the training sample set is 81000. During network training, the sum of the mean absolute error and the mean squared error is used as the loss function to quantify and reduce the difference between the predicted pseudo-trajectory image and the particle motion trajectory image.
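A minimal sketch of the corresponding training step is given below; the nearest-neighbor upsampling of the 32×32 input and the L1 + MSE loss follow the text, while the optimizer, learning rate and data-loading names are illustrative assumptions.

```python
# Minimal sketch of one training step with the L1 + MSE objective described above.
import torch
import torch.nn.functional as F

def train_step(model, optimizer, blurred_32, track_gt_320):
    # upsample the noisy 32x32 input to 320x320 with nearest-neighbour interpolation
    x = F.interpolate(blurred_32, scale_factor=10, mode="nearest")
    optimizer.zero_grad()
    pred = model(x)                                   # predicted pseudo-trajectory image
    loss = F.l1_loss(pred, track_gt_320) + F.mse_loss(pred, track_gt_320)
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage with the sketch model above:
# model = TrajectoryPredictor()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# for blurred_32, track_gt_320 in loader:            # simulated image pairs
#     train_step(model, optimizer, blurred_32, track_gt_320)
```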
After the particle track prediction model is trained and optimized, in practical application, the obtained target input image is directly input into the particle track prediction model, and a corresponding accurate particle pseudo-track image can be predicted and obtained.
In this embodiment, a particle track prediction model is first trained and optimized; a particle pseudo-track image is then predicted from the target input image based on this model, and the diffusion quantization characteristic of the target single particle is further calculated from the particle pseudo-track image, the particle track prediction model being obtained by training and optimization on a first training sample set formed from single-particle motion blur images and the corresponding particle motion track images. Because the method predicts the actual motion path of the target single particle from the target input image through the particle track prediction model and calculates the diffusion quantization characteristic on that basis, the diffusion quantization characteristics of a large number of single particles can be predicted rapidly and accurately in a high labeling-density environment, phototoxicity is significantly reduced, the method is suitable for live-cell imaging of at least 10 minutes, the obtained diffusion coefficients have a high dynamic range, the diffusion characteristics of different biological particles are effectively distinguished, and a powerful tool is provided for research on particle dynamics in living cells.
Further, on the basis of the above-described embodiment, a detailed description will be given below with respect to a process of calculating a diffusion coefficient from a particle pseudo-trajectory image.
The method comprises the steps of determining a pseudo track area according to a particle pseudo track image, fitting according to a quantitative relation between the pseudo track area and the particle diffusion coefficient to obtain a diffusion coefficient corresponding to the target single particle, and fitting according to density space distribution of the particle pseudo track image to obtain a diffusion direction corresponding to the target single particle.
It will be appreciated that D is estimated from the particle pseudo-trajectory image under the assumption that the target single particle undergoes Brownian motion. The probability of finding the target single particle at a displacement r after a time interval Δτ can be described by the following equation (2).

P(r, t) = (1/(4πDt)) · exp(−|r|²/(4Dt))    (2)

In equation (2), r represents the displacement of the target single particle from the origin at time t, P(r, t) is the probability density of this displacement, and D is the diffusion coefficient of the target single particle.
In single-particle tracking, the trajectory displacement can be obtained and equation (2) fitted to extract the diffusion coefficient D. However, the particle pseudo-trajectory image predicted by the particle trajectory prediction model is a spatial projection of the trajectory probability and carries no temporal order. Therefore, the diffusion coefficient D is instead derived from the travel (trajectory) distance of the target single particle rather than from its displacement.
By integrating equation (2) over directions, the probability density of the jump distance r of the target single particle within a unit time interval τ can be derived, as shown in equation (3) below.

p(r) = (r/(2Dτ)) · exp(−r²/(4Dτ))    (3)
Over the exposure time T of the entire track, the total distance of the particle track can be approximated as the sum of N discrete jump lengths, each over a time interval τ = T/N. The probability density of each jump length is given by equation (3). Assuming each step is an independent, identically distributed event, and since the diffusion coefficient D should be related to the square of the total motion distance R of the particle, the expectation of R² can be written as equation (4) below.

E[R²] = E[(Σ_{i=1}^{N} r_i)²]    (4)
After decomposing the total motion distance R into N consecutive jumps, equation (4) can be rewritten as equation (5) below.

E[R²] = Σ_{i=1}^{N} E[r_i²] + Σ_{i≠j} E[r_i]·E[r_j] = N·E[r²] + N(N−1)·E[r]²    (5)
From equation (3), the expected jump length is E[r] = √(πDτ) and the expected squared jump length is E[r²] = 4Dτ. Substituting these into equation (5) yields equation (6) below.

E[R²] = 4NDτ + N(N−1)·πDτ    (6)
In order to simplify the problem, the present embodiment assumes that the particle motion trajectory rarely overlaps with itself, so the particle pseudo-trajectory image predicted by the particle trajectory prediction model can be regarded as the trajectory convolved with a narrow width w. This enables the area covered by the particle pseudo-trajectory image (the pseudo-track area, or PT area for short) to be estimated, as shown in equation (7) below.

S_PT ≈ w·R    (7)

In equation (7), R represents the total track distance of the target single particle and w is the effective track width.
Since w represents the uncertainty of the particle trajectory prediction model in predicting the particle trajectory (a constant after training), the present embodiment obtains the quantitative relationship between the square of the pseudo-track area and the particle diffusion coefficient D by expressing R in equation (7) through equation (6), as shown in equation (8) below.

E[S_PT²] = k·D    (8)

wherein k = w²·[4Nτ + N(N−1)πτ] is a constant determined by the track width w, the number of jumps N and the time interval τ = T/N.
Fig. 6 shows a schematic view of obtaining a diffusion coefficient of a target single particle according to an embodiment of the present invention. As shown in fig. 6, when calculating the diffusion coefficient of the target single particle, the numerical range of the particle pseudo-track image is linearly adjusted to 0 to 1, the pseudo-track area is defined as the pixel area exceeding 0.1, and then the diffusion coefficient corresponding to the target single particle can be obtained by fitting through equation (8).
In a specific embodiment, 1700 single-particle motion blur images at a high signal-to-noise ratio (35 dB) were simulated, with diffusion coefficients D ranging from 0.1 μm²/s to 31.62 μm²/s, and FIG. 7 shows a schematic diagram of the fitting effect of the quantitative relationship between pseudo-track area and particle diffusion coefficient provided by an embodiment of the present invention. As can be seen from fig. 7, by fitting the pseudo-track area and the diffusion coefficient D using equation (8), an excellent coefficient of determination (R² > 0.9) is obtained.
Thus, equation (8) fitted at high signal-to-noise ratio can be used to convert the particle pseudo-trajectory image predicted by the particle trajectory prediction model into a diffusion coefficient of a single particle motion blur image.
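A minimal sketch of this conversion is given below: the predicted image is rescaled to [0, 1], the pseudo-track area is taken as the pixels above 0.1, and the calibrated relation of equation (8) is inverted. The calibration routine and the 11 nm pixel-size constant are assumptions for illustration, not the exact fitting procedure of the embodiment.

```python
# Minimal sketch of converting a predicted pseudo-trajectory image into a diffusion
# coefficient via the pseudo-track area and equation (8).
import numpy as np

PIXEL_SIZE_UM = 0.011                    # 11 nm pixels in the 320 x 320 prediction (assumed)

def pseudo_track_area(pred_img, thresh=0.1):
    lo, hi = pred_img.min(), pred_img.max()
    img = (pred_img - lo) / (hi - lo + 1e-12)                   # rescale to [0, 1]
    return np.count_nonzero(img > thresh) * PIXEL_SIZE_UM ** 2  # area in um^2

def calibrate_k(areas_um2, known_D):
    # least-squares fit of area^2 = k * D on simulated high-SNR images
    return float(np.sum(np.square(areas_um2) * known_D) / np.sum(np.square(known_D)))

def diffusion_coefficient(pred_img, k):
    return pseudo_track_area(pred_img) ** 2 / k                 # invert equation (8)
```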
In this embodiment, the pseudo-track area is determined from the particle pseudo-track image, and the diffusion coefficient corresponding to the target single particle is obtained by fitting the quantitative relationship between the pseudo-track area and the particle diffusion coefficient. Because the method predicts the actual motion path of the target single particle from the target input image through the particle track prediction model and calculates the diffusion coefficient on that basis, the diffusion coefficients of a large number of single particles can be predicted rapidly and accurately in a high labeling-density environment, phototoxicity is significantly reduced, the method is suitable for live-cell imaging of at least 10 minutes, the obtained diffusion coefficients have a high dynamic range, the diffusion characteristics of different biological particles are effectively distinguished, and a powerful tool is provided for research on particle dynamics in living cells.
Further, on the basis of the above-described embodiments, a detailed description will be given below regarding the procedure of particle positioning.
After the diffusion coefficient of the target single particle is calculated, the localization of the target single particle is obtained by centroid calculation on the particle pseudo-track image when the diffusion coefficient of the target single particle is greater than a set threshold, and by ellipsoidal Gaussian fitting on the target input image when the diffusion coefficient of the target single particle is less than or equal to the set threshold.
It can be understood that in the present embodiment, a series of particle motion trajectories and the corresponding single-particle motion blur images are first simulated, covering different diffusion coefficients D and signal-to-noise ratios; see fig. 8, which shows a schematic diagram of the positioning accuracy of different particle positioning algorithms provided by an embodiment of the present invention. When the diffusion coefficient D exceeds 1 μm²/s, applying the centroid (center-of-gravity) algorithm to the particle pseudo-trajectory image shows higher positioning accuracy than positioning the particles with an elliptical Gaussian function (positioning accuracy is evaluated by comparing the deviation of the particle localization obtained by the two methods from the geometric center of the actual particle motion trajectory).
Notably, for fast-diffusing particles with a diffusion coefficient D exceeding 10 μm²/s, the centroid algorithm achieves an accuracy of 20 nm, whereas the Gaussian fit achieves 85 nm.
In contrast, for slow-diffusing particles with a diffusion coefficient D below 1 μm²/s, positioning the particles with an elliptical Gaussian function performs better than the centroid algorithm. Therefore, for downstream analysis, the present embodiment locates the particle position using the weighted centroid of the particle pseudo-trajectory image when the estimated diffusion coefficient D exceeds 1 μm²/s, and otherwise uses an elliptical Gaussian fit to the preprocessed original motion-blurred image (the target input image).
That is, the diffusion coefficient of the target single particle determines the manner in which the positioning of the target single particle is obtained.
Therefore, in this embodiment, the diffusion coefficient of the target single particle is compared with the set threshold. For a target single particle whose diffusion coefficient is greater than the set threshold, the centroid is calculated from the particle pseudo-trajectory image predicted by the particle trajectory prediction model, and the position of the centroid is used as the localization of the target single particle; for a target single particle whose diffusion coefficient is less than or equal to the set threshold, ellipsoidal Gaussian fitting is performed on the target input image, and the fitted position is used as the localization of the target single particle.
The set threshold may be chosen according to actual requirements and is not specifically limited herein. For example, in one particular embodiment, the threshold is set to 1 μm²/s.
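The localization rule just described can be sketched as follows; the weighted-centroid and elliptical-Gaussian-fit routines and their initial guesses are illustrative assumptions, not the exact fitting procedure of the embodiment.

```python
# Minimal sketch of the localization rule: weighted centroid of the pseudo-trajectory
# image for fast particles, elliptical Gaussian fit of the target input image otherwise.
import numpy as np
from scipy.optimize import curve_fit

def weighted_centroid(img):
    ys, xs = np.indices(img.shape)
    w = img.sum()
    return xs.ravel() @ img.ravel() / w, ys.ravel() @ img.ravel() / w

def elliptical_gaussian_fit(img):
    ys, xs = np.indices(img.shape)
    def model(xy, a, x0, y0, sx, sy, b):
        x, y = xy
        return (a * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                             + (y - y0) ** 2 / (2 * sy ** 2))) + b).ravel()
    p0 = [img.max() - img.min(), img.shape[1] / 2, img.shape[0] / 2, 2.0, 2.0, img.min()]
    popt, _ = curve_fit(model, (xs, ys), img.ravel(), p0=p0, maxfev=5000)
    return popt[1], popt[2]                        # fitted (x0, y0)

def localize(pred_img, target_input_img, D, threshold=1.0):
    if D > threshold:                              # fast diffusers: centroid of pseudo-track
        return weighted_centroid(pred_img)
    return elliptical_gaussian_fit(target_input_img)   # slow diffusers: Gaussian fit
```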
In this embodiment, the centroid of the particle pseudo-trajectory image is calculated to obtain the localization of the target single particle when its diffusion coefficient is greater than the set threshold, and ellipsoidal Gaussian fitting is performed on the target input image to obtain the localization when the diffusion coefficient is less than or equal to the set threshold. The method can capture the spatial localization and the diffusion coefficient distribution of particles simultaneously, can rapidly and accurately predict the diffusion coefficients of a large number of single particles in a high labeling-density environment, significantly reduces phototoxicity, is suitable for live-cell imaging of at least 10 minutes, and the obtained diffusion coefficients have a high dynamic range, effectively distinguishing the diffusion characteristics of different biological particles and providing a powerful tool for research on particle dynamics in living cells.
Further, on the basis of the above-described embodiments, a detailed description will be developed below for the process of image rendering.
The method comprises the steps of obtaining the localization of the target single particles, gridding the localization of the target single particles to obtain a first grid containing particle localizations and a second grid which contains no particle localization but is adjacent to the particle localizations, generating a particle density probability map based on the localization of the target single particles, interpolating the second grid using a Gaussian weight sum based on the particle density probability map and the first grid to obtain the diffusion coefficients corresponding to the second grid, obtaining a diffusion coefficient matrix from the diffusion coefficients corresponding to the first grid and the second grid, locally smoothing the diffusion coefficient matrix to obtain a mobility map, and displaying with an HSV colormap to complete the image rendering.
According to the above embodiments, the localization and the diffusion coefficient D of the target single particle can be obtained. In order to intuitively represent the spatial distribution of the particles and the correlation between their spatial organization and diffusivity, this embodiment develops an image rendering algorithm: from the same dataset, a particle density probability map (PALM), a mobility map, and a particle diffusion density map (MPALM) fusing particle density and diffusivity can be generated.
Specifically, the localizations of a total of M_total particles are first binned and summed on a square pixel grid of a given size to obtain I_count. The number of localizations M_total and the rendered pixel size are chosen to achieve the desired spatial resolution at the Nyquist sampling frequency.
Then, I_count is convolved with a Gaussian kernel (σ_{x,y} equal to the average localization error) to generate the PALM image I_density, visualized using a hot-red colormap.
Next, the localizations of the M_total particles are again gridded in a manner similar to I_count, and the average diffusion coefficient is calculated for each occupied grid cell. For a second grid cell (D_null) that contains no particle localization but is adjacent to a first grid cell (D_raw) with particle localizations, the second grid cell is interpolated using a Gaussian weight sum; the interpolation formula can be seen in equation (9) below.

D_null(x₀, y₀) = Σ_i D_raw(x_i, y_i)·exp(−[(x_i−x₀)² + (y_i−y₀)²]/(2σ²)) / Σ_i exp(−[(x_i−x₀)² + (y_i−y₀)²]/(2σ²))    (9)

In equation (9), D_null(x₀, y₀) is the interpolated diffusion coefficient of the second grid cell, (x_i, y_i) are the pixel coordinates of the first grid cells with particle localizations, (x₀, y₀) are the pixel coordinates of the second grid cell that contains no localization but abuts the first grid, and σ is the width of the Gaussian weight. The resulting matrix after interpolation (i.e., the diffusion coefficient matrix) is labeled I_D. Subsequently, the diffusion coefficient matrix I_D is locally smoothed to obtain the mobility map I_D^smoothed, as in equation (10) below.

I_D^smoothed = I_D ∗ K    (10)

where K is a local smoothing kernel.
To fuse the information of the particle localization density and the diffusion coefficient D, this embodiment selects an HSV colormap to generate the MPALM image. The values of the mobility map are mapped to the Hue channel, the PALM image I_density is mapped to the Value channel, and the Saturation channel is set to 0 where the particle density is zero and to 1 otherwise.
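A minimal sketch of this HSV fusion is shown below; the logarithmic hue normalization range is an assumption chosen to accommodate the wide dynamic range of D, not a value stated in the embodiment.

```python
# Minimal sketch of fusing the PALM density map and the mobility map into an MPALM image.
import numpy as np
from matplotlib.colors import hsv_to_rgb

def render_mpalm(density, mobility, d_min=0.01, d_max=10.0):
    """density: PALM image I_density; mobility: smoothed diffusion-coefficient map (um^2/s)."""
    hue = (np.log10(np.clip(mobility, d_min, d_max)) - np.log10(d_min)) / \
          (np.log10(d_max) - np.log10(d_min))              # diffusion coefficient -> Hue
    value = density / (density.max() + 1e-12)               # localization density -> Value
    saturation = (density > 0).astype(float)                # 0 where no particles, else 1
    return hsv_to_rgb(np.dstack([hue, saturation, value]))  # (H, W, 3) RGB image
```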
As a result of image rendering, refer to fig. 9, fig. 9 shows a schematic rendering diagram of particle positioning and particle diffusion coefficient according to an embodiment of the present invention. Fig. 9 includes a particle density probability map (PALM), a diffusion coefficient map (Mobility map), and a particle diffusion density map (MPALM).
In this embodiment, by performing image rendering on the positioning and diffusion coefficients of the target single particles, a particle density probability map, a diffusion coefficient map, and a particle diffusion density map are obtained, so that spatial distribution and correlation between the spatial organization and diffusivity of the particles can be more intuitively represented.
In addition, taking the diffusion coefficient and spatial positioning of single particles as an example, fig. 10 shows an overall flow chart of the single particle diffusion quantization characteristic prediction method according to the embodiment of the present invention.
As shown in fig. 10, an original single-particle motion blurred image of the target single particle (i.e., the single-particle motion blurred signal in fig. 10) is first acquired, and then the original single-particle motion blurred image is segmented by using a pre-trained U-Net network, so as to obtain a single-particle signal mask.
Next, the single-particle signals in the original single-particle motion blurred image are pre-localized using ThunderSTORM, retaining a target single-particle signal mask that contains only one localization.
Further, the background and noise level calculated by the median filter are used for filling the target single particle signal mask, and a target input image is obtained.
And then, inputting the target input image into a pre-trained particle track prediction model, and predicting to obtain an output particle pseudo-track image.
Then, calculating the pseudo track area based on the particle pseudo track image, and fitting to obtain a diffusion coefficient corresponding to the target single particle according to the quantitative relation between the pseudo track area and the particle diffusion coefficient.
Finally, the localization of the target single particle is obtained: by centroid calculation on the particle pseudo-track image when the diffusion coefficient of the target single particle is greater than the set threshold, or by ellipsoidal Gaussian fitting on the target input image when the diffusion coefficient is less than or equal to the set threshold.
Based on the above, the particle space distribution and diffusion coefficient distribution of the target single particles can be captured and obtained. It should be further noted that the above steps are described in detail in the above embodiments, and are not described herein.
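Tying the steps of fig. 10 together, the following sketch shows how the pieces might be chained. The U-Net segmentation and the ThunderSTORM-style pre-localization are represented by hypothetical placeholders unet_mask_fn and preloc_fn, the median-filter settings are assumptions, and the sketch reuses diffusion_coefficient and localize from the code above.

```python
# Minimal end-to-end sketch of the pipeline in Fig. 10; unet_mask_fn and preloc_fn
# are hypothetical stand-ins for the pre-trained U-Net and the pre-localization step.
import numpy as np
from scipy.ndimage import median_filter

def build_target_input(raw_img, mask, med_size=9):
    background = median_filter(raw_img, size=med_size)      # background + noise estimate
    filled = np.where(mask, raw_img, background)             # keep only the target particle
    return filled.astype(np.float32)

def predict_single_particle(raw_img, unet_mask_fn, preloc_fn, traj_model, k, threshold=1.0):
    mask = unet_mask_fn(raw_img)                              # single-particle signal mask
    target_mask = preloc_fn(raw_img, mask)                    # mask with one localization
    target_input = build_target_input(raw_img, target_mask)
    pred = traj_model(target_input)                           # particle pseudo-trajectory image
    D = diffusion_coefficient(pred, k)                        # from the equation (8) sketch
    xy = localize(pred, target_input, D, threshold)           # from the localization sketch
    return D, xy
```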
It is worth mentioning that, to verify the robustness of the single-particle diffusion quantization characteristic prediction method provided by the embodiment of the invention, single-particle motion blur images generated from a series of particle motion trajectories with different diffusion coefficients were simulated under different signal-to-noise ratios, and the pseudo-track areas obtained for the same particle motion trajectory under different signal-to-noise ratios were then compared. The results show only small fluctuations, indicating the robustness of the particle trajectory prediction model to noise.
Meanwhile, the robustness of the U-Net network segmentation under different signal-to-noise ratios was tested, and it was found that the area of the single-particle signal mask generated from the same particle motion track remains stable under different signal-to-noise ratios. Thus, the particle trajectory prediction model and the U-Net network both exhibit strong robustness to different signal-to-noise ratios, a feature that helps mitigate errors caused by signal-to-noise fluctuations during imaging, as such errors may lead to fluctuations in the estimated diffusion coefficient D.
The test results of the particle track prediction model and the U-Net network can be specifically seen in fig. 11, and fig. 11 shows a schematic diagram of the inference effects of the particle track prediction model and the U-Net network provided by the embodiment of the invention.
In some embodiments, the detailed description is also developed for the process of cell sample preparation and image data acquisition.
The target protein (i.e., the target single particle) is first labeled with a suitable fluorescent tag by attaching a HaloTag or SNAP-tag to its N-terminus or C-terminus. The protein of interest is then expressed in cells and stained by incubation with a high-quantum-yield fluorescent dye (e.g., PA-JF646, HM-SiR) modified with the tag-protein ligand. Using a total internal reflection fluorescence microscope, in total internal reflection mode or highly inclined light-sheet mode, a suitable exposure time (e.g., 30 ms) and imaging particle density (no overlap between most adjacent particles observed by eye, or a particle density below 16.5 motion-blurred particle signals/μm²/s) are set, and motion-blurred images of single particles are acquired.
In single-channel MPALM (particle diffusion density map) imaging, one whole-field image (a wide-field image rather than a single-particle image) together with 2000 consecutively acquired MPALM frames forms one cycle. This cycle is repeated 5 to 20 times depending on cell viability and particle photobleaching. During each MPALM frame, the excitation light (642 nm) continuously illuminates the dye for a camera exposure time of 30 milliseconds. When a photoactivatable dye is used (e.g., PA-JF646), 405 nm laser light is pulsed within the 0.5 ms dead time of the camera to activate the dye. For each whole-field image, the excitation laser (560 nm) and exposure time are adjusted to achieve the desired signal-to-noise ratio.
In dual-channel MPALM-PALM imaging, the 560 nm and 642 nm lasers are operated simultaneously within the 30 ms camera exposure time. Pulsed 405 nm laser illumination within the 0.5 ms camera dead time activates the photoconvertible protein of the PALM channel, e.g., mEosEM. Two cameras are used to detect the signals of the two channels simultaneously. Beyond this acquisition scheme, MPALM-MPALM dual-channel imaging can also be configured as required.
In some embodiments, the detailed description is developed for the image registration process at the time of dual channel MPALM-PALM imaging.
For two-color acquisition (e.g., two-color MPALM-PALM), because the camera pixel sizes differ between the two channels, the image of one channel needs to be registered to match the coordinates of the image of the other channel; see fig. 12, which shows a schematic diagram of the image registration process provided by an embodiment of the present invention.
The target image is defined as the MPALM channel with a pixel size of 110 nanometers, while the moving image is defined as the PALM channel with a pixel size of 160 nanometers. The task is to find a geometric transformation (including rotation, scaling and translation) that, applied to the moving image, brings it into the same spatial coordinates as the target image. Prior to imaging each cell sample, 100 nm TetraSpeck fluorescent microspheres (Invitrogen, T7279) are imaged and used to estimate the transformation matrix for two-channel image registration.
Initially, a transformation matrix is estimated using a phase correlation between a moving image and a target image. This estimation enables the conversion of the moving image into an initial registration image. Next, the microspheres are positioned in the target image, the moving image, and the initial registration image using a two-dimensional gaussian fit.
And matching the microsphere positioning in the target image with the microsphere positioning in the initial registration image. Then, the microsphere positioning in the initial registration image is reversed and paired with the moving image. This method facilitates pairing of microsphere positioning between the target image and the moving image using the positioning of the initial registration image as an intermediary guide.
Subsequently, the estimate of the transformation matrix is updated using the paired microsphere localizations as control points, and the transformation matrix is then applied to align the localizations of the two-channel images. When the two-channel fluorescent microsphere images are aligned, the method achieves an average relative localization distance of 10 nm for the same microsphere after image registration.
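The registration procedure can be sketched with scikit-image as follows; bead detection is assumed to be done separately, and the nearest-neighbor pairing shown here is a simplification of the intermediary-guided pairing described above.

```python
# Minimal sketch of bead-based two-channel registration: phase-correlation initial
# shift, bead pairing, then a similarity transform fitted from paired control points.
import numpy as np
from skimage.registration import phase_cross_correlation
from skimage.transform import estimate_transform, warp

def register_channels(moving_img, target_img, beads_moving, beads_target):
    # 1) coarse shift between the two bead images from phase correlation
    shift, _, _ = phase_cross_correlation(target_img, moving_img)
    coarse = beads_moving + shift[::-1]                      # apply (dy, dx) shift to (x, y) beads
    # 2) pair each coarsely-shifted bead with its nearest bead in the target image
    pairs = [int(np.argmin(np.linalg.norm(beads_target - b, axis=1))) for b in coarse]
    # 3) fit rotation + scale + translation from the paired control points
    tform = estimate_transform("similarity", src=beads_moving, dst=beads_target[pairs])
    return tform, warp(moving_img, tform.inverse)            # transform to apply to PALM data
```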
In still other embodiments, the detailed description is developed for sample drift correction.
In single-channel MPALM imaging, sample drift is estimated by analyzing the whole-field images captured between MPALM frames (see the "cell sample preparation and image data acquisition" embodiment for details). Specifically, the peak of the cross-correlation between the whole-field images is calculated, and the deviation of the peak from the image center is used as an estimate of the sample drift. The sample drift over the entire MPALM image sequence is then interpolated by local linear regression, and the localization of each MPALM frame is corrected to the first frame.
In dual channel MPALM-PALM imaging, the PALM image is first aligned to the MPALM image (see the "image registration" embodiment for details). The positioning of the PALM channels is then divided into an appropriate number of bins, typically 2000 frames each. The positioning within each bin is then used to reconstruct a single PALM image. A cross-correlation peak between the reconstructed PALM images is calculated and the deviation of the peak from the center of the image is used as an estimate of the sample drift. The sample drift of the entire PALM image sequence is then interpolated by local linear regression. Finally, a regression model is applied to correct the positioning of each MPALM and PALM frame to align it with the first frame.
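A minimal sketch of the drift estimate is given below; plain linear interpolation stands in for the local linear regression of the text, and the anchor-image bookkeeping is an illustrative assumption.

```python
# Minimal sketch of drift estimation from cross-correlation peaks between anchor images
# (whole-field images or reconstructed PALM images), interpolated onto every frame.
import numpy as np
from scipy.signal import fftconvolve

def drift_between(ref_img, img):
    corr = fftconvolve(img, ref_img[::-1, ::-1], mode="same")    # 2D cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    return np.array(peak) - centre                               # (dy, dx) drift estimate

def drift_per_frame(anchor_imgs, anchor_frames, n_frames):
    drifts = np.array([drift_between(anchor_imgs[0], im) for im in anchor_imgs])
    frames = np.arange(n_frames)
    dy = np.interp(frames, anchor_frames, drifts[:, 0].astype(float))
    dx = np.interp(frames, anchor_frames, drifts[:, 1].astype(float))
    return np.stack([dy, dx], axis=1)   # per-frame drift to subtract, aligning to the first frame
```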
The present invention also provides a single-particle diffusion quantization characteristic prediction device corresponding to the single-particle diffusion quantization characteristic prediction method described in the above embodiments, and in particular, fig. 13 shows a schematic structural diagram of the single-particle diffusion quantization characteristic prediction device provided in the embodiment of the present invention.
As shown in fig. 13, the device comprises a target input image acquisition module 1310, a particle pseudo-track image prediction module 1320 and a particle diffusion quantization characteristic acquisition module 1330, wherein the target input image acquisition module 1310 is used for acquiring a target input image corresponding to a target single particle, the particle pseudo-track image prediction module 1320 is used for predicting a particle pseudo-track image from the target input image based on a pre-trained particle track prediction model, and the particle diffusion quantization characteristic acquisition module 1330 is used for calculating the diffusion quantization characteristic of the target single particle based on the particle pseudo-track image; the particle track prediction model is obtained by training and optimization on a first training sample set formed from single-particle motion blur images and the corresponding particle motion track images.
In this embodiment, a target input image corresponding to a target single particle is acquired by the target input image acquisition module 1310, a particle pseudo-track image is predicted from the target input image by the particle pseudo-track image prediction module 1320 based on a pre-trained particle track prediction model, and the diffusion quantization characteristic of the target single particle is then calculated by the particle diffusion quantization characteristic acquisition module 1330 based on the particle pseudo-track image, the particle track prediction model being obtained by training and optimization on a first training sample set formed from single-particle motion blur images and the corresponding particle motion track images. Because the device predicts the actual motion path of the target single particle from the target input image through the particle track prediction model and calculates the diffusion quantization characteristic on that basis, the diffusion quantization characteristics of a large number of single particles can be predicted rapidly and accurately in a high labeling-density environment, phototoxicity is significantly reduced, the device is suitable for live-cell imaging of at least 10 minutes, the obtained diffusion quantization characteristics have a high dynamic range, the diffusion characteristics of different biological particles are effectively distinguished, and a powerful tool is provided for research on particle dynamics in living cells.
Fig. 14 illustrates a physical schematic diagram of an electronic device, as shown in fig. 14, which may include a processor 1410, a communication interface (Communications Interface) 1420, a memory 1430, and a communication bus 1440, wherein the processor 1410, the communication interface 1420, and the memory 1430 communicate with each other via the communication bus 1440. The processor 1410 may invoke logic instructions in the memory 1430 to perform a single-particle diffusion quantization feature prediction method, where the method includes obtaining a target input image corresponding to a target single particle, predicting a particle pseudo-trajectory image based on a pre-trained particle trajectory prediction model based on the target input image, and calculating a diffusion quantization feature of the target single particle based on the particle pseudo-trajectory image, where the particle trajectory prediction model is obtained by performing training optimization based on a first training sample set formed from a single-particle motion blur image and a particle motion trajectory image corresponding to the single-particle motion blur image.
In addition, the logic instructions in the memory 1430 described above may be implemented in the form of software functional units and may be stored in a computer readable storage medium when sold or used as a stand alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a U disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, an optical disk, or other various media capable of storing program codes.
On the other hand, the invention also provides a non-transitory computer readable storage medium, on which a computer program is stored, the computer program is realized when being executed by a processor to execute the single particle diffusion quantization characteristic prediction method provided by the methods, the method comprises the steps of obtaining a target input image corresponding to a target single particle, predicting and obtaining a particle pseudo-track image according to the target input image based on a pre-trained particle track prediction model, calculating and obtaining the diffusion quantization characteristic of the target single particle based on the particle pseudo-track image, wherein the particle track prediction model is obtained by training and optimizing a first training sample set formed by the single particle motion blur image and the corresponding particle motion track image.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
It should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention, and not for limiting the same, and although the present invention has been described in detail with reference to the above-mentioned embodiments, it should be understood by those skilled in the art that the technical solution described in the above-mentioned embodiments may be modified or some technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the spirit and scope of the technical solution of the embodiments of the present invention.
Claims (10)
1. A single particle diffusion quantization characteristic prediction method, comprising:
acquiring a target input image corresponding to the target single particle;
Based on a pre-trained particle track prediction model, predicting to obtain a particle pseudo-track image according to the target input image;
calculating to obtain the diffusion quantization characteristic of the target single particle based on the particle pseudo-track image;
the particle track prediction model is obtained by training and optimizing a first training sample set formed by a single particle motion blur image and a corresponding particle motion track image.
2. The method of claim 1, wherein the obtaining the target input image corresponding to the target single particle comprises:
collecting an original single-particle motion blurred image of a target single particle;
Dividing the original single-particle motion blur image based on a pre-trained U-Net network to obtain a single-particle signal mask;
Pre-localizing the single-particle signals in the original single-particle motion blur image, and retaining a target single-particle signal mask containing only one localization;
Filling the target single-particle signal mask by using the background and noise level calculated by the median filter to obtain the target input image;
the U-Net network is obtained by training and optimizing a second training sample set formed by the single-particle motion blur image and a mask image corresponding to the single-particle motion blur image.
3. The single particle diffusion quantization signature prediction method according to claim 1, wherein the diffusion quantization signature of the target single particle includes a diffusion coefficient and a diffusion direction;
Correspondingly, the calculating, based on the particle pseudo-trajectory image, the diffusion quantization characteristic of the target single particle includes:
determining a pseudo track area according to the particle pseudo track image;
fitting to obtain a diffusion coefficient corresponding to the target single particle according to the quantitative relation between the pseudo-track area and the particle diffusion coefficient;
and fitting according to the density space distribution of the particle pseudo-track image to obtain the diffusion direction corresponding to the target single particle.
4. The single particle diffusion quantization characteristic prediction method according to claim 1, wherein training an optimized particle trajectory prediction model specifically comprises:
Simulating a single-particle motion blur image, and acquiring a particle motion track image corresponding to the single-particle motion blur image to construct and obtain a first training sample set;
And taking the single particle motion blur image as a model input, taking a predicted pseudo-track image as a model output, taking the difference between the predicted pseudo-track image and the particle motion track image as a training loss, and performing iterative optimization on the particle track prediction model to obtain a particle track prediction model with training convergence.
5. The single particle diffusion quantization characteristic prediction method according to claim 4, wherein the simulating a single-particle motion blurred image comprises:
Simulating a two-dimensional particle track, and superposing a Gaussian function on each point on the two-dimensional particle track to obtain a motion blur function;
And normalizing and pixelating the motion blur function, and introducing Gaussian white noise and Poisson shot noise to obtain single-particle motion blur images at different signal-to-noise ratios and background levels.
6. The single particle diffusion quantization characteristic prediction method according to claim 3, wherein after fitting to obtain the diffusion coefficient corresponding to the target single particle, the method further comprises:
under the condition that the diffusion coefficient of the target single particle is larger than a set threshold value, calculating the mass center of the particle pseudo-track image to obtain the positioning of the target single particle;
And under the condition that the diffusion coefficient of the target single particle is smaller than or equal to a set threshold value, performing ellipsoidal Gaussian fitting on the target input image to obtain the positioning of the target single particle.
7. The single particle diffusion quantization characteristic prediction method according to claim 6, wherein after the localization of the target single particle is obtained, the method further comprises:
Gridding the positioning of the target single particles to obtain a first grid containing particle positioning and a second grid which does not contain particle positioning but is adjacent to the particle positioning;
Generating a particle density probability map based on the positioning of the target single particles;
Interpolation processing is carried out on the second grid by using Gaussian weight sum based on the particle density probability map and the first grid, so that a diffusion coefficient corresponding to the second grid is obtained;
Obtaining a diffusion coefficient matrix according to the diffusion coefficient corresponding to the first grid and the diffusion coefficient corresponding to the second grid;
And carrying out local smoothing on the diffusion coefficient matrix to obtain a mobility map, and displaying by using an HSV color map to finish image rendering.
8. A single particle diffusion quantization characteristic prediction apparatus, comprising:
The target input image acquisition module is used for acquiring a target input image corresponding to the target single particle;
the particle pseudo-track image prediction module is used for predicting and obtaining a particle pseudo-track image according to the target input image based on a pre-trained particle track prediction model;
the particle diffusion quantization characteristic acquisition module is used for calculating the diffusion coefficient of the target single particle based on the particle pseudo-track image;
the particle track prediction model is obtained by training and optimizing a first training sample set formed by a single particle motion blur image and a corresponding particle motion track image.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the single particle diffusion quantization characteristic prediction method of any one of claims 1 to 7 when the computer program is executed.
10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the single particle diffusion quantization characteristic prediction method of any one of claims 1 to 7.