
CN111626936A - Rapid panoramic stitching method and system for microscopic images - Google Patents


Info

Publication number
CN111626936A
CN111626936A
Authority
CN
China
Prior art keywords
image
microscopic image
optical flow
microscopic
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010439003.0A
Other languages
Chinese (zh)
Other versions
CN111626936B (en)
Inventor
谷秀娟
向北海
许会
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Guokezhitong Technology Co ltd
Original Assignee
Hunan Guokezhitong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Guokezhitong Technology Co ltd filed Critical Hunan Guokezhitong Technology Co ltd
Priority to CN202010439003.0A priority Critical patent/CN111626936B/en
Publication of CN111626936A publication Critical patent/CN111626936A/en
Application granted granted Critical
Publication of CN111626936B publication Critical patent/CN111626936B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10056 Microscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Stereoscopic And Panoramic Photography (AREA)
  • Image Analysis (AREA)

Abstract

Figure 202010439003

The invention discloses a rapid panoramic stitching method for microscopic images. An end-to-end network realizes coarse-to-fine optical flow estimation; it computes quickly, estimates optical flow accurately, and fully meets the requirement of computing while images are being acquired. It is combined with a spatial transformation network, which predicts a transformation matrix directly from the optical flow feature map and spatially transforms the local microscopic images according to that matrix, so that the overlapping areas of adjacent local microscopic images are accurately aligned. A fade-in/fade-out linear fusion then blends the overlapping areas of the aligned microscopic images, effectively eliminating gaps and ghosting in the fused image and achieving seamless stitching. Finally, bilateral filtering is used to render the panoramic fused microscopic image, removing residual fusion seams and producing a seamless stitched image. Compared with the prior art, the panoramic stitching method provided by the invention is fast and accurate and meets the needs of practical applications.

Description

Rapid panoramic stitching method and system for microscopic images
Technical Field
The invention relates to the technical field of microscopic image processing, and in particular to a rapid panoramic stitching method and system for microscopic images.
Background
In disease diagnosis and pathological research, the traditional microscope suffers from many operation steps, a heavy workload, difficulty in sharing resources, and the inability to store sections long-term; digital pathology-section technology was developed to address these problems. A pathological section scanner scans and photographs the cells or tissue sections on a glass slide to obtain microscopic images; subsequent software stitches the microscopic images into a panorama and automatically identifies it, realizing analysis and diagnosis of the cell or tissue images.
In existing pathological section scanners, field of view and resolution are inversely related in the optical domain, and obtaining a large-field, high-resolution image places very strict demands on the optical system of the microscope; the most common technique for overcoming this is image stitching. The technique acquires high-resolution images of different areas of a slice and fuses them into a panorama, constructing a complete, large-scale, high-resolution microscopic image. During acquisition, however, an excessive displacement between two adjacent images often leaves too few feature points for image matching, causing stitching to fail.
Disclosure of Invention
The invention provides a rapid panoramic stitching method and system for microscopic images to overcome defects of the prior art, such as stitching failure caused by too few feature points.
In order to achieve the above object, the present invention provides a method for fast panoramic stitching of microscopic images, comprising:
controlling a microscope objective to acquire a local microscopic image of the pathological section according to a pre-planned acquisition path;
performing optical flow estimation on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position by using a pre-trained end-to-end network to obtain an optical flow feature map;
converting the optical flow feature map by using a pre-trained spatial transformation network to obtain an initial matrix, performing a matrix multiplication operation on the initial matrix and the transformation matrix obtained at the previous position to obtain the transformation matrix at the current position, and aligning the two local microscopic images according to the transformation matrix at the current position to obtain a first aligned microscopic image;
performing linear fusion on the overlapping area of the first aligned microscopic image by using a fade-in/fade-out linear fusion method to obtain a first fused microscopic image;
controlling the microscope objective or the microscope stage to move to the next position according to the pre-planned acquisition path and acquiring a local microscopic image, then performing the optical flow estimation and alignment on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position to obtain a second aligned microscopic image, and performing the linear fusion on the second aligned microscopic image and the first fused microscopic image to obtain a second fused microscopic image;
repeating the processes of the optical flow estimation, the alignment and the linear fusion until all the local microscopic images are fused to obtain a panoramic fused microscopic image;
and performing panoramic rendering on the panoramic fused microscopic image by using bilateral filtering to obtain a panoramic stitched microscopic image.
In order to achieve the above object, the present invention further provides a system for rapid panoramic stitching of microscopic images, comprising:
an image acquisition module for controlling the microscope objective to acquire local microscopic images of the pathological section according to a pre-planned acquisition path;
an optical flow estimation module for performing optical flow estimation on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position by using a pre-trained end-to-end network to obtain an optical flow feature map;
an image alignment module for converting the optical flow feature map by using a pre-trained spatial transformation network to obtain an initial matrix, performing a matrix multiplication operation on the initial matrix and the transformation matrix obtained at the previous position to obtain the transformation matrix at the current position, and aligning the two local microscopic images according to the transformation matrix at the current position to obtain a first aligned microscopic image;
a fusion module for performing linear fusion on the overlapping area of the first aligned microscopic image by using a fade-in/fade-out linear fusion method to obtain a first fused microscopic image;
a loop module for controlling the microscope objective or the microscope stage to move to the next position according to the pre-planned acquisition path and acquiring a local microscopic image, then performing the optical flow estimation and alignment on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position to obtain a second aligned microscopic image, and performing the linear fusion on the second aligned microscopic image and the first fused microscopic image to obtain a second fused microscopic image; and repeating the processes of the optical flow estimation, the alignment and the linear fusion until all the local microscopic images are fused to obtain a panoramic fused microscopic image;
and a rendering module for performing panoramic rendering on the panoramic fused microscopic image by using bilateral filtering to obtain a panoramic stitched microscopic image.
To achieve the above object, the present invention further provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
Compared with the prior art, the invention has the beneficial effects that:
the rapid panoramic stitching method for microscopic images provided by the invention uses an end-to-end network to realize coarse-to-fine optical flow estimation; it computes quickly, estimates optical flow accurately, and fully meets the requirement of computing while images are being acquired. It is combined with a spatial transformation network, which predicts a transformation matrix directly from the optical flow feature map and spatially transforms the local microscopic images according to that matrix, so that the overlapping areas of adjacent local microscopic images are accurately aligned. A fade-in/fade-out linear fusion then blends the overlapping areas of the aligned microscopic images, effectively eliminating gaps and ghosting in the fused image and achieving seamless stitching. Finally, bilateral filtering is used to render the panoramic fused microscopic image, removing residual fusion seams and generating a seamlessly stitched image. Compared with the prior art, the panoramic stitching method provided by the invention stitches quickly and accurately and meets the requirements of image acquisition and stitching.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from the structures shown in these drawings without creative effort.
FIG. 1 is a flow chart of a method for rapid panoramic stitching of microscopic images according to the present invention;
FIG. 2 is a block diagram of an end-to-end network in an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the operation of the spatial transformation network according to an embodiment of the present invention;
FIG. 4 is a network structure diagram of the refinement module according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the technical solutions of the embodiments of the present invention may be combined with each other, provided the combination can be realized by those skilled in the art; where combined technical solutions are contradictory or cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present invention.
The invention provides a rapid panoramic stitching method of microscopic images, which comprises the following steps of:
101: controlling the microscope objective or the microscope stage to acquire local microscopic images of the pathological section according to a pre-planned acquisition path;
The microscope objective acquires one local microscopic image at each dwell position on the acquisition path.
The acquisition path may be a top-to-bottom serpentine path, a left-to-right serpentine path, or another path, as long as all the cells in the pathological section are collected.
102: performing optical flow estimation on a local microscopic image acquired at the current position and a local microscopic image acquired at the previous position by using a pre-trained end-to-end network to obtain an optical flow characteristic diagram;
An end-to-end network means that a deep neural network replaces a multi-stage pipeline. For example, conventional optical flow estimation comprises three stages: feature extraction, feature matching, and optical flow computation. With an end-to-end network, the output optical flow can be predicted directly from the two input images.
Optical flow estimation means that, given two frames, the luminance motion vectors between corresponding points in the next frame and the previous frame are estimated; these motion vectors are the instantaneous velocities of the pixel motion of a spatially moving object on the observation imaging plane.
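As an illustration of the flow-field format only (the invention itself uses the learned end-to-end network described below), a classical dense method such as Farneback's optical flow produces the same kind of output: a per-pixel two-dimensional motion vector field between two frames. The file names here are hypothetical.

```python
import cv2

# Hypothetical adjacent tiles from the acquisition path.
prev_gray = cv2.cvtColor(cv2.imread("tile_prev.png"), cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(cv2.imread("tile_curr.png"), cv2.COLOR_BGR2GRAY)

# flow has shape (H, W, 2): flow[y, x] = (dx, dy), the per-pixel
# instantaneous motion between the two frames.
flow = cv2.calcOpticalFlowFarneback(
    prev_gray, curr_gray, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
```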
103: converting the optical flow feature map by using a pre-trained spatial transformation network to obtain an initial matrix, performing a matrix multiplication operation on the initial matrix and the transformation matrix obtained at the previous position to obtain the transformation matrix at the current position, and aligning the two local microscopic images according to the transformation matrix at the current position to obtain a first aligned microscopic image;
spatial Transformer Networks (STNs) are a convolutional neural network architecture model proposed by Jaderberg et al, and the classification accuracy of the convolutional network model is improved by transforming the input pictures and reducing the influence of the Spatial diversity of data, rather than by changing the network structure. The STNs have good robustness and spatial invariance such as translation, expansion, rotation, disturbance, bending and the like.
The transformation matrix includes parameters of translation, scaling, rotation, etc.
104: performing linear fusion on the overlapping area of the first aligned microscopic image by using the fade-in/fade-out linear fusion method to obtain a first fused microscopic image;
The fade-in/fade-out linear fusion assigns linearly varying weights to the pixels of the two images in the overlapping area: as the weight increases from 0 to 1, the pixel values of the overlap transition smoothly from one image to the other.
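A minimal sketch of this fade-in/fade-out fusion, assuming the two overlap strips are already aligned and of equal size (hypothetical inputs):

```python
import numpy as np

def fade_blend(left, right):
    """Fade-in/fade-out fusion of two aligned overlap strips.

    left, right: float arrays of shape (H, W, C) covering the same overlap
    region of the two aligned tiles (hypothetical inputs).
    """
    w = left.shape[1]
    # Weight alpha falls linearly from 1 at the left edge to 0 at the right
    # edge, so I = alpha*I1 + (1 - alpha)*I2 transitions smoothly.
    alpha = np.linspace(1.0, 0.0, w).reshape(1, w, 1)
    return alpha * left + (1.0 - alpha) * right
```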
105: controlling the microscope objective or the microscope stage to move to the next position according to the pre-planned acquisition path and acquiring a local microscopic image I2, then performing the optical flow estimation of step 102 and the alignment of step 103 on the local microscopic image I2 acquired at the current position and the local microscopic image I1 acquired at the previous position to obtain a second aligned microscopic image, and performing the linear fusion of step 104 on the second aligned microscopic image and the first fused microscopic image to obtain a second fused microscopic image;
After the local microscopic image I2 acquired at the current position is fused with the local microscopic image I1 acquired at the previous position, the fused microscopic image is stored; meanwhile I2 is kept in memory, and it is released only after the local microscopic image acquired at the next position has been fused with it.
106: repeating the processes of optical flow estimation, alignment, and linear fusion of step 105 until all the local microscopic images are fused, obtaining a panoramic fused microscopic image;
The local microscopic images are acquired while the optical flow estimation, alignment, and linear fusion proceed in parallel, so the panoramic fused microscopic image is obtained as soon as acquisition of the local microscopic images finishes.
107: performing panoramic rendering on the panoramic fused microscopic image by using bilateral filtering to obtain the panoramic stitched microscopic image.
Bilateral filtering is a non-linear filter whose basic idea is to represent the intensity of a pixel by a weighted average of the intensity values of surrounding pixels, with Gaussian-distributed weights. The weights combine the Euclidean (spatial) distance between pixels and the radiometric difference in the pixel range domain, and both are considered simultaneously when computing the center pixel (ref: Tomasi C, Manduchi R. "Bilateral filtering for gray and color images". ICCV, 1998: 839-846).
The panoramic rendering smooths all overlapping regions in the panoramic fused microscopic image.
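A minimal example of this rendering step using OpenCV's bilateral filter; the parameter values and file names are illustrative assumptions, not taken from the patent:

```python
import cv2

# d is the neighbourhood diameter; sigmaColor weighs intensity differences
# (the range domain) and sigmaSpace weighs Euclidean distance (the spatial
# domain), matching the two weight terms described above.
panorama = cv2.imread("panorama_fused.png")          # hypothetical file
rendered = cv2.bilateralFilter(panorama, d=9, sigmaColor=75, sigmaSpace=75)
cv2.imwrite("panorama_stitched.png", rendered)
```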
The rapid panoramic stitching method for microscopic images provided by the invention uses an end-to-end network to realize coarse-to-fine optical flow estimation; it computes quickly, estimates optical flow accurately, and fully meets the requirement of computing while images are being acquired. It is combined with a spatial transformation network, which predicts a transformation matrix directly from the optical flow feature map and spatially transforms the local microscopic images according to that matrix, so that the overlapping areas of adjacent local microscopic images are accurately aligned. A fade-in/fade-out linear fusion then blends the overlapping areas of the aligned microscopic images, effectively eliminating gaps and ghosting in the fused image and achieving seamless stitching. Finally, bilateral filtering is used to render the panoramic fused microscopic image, removing residual fusion seams and generating a seamlessly stitched image. Compared with the prior art, the panoramic stitching method provided by the invention stitches quickly and accurately and meets the requirements of image acquisition and stitching.
In one embodiment, before performing step 101, the method further includes the steps of:
001: pre-scanning pathological sections to obtain blank areas and target areas of the pathological sections;
the target region is a region with cells.
002: and planning the acquisition path in the target area.
The purpose of pre-scanning is to exclude the blank areas of the pathological section, reducing the workload and improving efficiency.
In the next embodiment, for step 101, a high-magnification digital microscope acquires single local microscopic images in different fields of view of the pathological section. When acquiring the local microscopic images in different fields of view, adjacent local microscopic images must overlap, and the union of all single-field images must cover the cell or tissue sample area of the original pathological section.
The microscope objective has a magnification of 20×, 40×, or 100×.
The path of this embodiment is a serpentine path from left to right.
The microscope objective or stage moves along a left-to-right serpentine path, acquiring individual local microscopic images of the pathological section in different, mutually overlapping fields of view.
In existing pathological section scanners, both the scanning-and-stitching speed and the quality of the generated large-format microscopic image are key factors, and a balance must be struck between them so that the best stitched panorama is obtained at the fastest speed. In this embodiment, the overlap between consecutive positions of the microscope objective or stage is set to 20-30% of the area of a single local microscopic image.
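A small sketch of how the stage step can be derived from the chosen overlap; the pixel-pitch calibration is a hypothetical parameter:

```python
def stage_step_um(tile_px, overlap_frac=0.25, um_per_px=0.25):
    """Stage displacement along the travel axis for a given tile overlap.

    tile_px: tile size in pixels along the travel axis; overlap_frac: the
    0.20-0.30 overlap fraction from the text (for side-by-side tiles the
    overlap-area fraction equals the linear fraction along the travel axis);
    um_per_px: hypothetical pixel-pitch calibration of the objective/camera.
    """
    return tile_px * (1.0 - overlap_frac) * um_per_px

# e.g. a 2048-pixel tile at 25% overlap and 0.25 um/px moves the stage 384 um
```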
In another embodiment, for step 102, the step of performing optical flow estimation on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position by using a pre-trained end-to-end network (a structure diagram of the end-to-end network in this embodiment is shown in fig. 2) to obtain an optical flow feature map includes:
201: local microscopic image I2 (Im) acquired at current position by using pre-trained FlowNet1 network (optical flow estimation network)age2) and a local microscopic Image I1(Image1) acquired at the previous position to carry out rough optical flow estimation to obtain a rough optical flow characteristic diagram F1(Flow1) from the rough optical Flow feature F1Carrying out position transformation on the local microscopic image I2 to obtain a first transformation image W1(Warped1), calculating a local microscopic image I1 and a first transformed image W1Obtaining a first brightness error BE from the brightness difference1(Brightness Error1);
Using a pre-trained FlowNet2 network to perform local microscopic image I1, local microscopic image I2 and rough optical flow characteristic diagram F1The first converted image W1And brightness error BE1Performing fine optical flow estimation to obtain fine optical flow characteristic diagram F2(Flow2) from the fine optical Flow feature map F2The position of the local microscopic image I2 is transformed to obtain a second transformed image W2(Warped2), calculating a local microscopic image I1 and a second transformed image W2Obtaining a second brightness error BE from the brightness difference2(Brightness Error2);
Using a pre-trained FlowNet2 network to perform local microscopic image I1, local microscopic image I2 and fine optical flow characteristic diagram F2The second converted image W2And a second luminance error BE2And performing fine optical flow estimation to obtain an optical flow feature map F (Featuremap).
Considering the inaccuracy of optical flow estimation, the invention uses the optical flow characteristic diagram to transform the first transformed image W obtained after the local microscopic image I2 is transformed1And a second transformed image W2. First transformed image W1And a second transformed image W2Since there is still a certain deviation from the local microscopic image I1, it is necessary to perform the local microscopic image I1 and the first transformed image W1The second converted image W2Is subtracted to obtain a first brightness error BE1And a second luminance error BE2And the method is used for subsequent accurate optical flow estimation.
In a certain embodiment, the FlowNet1 network comprises, in sequence, 9 convolutional layers and 1 refinement module (an adjustment module, which may also be called a decoding module);
the 9 convolutional layers perform high-level feature extraction on the pre-stacked local microscopic images to obtain feature maps;
and the refinement module performs an upsampling operation with deconvolution layers to obtain the coarse optical flow feature map.
The network structure of the refinement module is shown in fig. 4. It comprises 4 deconvolution layers (deconv4, deconv3, deconv2 and deconv1). The input of deconv4 is the feature map output by the conv6 convolutional layer; the input of each of the last three deconvolution layers has two parts: the deconvolution output of the previous layer, and the feature-map output of a convolutional layer of the end-to-end network (conv5_1, conv4_1 and conv3_1). High-level and low-level information is thus fused, and a coarse-to-fine mechanism is introduced. The input of the FlowNet1 network is two 3-channel images; before being fed to the network they are stacked into a 6-channel tensor, which then passes through the 9 convolutional layers for high-level feature extraction. Finally, the refinement module repeatedly upsamples the feature map from the preceding convolutional layer and outputs a 2-channel coarse optical flow feature map of the same size as the input images.
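A minimal PyTorch sketch of such a coarse-flow network, with a reduced encoder (the patent specifies 9 convolutional layers) and assumed channel widths and strides; it reproduces the described pattern of a 6-channel stacked input, strided convolutions, and a refinement stage of deconvolutions fused with encoder skip features, outputting a 2-channel flow map at input resolution (input sides divisible by 16):

```python
import torch
import torch.nn as nn

class CoarseFlowNet(nn.Module):
    def __init__(self):
        super().__init__()
        def conv(cin, cout, stride):
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, stride=stride, padding=1),
                nn.LeakyReLU(0.1, inplace=True))
        # Encoder over the 6-channel stacked input (widths are assumptions).
        self.enc = nn.ModuleList([
            conv(6, 64, 2),     # -> H/2
            conv(64, 128, 2),   # -> H/4
            conv(128, 256, 2),  # -> H/8
            conv(256, 256, 1),  # -> H/8
            conv(256, 512, 2),  # -> H/16
            conv(512, 512, 1),  # -> H/16
        ])
        # Refinement: deconvolutions fused with encoder skip features,
        # the coarse-to-fine mechanism described in the text.
        self.dec3 = nn.ConvTranspose2d(512, 256, 4, 2, 1)        # -> H/8
        self.dec2 = nn.ConvTranspose2d(256 + 256, 128, 4, 2, 1)  # -> H/4
        self.dec1 = nn.ConvTranspose2d(128 + 128, 64, 4, 2, 1)   # -> H/2
        self.dec0 = nn.ConvTranspose2d(64 + 64, 32, 4, 2, 1)     # -> H
        self.predict = nn.Conv2d(32, 2, 3, 1, 1)  # 2-channel flow map

    def forward(self, img1, img2):
        x = torch.cat([img1, img2], dim=1)  # stack into a 6-channel tensor
        skips = []
        for layer in self.enc:
            x = layer(x)
            skips.append(x)
        u = self.dec3(skips[5])
        u = self.dec2(torch.cat([u, skips[3]], dim=1))  # fuse skip feature
        u = self.dec1(torch.cat([u, skips[1]], dim=1))
        u = self.dec0(torch.cat([u, skips[0]], dim=1))
        return self.predict(u)  # same H x W as the input images

flow = CoarseFlowNet()(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
```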
In another embodiment, the FlowNet2 network comprises, in sequence, 9 convolutional layers, 1 correlation layer, and 1 refinement module;
the first 3 convolutional layers perform feature extraction on each input local microscopic image, obtaining first feature maps;
the correlation layer performs a correlation operation on the feature maps of the different local microscopic images to merge their features, obtaining a feature fusion map;
the last 6 convolutional layers perform high-level feature extraction on the feature fusion map, obtaining a second feature map;
and the refinement module performs an upsampling operation with deconvolution layers to obtain the fine optical flow feature map.
The inputs of the FlowNet2 network are two 3-channel images. Each passes through the first 3 convolutional layers for feature extraction, yielding a first feature map; the two first feature maps then enter the correlation layer, whose correlation operation merges their features into a feature fusion map. The feature fusion map passes through the remaining 6 convolutional layers for high-level feature extraction, yielding a second feature map, and finally the refinement module repeatedly upsamples the feature map from the preceding convolutional layer and outputs a 2-channel fine optical flow feature map of the same size as the input images.
In the next embodiment, the formula of the correlation operation is:

c(x1, x2) = Σ_{o ∈ [-k,k]×[-k,k]} ⟨f1(x1 + o), f2(x2 + o)⟩ (1)

where c(x1, x2) is the correlation value of the two feature maps; x1 and x2 are the corresponding points on the two feature maps; f1 and f2 are the two feature maps; k is the boundary value of the region to be compared; o is the position of any point in the region to be compared; and ⟨ ⟩ is the correlation operation, i.e., multiplication of the pixels at corresponding positions. The larger the correlation value, the more similar the feature maps.
The correlation operation is used for comparing the correlation between the two input feature maps.
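A direct, unoptimised sketch of such a correlation layer following equation (1); the maximum displacement enumerated per output channel is an assumption, since the patent only specifies the patch boundary k (production FlowNet implementations use a fused op instead of this Python loop):

```python
import torch
import torch.nn.functional as F

def correlation(f1, f2, k=1, max_disp=3):
    """Correlation layer sketch following equation (1).

    For each candidate displacement d with |d| <= max_disp, the correlation
    between f1 at x and f2 at x + d is the sum over a (2k+1)^2 patch of the
    channel-wise products <f1(x+o), f2(x+d+o)>. f1, f2: (B, C, H, W).
    Returns (B, (2*max_disp+1)^2, H, W).
    """
    b, c, h, w = f2.shape
    f2p = F.pad(f2, (max_disp,) * 4)
    out = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = f2p[:, :, dy:dy + h, dx:dx + w]   # f2 displaced by d
            prod = (f1 * shifted).sum(dim=1, keepdim=True)
            # sum the products over the (2k+1)^2 neighbourhood of offsets o
            patch = F.avg_pool2d(prod, 2 * k + 1, stride=1,
                                 padding=k) * (2 * k + 1) ** 2
            out.append(patch)
    return torch.cat(out, dim=1)
```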
In a further embodiment, the step of transforming the position of the local microscopic image I2 according to the coarse optical flow feature map F1 to obtain the first transformed image W1 comprises:
obtaining the offset value of each pixel from the coarse optical flow feature map F1 (the value at each position of the flow map is a two-dimensional vector representing the instantaneous velocity of pixel motion), and offsetting each corresponding pixel in the local microscopic image I2 according to that offset to obtain the first transformed image W1.
In this embodiment, the step of transforming the position of the local microscopic image I2 according to the fine optical flow feature map F2 to obtain the second transformed image W2 comprises:
obtaining the offset value of each pixel from the fine optical flow feature map F2, and offsetting each corresponding pixel in the local microscopic image I2 according to that offset to obtain the second transformed image W2.
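A sketch of this per-pixel offsetting using OpenCV's remap for the bilinear resampling; the sign convention of the flow is an assumption:

```python
import cv2
import numpy as np

def warp_by_flow(img, flow):
    """Offset each pixel of img by its per-pixel flow vector.

    img: (H, W, 3) uint8 tile (e.g. I2); flow: (H, W, 2) float array of
    (dx, dy) offsets from the flow feature map. Returns the warped image
    (e.g. W1).
    """
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # cv2.remap gathers: out(y, x) = img(map_y(y, x), map_x(y, x)),
    # i.e. backward warping with bilinear resampling.
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```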
In another embodiment, for step 103, the spatial transformation network comprises a localization network, a grid generator, and a sampler; the workflow of the spatial transformation network is shown in fig. 3.
The spatial transformation network first generates a transformation matrix with 9 parameters through a simple regression network, which is used to transform the original image; each point of the target image is then mapped to a point of the original image according to the transformation matrix, and finally the sampler samples the pixel values of the original image into the target image. The spatial transformation network needs no key-point calibration and can adaptively perform spatial transformation and alignment on the data (including translation, scaling, rotation, and similar transformations).
The step of converting the optical flow feature map by using the pre-trained spatial transformation network to obtain an initial matrix, performing a matrix multiplication operation on the initial matrix and the transformation matrix obtained at the previous position to obtain the transformation matrix at the current position, and aligning the two local microscopic images according to the transformation matrix at the current position to obtain a first aligned microscopic image comprises the following steps:
301: performing a multi-layer convolution operation on the optical flow feature map with the pre-trained localization network, applying a fully connected layer to the features, and regressing an initial matrix as output; then performing a matrix multiplication operation on the initial matrix and the transformation matrix obtained at the previous position to obtain a transformation matrix with 9 parameters;
the local network is essentially a simple regression network.
The 9 parameters include translation, scaling, rotation, etc. Parameters in the obtained transformation matrix are different according to the input optical flow characteristic diagram.
302: fixing the local microscopic image I1 acquired at the previous position, recording the local microscopic image I2 acquired at the current position as the original image, creating an empty microscopic image as the microscopic image to be aligned with I1 and marking it as the target image, and using the pre-trained grid generator to compute, according to the transformation matrix, the coordinate position in the original image corresponding to each coordinate position in the target image, obtaining the mapping relation T(G) of the overlapping area between the target image and the original image;
303: using the pre-trained sampler to find, according to the mapping relation T(G), the corresponding coordinate position in the original image for each coordinate position of the target image, and copying the corresponding pixels of the original image into the target image by bilinear interpolation, obtaining the first aligned microscopic image.
Bilinear interpolation is the extension of linear interpolation to interpolating a function of two variables; its core idea is to perform linear interpolation in each of the two directions in turn.
Bilinear interpolation is used because the coordinates mapped from the target image onto the original image may be fractional, so the value at such a position must be interpolated from the surrounding integer-grid pixels of the original image.
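A compact sketch of steps 302-303 using OpenCV: cv2.warpPerspective internally inverts the given 3x3 (9-parameter) matrix, maps each target coordinate back into the original image (the grid generator's role), and resamples with bilinear interpolation (the sampler's role). The matrix values, file name, and canvas size are hypothetical:

```python
import cv2
import numpy as np

H = np.array([[1.0, 0.0, 120.5],   # sub-pixel translation: the mapped
              [0.0, 1.0,   3.2],   # coordinates are fractional, hence
              [0.0, 0.0,   1.0]])  # the need for bilinear interpolation

original = cv2.imread("tile_I2.png")            # hypothetical file
aligned = cv2.warpPerspective(original, H, (2048, 2048),
                              flags=cv2.INTER_LINEAR)
```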
In a next embodiment, for step 104, to eliminate fusion seams after stitching and generate a seamless stitched image, the overlapping area of the first aligned microscopic image is linearly fused using the fade-in/fade-out linear fusion method. In the step of obtaining the first fused microscopic image, the formula of the linear fusion is:

I(i,j) = αI1(i,j) + (1 - α)I2(i,j), 0 ≤ α ≤ 1 (2)

where I is the fused pixel value; I1 and I2 are the original pixel values of the corresponding overlapping areas of the two local microscopic images; (i,j) are the pixel coordinates; and α is the weight coefficient,

α = dis(I1(i,j), edge1) / w

where w is the width of the overlapping region; edge1 is the inner (far) edge of image I1 within the overlap; and dis denotes the distance from pixel location I1(i,j) to the inner edge edge1.
In an embodiment, after step 104, automatic levels processing is further performed on the panoramic stitched microscopic image to remove the influence of background light and white balance; the black background and impurities are removed with a soft-edged airbrush tool.
Automatic levels processing automatically defines the brightest and darkest pixels in each channel as white and black, then redistributes the intermediate pixel values proportionally, enhancing image contrast and clarifying the tonal levels. An edge-smoothing operation can make the background and edges look more natural.
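A per-channel sketch of such an automatic levels adjustment; the percentile clipping is an added assumption to make the stretch robust to outliers:

```python
import numpy as np

def auto_levels(img, low_pct=0.5, high_pct=99.5):
    """Per-channel automatic levels as described above.

    The darkest pixels of each channel are mapped to black, the brightest
    to white, and values in between are rescaled proportionally.
    """
    out = np.empty_like(img)
    for ch in range(img.shape[2]):
        lo, hi = np.percentile(img[..., ch], (low_pct, high_pct))
        scaled = (img[..., ch].astype(np.float32) - lo) / max(hi - lo, 1e-6)
        out[..., ch] = np.clip(scaled * 255.0, 0, 255).astype(img.dtype)
    return out
```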
The invention also provides a rapid panoramic stitching system for microscopic images, comprising:
an image acquisition module for controlling the microscope objective to acquire local microscopic images of the pathological section according to a pre-planned acquisition path;
an optical flow estimation module for performing optical flow estimation on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position by using a pre-trained end-to-end network to obtain an optical flow feature map;
an image alignment module for converting the optical flow feature map by using a pre-trained spatial transformation network to obtain an initial matrix, performing a matrix multiplication operation on the initial matrix and the transformation matrix obtained at the previous position to obtain the transformation matrix at the current position, and aligning the two local microscopic images according to the transformation matrix at the current position to obtain a first aligned microscopic image;
a fusion module for performing linear fusion on the overlapping area of the first aligned microscopic image by using a fade-in/fade-out linear fusion method to obtain a first fused microscopic image;
a loop module for controlling the microscope objective or the microscope stage to move to the next position according to the pre-planned acquisition path and acquiring a local microscopic image, then performing the optical flow estimation and alignment on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position to obtain a second aligned microscopic image, and performing the linear fusion on the second aligned microscopic image and the first fused microscopic image to obtain a second fused microscopic image; and repeating the processes of the optical flow estimation, the alignment and the linear fusion until all the local microscopic images are fused to obtain a panoramic fused microscopic image;
and a rendering module for performing panoramic rendering on the panoramic fused microscopic image by using bilateral filtering to obtain a panoramic stitched microscopic image.
The invention also relates to a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A rapid panoramic stitching method for microscopic images, characterized by comprising:
controlling a microscope objective to acquire local microscopic images of a pathological section according to a pre-planned acquisition path;
performing optical flow estimation on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position by using a pre-trained end-to-end network to obtain an optical flow feature map;
converting the optical flow feature map by using a pre-trained spatial transformation network to obtain an initial matrix, performing a matrix multiplication operation on the initial matrix and the transformation matrix obtained at the previous position to obtain the transformation matrix at the current position, and aligning the two local microscopic images according to the transformation matrix at the current position to obtain a first aligned microscopic image;
performing linear fusion on the overlapping area of the first aligned microscopic image by using a fade-in/fade-out linear fusion method to obtain a first fused microscopic image;
controlling the microscope objective or the microscope stage to move to the next position according to the pre-planned acquisition path and acquiring a local microscopic image, then performing the optical flow estimation and the alignment on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position to obtain a second aligned microscopic image, and performing the linear fusion on the second aligned microscopic image and the first fused microscopic image to obtain a second fused microscopic image;
repeating the processes of the optical flow estimation, the alignment and the linear fusion until all the local microscopic images are fused to obtain a panoramic fused microscopic image;
and performing panoramic rendering on the panoramic fused microscopic image by using bilateral filtering to obtain a panoramic stitched microscopic image.

2. The rapid panoramic stitching method for microscopic images according to claim 1, characterized in that the step of performing optical flow estimation on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position by using a pre-trained end-to-end network to obtain an optical flow feature map comprises:
performing coarse optical flow estimation on the local microscopic image I2 acquired at the current position and the local microscopic image I1 acquired at the previous position by using a pre-trained FlowNet1 network to obtain a coarse optical flow feature map F1; transforming the position of the local microscopic image I2 according to the coarse optical flow feature map F1 to obtain a first transformed image W1; and computing the brightness difference between the local microscopic image I1 and the first transformed image W1 to obtain a first brightness error BE1;
performing fine optical flow estimation on the local microscopic image I1, the local microscopic image I2, the coarse optical flow feature map F1, the first transformed image W1 and the brightness error BE1 by using a pre-trained FlowNet2 network to obtain a fine optical flow feature map F2; transforming the position of the local microscopic image I2 according to the fine optical flow feature map F2 to obtain a second transformed image W2; and computing the brightness difference between the local microscopic image I1 and the second transformed image W2 to obtain a second brightness error BE2;
and performing fine optical flow estimation on the local microscopic image I1, the local microscopic image I2, the fine optical flow feature map F2, the second transformed image W2 and the second brightness error BE2 by using the pre-trained FlowNet2 network to obtain the optical flow feature map F.

3. The rapid panoramic stitching method for microscopic images according to claim 2, characterized in that the FlowNet1 network comprises, in sequence, 9 convolutional layers and 1 refinement module; the 9 convolutional layers are used for performing high-level feature extraction on the pre-stacked local microscopic images to obtain feature maps; and the refinement module is used for performing an upsampling operation with deconvolution layers to obtain the coarse optical flow feature map.

4. The rapid panoramic stitching method for microscopic images according to claim 2, characterized in that the FlowNet2 network comprises, in sequence, 9 convolutional layers, 1 correlation layer and 1 refinement module; the first 3 convolutional layers are used for performing feature extraction on each input local microscopic image to obtain first feature maps; the correlation layer is used for performing a correlation operation on the feature maps of the different local microscopic images to merge their features and obtain a feature fusion map; the last 6 convolutional layers are used for performing high-level feature extraction on the feature fusion map to obtain a second feature map; and the refinement module is used for performing an upsampling operation with deconvolution layers to obtain the fine optical flow feature map.

5. The rapid panoramic stitching method for microscopic images according to claim 4, characterized in that the formula of the correlation operation is:

c(x1, x2) = Σ_{o ∈ [-k,k]×[-k,k]} ⟨f1(x1 + o), f2(x2 + o)⟩ (1)

where c(x1, x2) is the correlation value of the two feature maps; x1 and x2 are the corresponding points on the two feature maps; f1 and f2 are the two feature maps; k is the boundary value of the region to be compared; o is the position value of any point in the region to be compared; and ⟨ ⟩ is the correlation operation.

6. The rapid panoramic stitching method for microscopic images according to claim 2, characterized in that the step of transforming the position of the local microscopic image I2 according to the coarse optical flow feature map F1 to obtain the first transformed image W1 comprises:
obtaining the offset value of each pixel from the coarse optical flow feature map F1, and offsetting each corresponding pixel in the local microscopic image I2 according to the offset value to obtain the first transformed image W1.

7. The rapid panoramic stitching method for microscopic images according to claim 1, characterized in that the spatial transformation network comprises a localization network, a grid generator and a sampler; and the step of converting the optical flow feature map by using a pre-trained spatial transformation network to obtain an initial matrix, performing a matrix multiplication operation on the initial matrix and the transformation matrix obtained at the previous position to obtain the transformation matrix at the current position, and aligning the two local microscopic images according to the transformation matrix at the current position to obtain a first aligned microscopic image comprises:
performing a multi-layer convolution operation on the optical flow feature map by using the pre-trained localization network, performing feature full connection, and regressing an initial matrix as output; performing a matrix multiplication operation on the initial matrix and the transformation matrix obtained at the previous position to obtain a transformation matrix with 9 parameters;
fixing the local microscopic image I1 acquired at the previous position, recording the local microscopic image I2 acquired at the current position as the original image, setting an empty microscopic image as the microscopic image aligned with I1 and marking it as the target image, and using the pre-trained grid generator to compute, according to the transformation matrix, the coordinate position in the original image corresponding to each coordinate position in the target image, to obtain the mapping relation of the overlapping area between the target image and the original image;
and using the pre-trained sampler to find, according to the mapping relation, the corresponding coordinate position in the original image for each coordinate position of the target image, and copying the corresponding pixels of the original image into the target image by bilinear interpolation to obtain the first aligned microscopic image.

8. The rapid panoramic stitching method for microscopic images according to claim 1, characterized in that, in the step of performing linear fusion on the overlapping area of the first aligned microscopic image by using the fade-in/fade-out linear fusion method to obtain the first fused microscopic image, the formula of the linear fusion is:

I(i,j) = αI1(i,j) + (1 - α)I2(i,j), 0 ≤ α ≤ 1 (2)

where I is the fused pixel value; I1 and I2 are the original pixel values of the corresponding overlapping areas of the two local microscopic images; (i,j) are the pixel coordinates; and α is the weight coefficient.

9. A rapid panoramic stitching system for microscopic images, characterized by comprising:
an image acquisition module for controlling a microscope objective to acquire local microscopic images of a pathological section according to a pre-planned acquisition path;
an optical flow estimation module for performing optical flow estimation on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position by using a pre-trained end-to-end network to obtain an optical flow feature map;
an image alignment module for converting the optical flow feature map by using a pre-trained spatial transformation network to obtain an initial matrix, performing a matrix multiplication operation on the initial matrix and the transformation matrix obtained at the previous position to obtain the transformation matrix at the current position, and aligning the two local microscopic images according to the transformation matrix at the current position to obtain a first aligned microscopic image;
a fusion module for performing linear fusion on the overlapping area of the first aligned microscopic image by using a fade-in/fade-out linear fusion method to obtain a first fused microscopic image;
a loop module for controlling the microscope objective or the microscope stage to move to the next position according to the pre-planned acquisition path and acquiring a local microscopic image, then performing the optical flow estimation and the alignment on the local microscopic image acquired at the current position and the local microscopic image acquired at the previous position to obtain a second aligned microscopic image, and performing the linear fusion on the second aligned microscopic image and the first fused microscopic image to obtain a second fused microscopic image; and repeating the processes of the optical flow estimation, the alignment and the linear fusion until all the local microscopic images are fused to obtain a panoramic fused microscopic image;
and a rendering module for performing panoramic rendering on the panoramic fused microscopic image by using bilateral filtering to obtain a panoramic stitched microscopic image.

10. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method according to any one of claims 1 to 8 when executing the computer program.
CN202010439003.0A 2020-05-22 2020-05-22 A fast panorama stitching method and system for microscopic images Active CN111626936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010439003.0A CN111626936B (en) 2020-05-22 2020-05-22 A fast panorama stitching method and system for microscopic images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010439003.0A CN111626936B (en) 2020-05-22 2020-05-22 A fast panorama stitching method and system for microscopic images

Publications (2)

Publication Number Publication Date
CN111626936A true CN111626936A (en) 2020-09-04
CN111626936B CN111626936B (en) 2023-05-12

Family

ID=72272566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010439003.0A Active CN111626936B (en) 2020-05-22 2020-05-22 A fast panorama stitching method and system for microscopic images

Country Status (1)

Country Link
CN (1) CN111626936B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090041314A1 * 2007-08-02 2009-02-12 Tom Vercauteren Robust mosaicing method, notably with correction of motion distortions and tissue deformations for in vivo fibered microscopy
WO2009055913A1 (en) * 2007-10-30 2009-05-07 Cedara Software Corp. System and method for image stitching
WO2010105015A2 (en) * 2009-03-11 2010-09-16 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for microscopy tracking
US20120275658A1 (en) * 2011-02-28 2012-11-01 Hurley Neil F Petrographic image analysis for determining capillary pressure in porous media
CN109191380A (en) * 2018-09-10 2019-01-11 广州鸿琪光学仪器科技有限公司 Joining method, device, computer equipment and the storage medium of micro-image
CN111007661A (en) * 2019-12-02 2020-04-14 湖南国科智瞳科技有限公司 Microscopic image automatic focusing method and device based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xie Anning; Zhang Hongwei; Zhao Zhigang; Meng Zhiyong; Wang Zengguo: "Infrared panoramic image stitching method based on a three-dimensional rotation model" *
Huo Chunbao; Tong Shuai; Zhao Lihui; Cui Hanfeng: "Microscopic panorama stitching based on SIFT feature matching" *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815690B (en) * 2020-09-11 2020-12-08 湖南国科智瞳科技有限公司 Method, system and computer equipment for real-time splicing of microscopic images
CN111815690A (en) * 2020-09-11 2020-10-23 湖南国科智瞳科技有限公司 Method, system and computer equipment for real-time splicing of microscopic images
US12278938B2 (en) 2020-12-21 2025-04-15 Beijing Boe Optoelectronics Technology Co., Ltd. Mixed reality display method, mixed reality device, and storage medium
WO2022133683A1 (en) * 2020-12-21 2022-06-30 京东方科技集团股份有限公司 Mixed reality display method, mixed reality device, and storage medium
CN116034397A (en) * 2020-12-21 2023-04-28 京东方科技集团股份有限公司 Mixed reality display method, mixed reality device and storage medium
CN112750078A (en) * 2020-12-28 2021-05-04 广州市明美光电技术有限公司 Microscopic image real-time splicing method and storage medium based on electric platform
WO2022213734A1 (en) * 2021-04-06 2022-10-13 北京车和家信息技术有限公司 Method and apparatus for fusing traffic markings, and storage medium and electronic device
CN113537238A (en) * 2021-07-05 2021-10-22 上海闪马智能科技有限公司 Information processing method and image recognition device
CN113537238B (en) * 2021-07-05 2022-08-05 上海闪马智能科技有限公司 Information processing method and image recognition device
CN114187334A (en) * 2021-10-12 2022-03-15 武汉兰丁云医学检验实验室有限公司 Adjacent slice image superposition and alignment method based on HE staining, Ki67 and P16 combination
CN114519730A (en) * 2022-02-21 2022-05-20 维沃移动通信有限公司 Image processing method, apparatus, device and medium
CN114627028A (en) * 2022-03-30 2022-06-14 东南大学 A Method for Correcting Microscope Sample Drift Based on Image Processing Algorithm
CN114897698A (en) * 2022-05-19 2022-08-12 苏州卡创信息科技有限公司 Method and device for acquiring large-range microscopic imaging image
CN115115522A (en) * 2022-08-15 2022-09-27 浙江工业大学 Goods shelf commodity image splicing method and system
CN116309036B (en) * 2022-10-27 2023-12-29 杭州图谱光电科技有限公司 Microscopic image real-time stitching method based on template matching and optical flow method
CN116309036A (en) * 2022-10-27 2023-06-23 杭州图谱光电科技有限公司 Microscopic image real-time stitching method based on template matching and optical flow method
CN116343206A (en) * 2023-05-29 2023-06-27 山东科技大学 An automatic splicing and recognition method for marine plankton analysis microscope images
CN116343206B (en) * 2023-05-29 2023-08-08 山东科技大学 An automatic splicing and recognition method for marine plankton analysis microscope images
CN116978005A (en) * 2023-09-22 2023-10-31 南京凯视迈科技有限公司 Microscope image processing system based on attitude transformation
CN116978005B (en) * 2023-09-22 2023-12-19 南京凯视迈科技有限公司 Microscope image processing system based on attitude transformation
CN119323514A (en) * 2024-12-19 2025-01-17 深圳市生强科技有限公司 Image real-time stitching transition method and device based on pathological slide and readable storage medium

Also Published As

Publication number Publication date
CN111626936B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
CN111626936B (en) A fast panorama stitching method and system for microscopic images
KR101026585B1 (en) A system, computer-implemented method, and computer-readable recording medium for generating high dynamic range (HDR) images from an image sequence of a scene
US20150170405A1 (en) High resolution free-view interpolation of planar structure
US20090209833A1 (en) System and method for automatic detection of anomalies in images
CN115060367B (en) Whole-slide data cube acquisition method based on microscopic hyperspectral imaging platform
Kaufmann et al. Elimination of color fringes in digital photographs caused by lateral chromatic aberration
CN111738964A (en) Image data enhancement method based on modeling
CN116402685A (en) Microscopic image stitching method based on objective table motion information
Saalfeld Computational methods for stitching, alignment, and artifact correction of serial section data
CN114998184A (en) Microscope system and method for processing microscope images
Lu et al. Event camera demosaicing via swin transformer and pixel-focus loss
Rong et al. Mosaicing of microscope images based on SURF
Soh et al. Joint high dynamic range imaging and super-resolution from a single image
Zhao et al. Low-light image enhancement based on normal-light image degradation
Qin et al. A New Dataset and Framework for Real-World Blurred Images Super-Resolution
EP4365774A1 (en) Microscope-based super-resolution method and apparatus, device and medium
CN112203023B (en) Billion pixel video generation method and device, equipment and medium
Krishna et al. GloFlow: Whole slide image stitching from video using optical flow and global image alignment
Yang et al. Mipi 2022 challenge on rgbw sensor fusion: Dataset and report
Qian et al. Extending depth of field and dynamic range from differently focused and exposed images
Gherardi et al. Real-time whole slide mosaicing for non-automated microscopes in histopathology analysis
Kostrzewa et al. B4MultiSR: a benchmark for multiple-image super-resolution reconstruction
CN113723465A (en) Improved feature extraction method and image splicing method based on same
US20250035911A1 (en) Microscopy system and method for processing a microscope image
CN113379608A (en) Image processing method, storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Xiang Beihai; Xu Hui

Inventor before: Gu Xiujuan; Xiang Beihai; Xu Hui