CN118314309B - 3D suture splicing and fusion method and system based on structural content perception
- Publication number: CN118314309B
- Application number: CN202410741131.9A
- Authority: CN (China)
- Prior art keywords: layer, optimal, sets, dimensional, suture line
- Legal status: Active
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics; G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T3/00—Geometric image transformations in the plane of the image; G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting; G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
Abstract
The invention discloses a 3D suture splicing and fusion method and system based on structural content perception, belonging to the field of image processing. The method comprises the following steps: acquiring three-dimensional images of a target structure with a light sheet microscope to generate a plurality of three-dimensional images; mapping and translating the three-dimensional images to generate a plurality of overlapping areas containing overlapping layer sets; performing structural content perception recognition on the overlapping layer sets to generate a plurality of structural content factor sets; iterating over the overlapping layer sets layer by layer at a preset iteration step to generate a plurality of optimal suture lines; and performing a planar scan of the target structure and stitching the scanned images to generate a target image of the target structure. Through deep-learning-based structure perception and a 3D optimal suture fusion algorithm, artifacts in the stitching area are effectively avoided and damage to fine structures in medical images is greatly reduced.
Description
Technical Field
The invention relates to the field of image processing, in particular to a 3D suture splicing and fusing method and system based on structural content perception.
Background
To obtain high-resolution microscopic images of a large sample, the sample must typically be imaged tile by tile at multiple locations and the tiles then fused into a complete large-field-of-view image. However, owing to systematic and optical effects and to errors in the alignment algorithm, common linear fusion or maximum-intensity fusion algorithms introduce two problems during stitching: first, checkerboard- and stripe-shaped shadow artifacts appear in the fusion area; second, fine structures in the medical image are altered or destroyed, affecting subsequent analysis.
For the shadow-artifact problem, the prior art mainly trains a deep learning network on an artificially constructed training set containing shadows. This approach, however, suffers from discrepancies between the training samples and real data and from insufficient generalization capability. For the fine-structure-damage problem, existing schemes use deep learning to judge the authenticity of structures and screen them when extracting structural signals, but this has limited effect on structural changes caused by fusion itself.
Disclosure of Invention
The application provides a 3D suture line splicing and fusion method and system based on structural content perception, aiming to solve the technical problems of artifact introduction and fine structure damage caused by common fusion algorithms in the medical 3D image stitching process in the prior art.
In view of the above problems, the application provides a 3D suture splicing and fusing method and system based on structural content perception.
In a first aspect of the disclosure, a 3D suture splicing and fusion method based on structural content perception is provided, the method comprising: performing three-dimensional image acquisition on a target structure with a light sheet microscope to generate a plurality of three-dimensional images, wherein the three-dimensional images carry a plurality of positioning marks; mapping and translating the plurality of three-dimensional images based on the plurality of positioning marks to generate a plurality of overlapping areas, wherein the plurality of overlapping areas comprise a plurality of overlapping layer sets; performing structural content perception recognition on the plurality of overlapping layer sets with a structure perception model to generate a plurality of structural content factor sets; iterating over the plurality of overlapping layer sets layer by layer at a preset iteration step, sequentially identifying the optimal suture line in each overlapping layer, and performing loss analysis on the layer-by-layer iteration results with a preset suture loss function combined with the plurality of structural content factor sets, to generate a plurality of optimal suture line groups; and performing a planar scan of the target structure with the light sheet microscope again, and stitching the scanned images based on the plurality of optimal suture line groups to generate a target image of the target structure.
In another aspect of the present disclosure, a 3D suture splicing and fusion system based on structural content perception is provided, the system comprising: a three-dimensional image acquisition module, configured to perform three-dimensional image acquisition on a target structure with a light sheet microscope to generate a plurality of three-dimensional images, wherein the three-dimensional images carry a plurality of positioning marks; an image mapping translation module, configured to map and translate the plurality of three-dimensional images based on the plurality of positioning marks to generate a plurality of overlapping areas, wherein the overlapping areas comprise a plurality of overlapping layer sets; a structure perception module, configured to perform structural content perception recognition on the plurality of overlapping layer sets with a structure perception model to generate a plurality of structural content factor sets; a suture line group generation module, configured to iterate over the plurality of overlapping layer sets layer by layer at a preset iteration step, sequentially identify the optimal suture line in each overlapping layer, and perform loss analysis on the layer-by-layer iteration results with a preset suture loss function combined with the plurality of structural content factor sets, to generate a plurality of optimal suture line groups; and a target image generation module, configured to perform a planar scan of the target structure with the light sheet microscope again and stitch the scanned images based on the plurality of optimal suture line groups to generate a target image of the target structure.
One or more technical solutions provided by the application have at least the following technical effects or advantages:
Because three-dimensional image acquisition is performed on the target structure with a light sheet microscope, a plurality of three-dimensional images with positioning marks are generated, yielding original three-dimensional image data with position information and laying a foundation for subsequent accurate stitching. Mapping translation based on the positioning marks of the three-dimensional images generates overlapping areas containing multiple overlapping layers; image registration is thus completed through the positioning marks, the overlapping areas between the images to be stitched are found, and the overlapping layer information is retained in preparation for structure perception and optimal suture line identification. Structural content perception recognition of the overlapping layer sets with a structure perception model generates structural content factor sets that quantitatively characterize the content information of the overlapping areas and provide an important basis for suture line optimization. Iterating over the overlapping layers layer by layer at a preset step, identifying the optimal suture line in each overlapping layer, and analyzing the iteration results with the structural content factor sets and a preset loss function yields a plurality of optimal suture line groups, so that the optimal stitching positions are solved using structural information and iterative optimization, overcoming the prior-art defect of considering only gray-level information. Finally, a planar scan of the target structure is performed, and the scanned images are stitched based on the optimal suture line groups to generate a target image of the target structure; guiding the stitching with the obtained optimal suture lines avoids shadow artifacts in the fusion area, preserves the fine structures of the original images, and produces a high-quality, seamless three-dimensional image. This solves the prior-art technical problems of artifacts introduced in the overlapping area and fine structure damage caused by common fusion algorithms in medical 3D image stitching, and achieves, through deep-learning-based structure perception and a 3D optimal suture line fusion algorithm, the technical effects of effectively avoiding stitching-area artifacts and greatly reducing damage to the fine structures of medical images.
The foregoing is merely an overview of the technical solution of the present application. In order that the technical means of the present application may be understood more clearly and implemented in accordance with the content of the specification, and in order that the above and other objects, features and advantages of the present application may be more readily apparent, preferred embodiments are set forth below.
Drawings
FIG. 1 is a schematic flow chart of a 3D suture splicing and fusing method based on structural content perception according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a 3D suture splicing and fusion system based on structural content perception according to an embodiment of the present application.
Reference numerals: 11, three-dimensional image acquisition module; 12, image mapping translation module; 13, structure perception module; 14, suture line group generation module; 15, target image generation module.
Detailed Description
The technical scheme provided by the application has the following overall thought:
The embodiment of the application provides a 3D suture splicing fusion method and system based on structural content perception, which are used for solving the problems of splicing artifacts and fine structure loss caused by the limitation of a fusion algorithm in the prior art.
Specifically, a light sheet microscope is first used to perform three-dimensional acquisition of the target structure, obtaining original image data with positioning marks. The images are then registered according to the positioning marks, and overlapping regions containing rich layer information are extracted. On this basis, a deep-learning-based structure perception model is introduced to perform content recognition on the overlapping layers and quantitatively characterize the structural features of the images. Next, an iterative optimization strategy is adopted: the structural content factors and a preset stitching loss function are used together to identify the optimal stitching position in each overlapping layer, yielding an optimal suture line group. Finally, guided by these suture lines, the scanned images of the target structure are stitched to generate a large-field-of-view three-dimensional image that is artifact-free and faithful to fine structures.
In general, the application fully exploits the intrinsic structural information of medical images and combines it with three-dimensional suture line optimization, realizing high-quality, seamless image stitching; compared with the prior art, stitching quality and fine-structure preservation are significantly improved.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Example 1
As shown in fig. 1, an embodiment of the present application provides a 3D suture splicing and fusion method based on structural content perception, which includes:
s1: three-dimensional image acquisition is carried out on a target structure by utilizing a light sheet microscope, and a plurality of three-dimensional images are generated, wherein the plurality of three-dimensional images are provided with a plurality of positioning marks;
Specifically, in the biomedical field, researchers often need to image large biological samples such as tissue slices, cells, and organs in three dimensions in order to understand their internal fine structural features more fully. The target structure refers to a large biological sample requiring three-dimensional imaging analysis. Because such a sample is large and cannot be covered completely by a single light sheet microscope acquisition, local three-dimensional structure images are acquired region by region, and the complete three-dimensional structure is then reconstructed by image stitching.
First, the target structure is placed under the light sheet microscope, and the focal plane of the microscope is adjusted to focus on a specific layer of the target structure. The imaging system of the light sheet microscope then scans the target structure layer by layer, continuously moving the focus in the direction perpendicular to the focal plane, thereby obtaining a series of two-dimensional slice images. These slices contain structural information of the target structure at different depths, and three-dimensional reconstruction of the slice series generates three-dimensional images reflecting the three-dimensional morphological characteristics of the target structure. Because the target structure is large, it is difficult to cover completely in a single acquisition; therefore, different areas of the target structure are imaged separately, yielding a plurality of three-dimensional images, each representing one local area of the target structure. Meanwhile, to facilitate subsequent stitching, the spatial position information of each three-dimensional image, such as the boundary coordinates and center coordinates of the three-dimensional sub-image, is recorded synchronously during acquisition. This spatial position information forms the positioning marks of the three-dimensional images, which are used for registering and stitching the plurality of three-dimensional images.
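By way of illustration only (this sketch is not part of the claimed method), the record produced for each acquired region, i.e. a local Z-stack together with its positioning mark, could be organized as follows; `TileRecord`, `capture_stack`, and all field names are hypothetical:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TileRecord:
    """One locally acquired 3D image (a Z-stack of 2D slices) plus its positioning mark."""
    volume: np.ndarray   # shape (Z, Y, X): slices stacked along the scan axis
    center_xyz: tuple    # stage coordinates of the tile centre
    bounds_xyz: tuple    # ((x0, x1), (y0, y1), (z0, z1)) boundary coordinates

def acquire_tiles(stage_positions, capture_stack):
    """Image the sample region by region; `capture_stack` is a hypothetical
    acquisition callback returning (volume, bounds) for one stage position."""
    tiles = []
    for pos in stage_positions:
        vol, bounds = capture_stack(pos)   # one local Z-stack per region
        tiles.append(TileRecord(volume=vol, center_xyz=pos, bounds_xyz=bounds))
    return tiles
```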
And carrying out layer-by-layer scanning imaging on the target structure through a light sheet microscope to obtain a plurality of three-dimensional images reflecting three-dimensional structural characteristics of different local areas of the target structure, and obtaining a positioning mark of each three-dimensional image so as to lay a data foundation for the follow-up three-dimensional image splicing.
S2: mapping and translating the plurality of three-dimensional images based on the plurality of positioning identifiers to generate a plurality of overlapping areas, wherein the plurality of overlapping areas comprise a plurality of overlapping layer sets;
Specifically, after obtaining multiple three-dimensional images of the target structure, these three-dimensional images need to be stitched to obtain a three-dimensional reconstruction result of the complete target structure. However, since each three-dimensional image only characterizes a local area of the target structure, and there may be a deviation in the position and angle acquired during imaging, multiple three-dimensional images cannot be directly stitched, and spatial registration is performed by mapping translation.
Firstly, the position and the direction of each three-dimensional image are adjusted by using the positioning mark of each three-dimensional image through space mapping transformation, so that adjacent three-dimensional images can be correctly joined in space to form an overlapping area. The positioning mark provides the relative position information of each three-dimensional image in the target structure and is the basis for spatial registration. In the implementation, one three-dimensional image is selected as a reference, and the coordinate system of the other three-dimensional images is mapped under the coordinate system of the reference image in a mapping translation mode, so that the transformed coordinates can accurately reflect the relative position relationship of the three-dimensional images in the target structure. After the mapping transformation is completed, adjacent three-dimensional images will spatially produce a certain overlap region. Since the three-dimensional image is composed of a series of two-dimensional slice images, the overlapping region is also three-dimensional, and is formed by stacking overlapping regions of a plurality of two-dimensional slice images in the vertical direction. The overlapping area on each two-dimensional slice image is referred to as an overlapping layer, and the set of overlapping areas of the plurality of two-dimensional slice images constitutes a set of overlapping layers.
By utilizing the positioning mark of the three-dimensional image, the spatial position relation of the three-dimensional image is adjusted in a mapping translation mode, an overlapping area is generated between adjacent three-dimensional images, the overlapping area is organized into a plurality of overlapping layer sets according to the slice hierarchical relation in the vertical direction, and data support is provided for the follow-up three-dimensional image splicing in the overlapping area.
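A minimal sketch of how the overlap region and its overlapping layer set could be derived once the tiles have been mapped into the reference coordinate system; a pure-translation registration is assumed and the function names are illustrative:

```python
import numpy as np

def overlap_region(ref_bounds, mov_bounds):
    """Given two tiles' boundary coordinates ((x0,x1),(y0,y1),(z0,z1)) expressed in
    the reference frame after mapping translation, return the shared box, or None."""
    box = []
    for (a0, a1), (b0, b1) in zip(ref_bounds, mov_bounds):
        lo, hi = max(a0, b0), min(a1, b1)
        if hi <= lo:
            return None               # no overlap along this axis
        box.append((lo, hi))
    return tuple(box)                 # overlap box in (x, y, z) order

def overlap_layers(vol_a, vol_b, box, ref_bounds, mov_bounds):
    """Crop both volumes to the overlap box; each z index of the result is one
    'overlap layer', and the stack of layers is the overlapping layer set."""
    def crop(vol, bounds):
        (x0, x1), (y0, y1), (z0, z1) = [(lo - b[0], hi - b[0])
                                        for (lo, hi), b in zip(box, bounds)]
        return vol[z0:z1, y0:y1, x0:x1]   # volumes indexed (Z, Y, X)
    return crop(vol_a, ref_bounds), crop(vol_b, mov_bounds)
```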
S3: performing structural content perception recognition on the overlapped layer sets by using a structural perception model to generate a plurality of structural content factor sets;
In particular, after obtaining an overlapping layer set of multiple three-dimensional images, the image content within the overlapping region needs to be analyzed and understood in order to be able to more intelligently determine the optimal stitching scheme of the three-dimensional images later. The conventional image stitching method usually only considers low-level visual characteristics of the image, such as pixel values, gradients and the like, but ignores high-level semantic information of the image content, so that the stitching effect is poor. Therefore, a structural perception model is introduced, and semantic level understanding and description are carried out on the image content in a deep learning mode. The structural perception model is based on a convolutional neural network, and can automatically extract and learn high-level semantic features from the image through training on a large amount of marked data, and effectively characterize the structural content of the image.
After the multiple overlapping layer sets are acquired, inference and prediction are performed on them with the pre-trained structure perception model. Specifically, each overlapping layer is input into the model, which extracts deep semantic features of the layer's image content through multi-layer convolution and pooling operations and generates a corresponding structural content factor. The structural content factor is a real-valued vector that quantifies the degree of structural deviation of the input overlapping layer's image content. Structural content perception recognition is performed in turn on the overlapping layers of all the overlapping layer sets, yielding the structural content factor set corresponding to each overlapping layer set; each set includes the structural content factors of all two-dimensional slice images within that overlapping layer set. Together, the structural content factor sets describe the spatial distribution of image structure content in the overlapping area and reflect the degree of structural deviation of the image content. In the subsequent three-dimensional stitching process, these factors are fully utilized, combining low-level visual features with high-level semantic features to determine the optimal stitching position more accurately and obtain a stitching result with coherent content and complete structure.
By introducing a structural perception model, deep semantic understanding and high-level characteristic characterization are carried out on the structural content of the three-dimensional image overlapping region, and a corresponding structural content factor set is generated, so that important priori information and knowledge guidance are provided for subsequent three-dimensional image stitching.
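As an illustrative sketch only: assuming the structure perception model is a PyTorch module producing per-pixel structure logits, per-layer structural content factors could be computed as below. Reducing each probability map to a single scalar per layer is a simplification of this sketch; the patent's factor may equally be a vector:

```python
import torch

def structure_content_factors(model, overlap_layers):
    """Run a pre-trained structure perception model (assumed: a PyTorch module
    mapping a (1, 1, H, W) float tensor to per-pixel structure logits) over each
    2D overlap layer, reducing each probability map to one factor per layer."""
    model.eval()
    factors = []
    with torch.no_grad():
        for layer in overlap_layers:                  # layer: 2D array (H, W)
            x = torch.as_tensor(layer, dtype=torch.float32)[None, None]
            prob = torch.sigmoid(model(x))            # per-pixel structure probability
            factors.append(prob.mean().item())        # one factor for this layer
    return factors                                    # the layer set's factor set
```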
S4: performing layer-by-layer iteration on the plurality of overlapped layer sets according to a preset iteration step length, sequentially performing optimal suture line identification in each overlapped layer, performing loss analysis on a layer-by-layer iteration result by combining the plurality of structure content factor sets by utilizing a preset suture loss function, and generating a plurality of optimal suture line groups;
Specifically, after the structural content factors of the multiple overlapping layer sets are obtained, the optimal stitching position in each overlapping layer is further determined so as to achieve accurate stitching of the three-dimensional images. And adopting a strategy of iteration layer by layer, searching an optimal suture line in each overlapped layer, introducing a suture loss function to optimize, and finally generating an optimal suture line group applicable to the whole three-dimensional overlapped region.
Firstly, the plurality of overlapping layer sets are processed layer by layer according to the preset iteration step. The preset iteration step refers to the inter-layer distance between two adjacent overlapping layers in the vertical direction; its size can be set by balancing actual requirements against computational efficiency, and a value equal to or slightly larger than the slice thickness is generally chosen. Then, for each overlapping layer, optimal suture line identification is performed on its corresponding two-dimensional image. The optimal suture line is the dividing line that maximizes image content continuity and structural integrity in the overlapping region. To find it, a minimum-cut algorithm based on graph theory is adopted: the overlapping area is expressed as a weighted graph in which each pixel corresponds to a node and the edge weight between adjacent pixels is determined by visual features such as pixel value difference and gradient difference. By solving the minimum-cut problem on this graph, an optimal dividing line across the overlap region, i.e. an optimal suture line, is obtained.
In the process of identifying the optimal suture line, a preset suture loss function is introduced to evaluate the merits of candidate suture lines. For example, the preset suture loss function can jointly evaluate visual feature loss, structural content loss, and suture smoothness loss. The visual feature loss measures the difference in visual features such as color and texture between the pixels on the two sides of the suture line; the smaller the difference, the lower the loss. The structural content loss measures how much the suture line affects the structural integrity and continuity of the image in the overlapping area: using the generated structural content factor sets, the weighted sum of the structural content factors in the region the suture line passes through is computed, and the larger the factors, the higher the loss. The suture smoothness loss measures the geometric complexity of the suture line, encouraging it to be as straight and smooth as possible and avoiding jagged or overly tortuous paths. By minimizing the preset suture loss function, the position of the optimal suture line is jointly optimized in terms of visual features, structural content, and geometry.
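One plausible formalization of such a preset suture loss function, assuming the suture in a layer is parameterised as one column index per row and that `factor_map` holds per-pixel structural content factors; the weights are illustrative, not values from the patent:

```python
import numpy as np

def suture_loss(seam_cols, img_a, img_b, factor_map,
                w_visual=1.0, w_struct=1.0, w_smooth=0.1):
    """Score one candidate suture through one overlap layer."""
    cols = np.asarray(seam_cols)
    rows = np.arange(cols.size)
    # visual term: colour/texture disagreement of the two tiles along the suture
    visual = np.abs(img_a[rows, cols].astype(np.float64) - img_b[rows, cols]).sum()
    # structural term: penalise cutting through high structure-content pixels
    struct = factor_map[rows, cols].sum()
    # smoothness term: penalise jagged suture paths (large row-to-row jumps)
    smooth = np.abs(np.diff(cols)).sum()
    return w_visual * visual + w_struct * struct + w_smooth * smooth
```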
After the identification of the optimal stitching lines in the single overlapping layer is completed, the identification result is used as a constraint condition, and the processing of the next overlapping layer is iterated until the optimal stitching lines in all the overlapping layers are obtained. The stitching lines form a three-dimensional curved surface in the vertical direction, and the original three-dimensional overlapping area is divided into two three-dimensional images to be spliced. The optimal sutures for all overlapping layer sets are organized to form a plurality of optimal suture sets. Each optimal stitch line set corresponds to a pair of adjacent three-dimensional images, indicating the optimal stitching location and boundary of the adjacent three-dimensional images in three-dimensional space.
And identifying an optimal suture line in each overlapped layer in a layer-by-layer iteration mode, introducing a preset suture loss function to optimize, guiding the selection of the suture line by utilizing the structural content factors, and finally generating an optimal suture line group suitable for the whole three-dimensional overlapped region, thereby providing an optimal splicing path for the subsequent three-dimensional image splicing.
S5: and carrying out plane scanning on the target structure by using the light sheet microscope again, and splicing the images after plane scanning based on the optimal suture line groups to generate a target image of the target structure.
Specifically, after obtaining the plurality of optimal suture sets, a second planar scan of the target structure is performed using a light sheet microscope. Unlike the first scan, only one two-dimensional slice image is acquired at each scan location during this scan, rather than a complete three-dimensional image. The purpose of this is to reduce the amount of data, improve the scanning efficiency, and at the same time ensure that the scanning position is consistent with the first scanning for subsequent image stitching. After the scan is completed, a series of two-dimensional slice images will be obtained that cover the entire target structure.
Next, these two-dimensional slice images are stitched using the generated plurality of optimal stitch lines. For each pair of adjacent two-dimensional slice images to be spliced, determining the relative position relationship of the two-dimensional slice images in the three-dimensional space, and finding out the corresponding three-dimensional optimal suture line group. And projecting the three-dimensional optimal suture line group onto the current two-dimensional slice image plane to obtain a two-dimensional suture line. This stitching line divides the overlapping area of the current two-dimensional slice images into two sub-areas. And splicing the non-overlapping areas of the two-dimensional slice images and the corresponding overlapping sub-areas by taking the two-dimensional suture line as a boundary to form a new two-dimensional slice image. Stitching is repeated until all two-dimensional slice images are stitched into one complete two-dimensional slice image. During the splicing process, visual artifacts and structural distortion in the splice result will be minimized due to the use of optimized optimal suture sets. The spliced two-dimensional slice images realize seamless connection on the image content, and the integrity and continuity of a target structure are maintained. And then, stacking the spliced two-dimensional slice images in sequence in the vertical direction to form a three-dimensional image of the target structure, namely the target image, covering the whole space range of the target structure, and realizing seamless splicing in the three-dimensional space.
And performing planar scanning on the target structure again, and splicing the two-dimensional slice images obtained by scanning by utilizing the optimal suture line group, so that a complete three-dimensional image of the target structure is finally generated. The target image not only realizes seamless splicing on a two-dimensional plane, but also keeps good continuity and consistency in a three-dimensional space, thereby avoiding the generation of splicing artifacts and reserving a fine structure to the maximum extent.
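A minimal sketch of the seam-based compositing described above, assuming registered slices and a suture expressed as one column per row:

```python
import numpy as np

def stitch_pair(img_a, img_b, seam_cols):
    """Composite two registered 2D slices along a suture (one column per row),
    taking pixels left of the suture from img_a and the rest from img_b."""
    h, w = img_a.shape
    cols = np.arange(w)[None, :]
    mask_a = cols < np.asarray(seam_cols)[:, None]   # True where img_a contributes
    return np.where(mask_a, img_a, img_b)
```

Because every output pixel is taken verbatim from exactly one source image, no blending weights are applied, which is what allows fine structures to survive unaveraged.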
Further, the embodiment of the application further comprises:
Performing position recognition on edges of the plurality of three-dimensional images based on the plurality of positioning marks to generate a plurality of edge scatter point sets, wherein the plurality of edge scatter point sets comprise a plurality of transverse position identifier sets and a plurality of longitudinal position identifier sets;
Performing transverse mapping translation on the plurality of three-dimensional images according to the plurality of transverse position identification sets to obtain a plurality of initial overlapping areas;
And carrying out longitudinal mapping translation on the initial overlapping areas according to the longitudinal position identifiers to obtain the overlapping areas, wherein the overlapping areas comprise overlapping layer sets.
In a possible implementation manner, in order to realize the spatial registration of the three-dimensional images, the relative position of each three-dimensional image in the target structure needs to be determined, and the positioning identification of each three-dimensional image is utilized to extract the position information of the edge scattered points of each three-dimensional image, so that the spatial mapping relation between the three-dimensional images and the target structure is established. First, the edge area of each three-dimensional image is sampled to obtain a set of scattered points representing the boundary shape of the image, and the spatial coordinates of the scattered points of the edge are calculated through the positioning identification (such as boundary frame coordinates, center point coordinates and the like) of the image and the pixel coordinate system of the image itself. And summarizing the edge scatter point sets of all the three-dimensional images to obtain a plurality of edge scatter point sets. To facilitate subsequent spatial transformations, the scatter points are categorized according to both lateral and longitudinal directions, forming a plurality of sets of lateral position identifiers and a plurality of sets of longitudinal position identifiers. The transverse position identification set characterizes the relative position relation of the three-dimensional image in the horizontal direction, and the longitudinal position identification set characterizes the relative position relation of the three-dimensional image in the vertical direction. By identifying and classifying the edge scattered point positions, a spatial corresponding relation between the three-dimensional image and the target structure is established, and a foundation is laid for subsequent image registration.
After the transverse position identification set of the three-dimensional image is obtained, the information is used for carrying out space transformation on the three-dimensional image in the transverse direction, so that the three-dimensional image is subjected to preliminary registration in the horizontal direction. First, a three-dimensional image is selected as a reference image, and a set of lateral position identifiers thereof is used as a lateral reference of a target structure. Then, for each of the other three-dimensional images, a positional shift amount between the set of lateral position identifications thereof and the reference is calculated, and the image is subjected to a horizontal-direction translation operation in accordance with the shift amount. After the transversal mapping translation, all three-dimensional images are aligned in the horizontal direction, and the edge scatter point identification sets of the three-dimensional images become closer in the transversal direction. In this way, adjacent three-dimensional images are overlapped to a certain extent in the horizontal direction, a preliminary overlapping region is formed, and a plurality of initial overlapping regions are obtained.
After the initial overlapping area after the transverse mapping translation is obtained, the longitudinal position identification set of the three-dimensional image is further utilized to carry out space transformation on the initial overlapping area in the vertical direction, and finally a plurality of complete overlapping areas are obtained. First, the positional shift amount of the other initial overlapping region in the vertical direction is calculated with reference to a certain initial overlapping region. These initial overlapping regions are then subjected to a translation operation in the vertical direction based on the offset, so that they are also aligned in the vertical direction. After the longitudinal mapping is shifted, the positions of all initial overlapping areas in the vertical direction are adjusted, so that a plurality of overlapping areas are formed. These overlapping areas have not only an overlap in the horizontal direction but also a certain overlap in the vertical direction.
The three-dimensional image is subjected to primary registration in space by recognizing the edge scattered point positions of the three-dimensional image and performing space mapping transformation, so that a plurality of overlapped areas are generated, and the overlapped areas are overlapped in the horizontal direction and the vertical direction to form a plurality of overlapped layer sets, thereby laying a foundation for subsequent processing.
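Under the assumption of a translation-only registration model, the two-stage transverse-then-longitudinal alignment could be sketched as follows; `register_tile` and the centroid-based offset estimate are illustrative choices, not the patent's prescribed computation:

```python
import numpy as np

def register_tile(ref_pts, mov_pts):
    """Estimate the pure translation aligning a tile's edge scatter points (N, 3),
    in (x, y, z), to the reference tile's: first the transverse (x, y) mapping
    translation, then the longitudinal (z) one."""
    delta = ref_pts.mean(axis=0) - mov_pts.mean(axis=0)     # centroid offset
    lateral = np.array([delta[0], delta[1], 0.0])           # horizontal alignment
    moved = mov_pts + lateral                               # initial overlap position
    longitudinal = np.array([0.0, 0.0,
                             ref_pts[:, 2].mean() - moved[:, 2].mean()])
    return lateral, longitudinal        # total shift = lateral + longitudinal
```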
Further, the embodiment of the application further comprises:
Collecting a plurality of original images and a plurality of original structure data;
performing binarization processing on the plurality of original images to obtain a plurality of processed images, wherein the plurality of processed images comprise a plurality of structures and backgrounds, and the plurality of processed images comprise a plurality of structure feature point sets;
marking and optimizing a plurality of structure feature point sets of the plurality of processed images through an erosion algorithm in combination with the plurality of original structure data, to obtain a plurality of original structure distribution identifiers;
And performing supervision training on a framework constructed based on the U-net network by utilizing the plurality of original images and the plurality of original structure distribution identifiers to obtain the structure perception model.
In a preferred embodiment, to train a deep learning model that is capable of efficiently perceiving the structural content of an image, first, a training data set is constructed that contains the original image and its corresponding original structural data markers. In practice, multiple target structure samples are collected, and original images from different areas and different levels of the multiple target structure samples are collected, wherein the images cover various tissue and structural features of the target structure samples, so that multiple original images are formed. Meanwhile, for each original image, corresponding structure marking data is acquired, namely, the structure data in the image is identified in a certain form (such as a contour, a key point and the like) through manual marking. Through collection, a plurality of paired original images and original structure data are obtained, so that not only is rich image texture information contained, but also fine structure semantic information is contained, and a foundation is provided for subsequent model training. Then, the original image is converted into a binary image containing only two pixel values of the structure and the background by adopting a binarization processing method. The binarization processing sets a threshold value, sets pixels with pixel values higher than the threshold value as a structure, and sets pixels with pixel values lower than the threshold value as a background. The selection of the threshold value is adaptively determined according to the gray level histogram distribution characteristics of the image, so that the optimal separation of the structural background is realized. After binarization processing, the original image is converted into a plurality of processed images. In the processed image, the target structure sample is represented as a structure region, the background is represented as a background region, and a distinct contrast is formed between the two. And simultaneously, extracting a plurality of structural feature point sets from the processed image, wherein the structural feature points correspond to pixel point coordinates in a structural region and describe the spatial distribution characteristics of the target structural sample. Through binarization processing, the original image is converted into a simpler and clearer form, interesting structural information is highlighted, and a foundation is laid for subsequent structural marking optimization and characteristic representation.
After the processed images and structure feature point sets are obtained, an erosion algorithm is introduced to optimize the structure feature points and further improve the precision and reliability of the structure marking data. In a specific embodiment, the erosion algorithm is applied to the processed image to morphologically shrink its structure regions: the erosion operation removes small noise and fine spurious structures, leaving only relatively large, stable structural components. Progressive erosion yields a series of differently shrunken structure regions. By calculating indexes such as the degree of overlap and similarity between each eroded structure region and the original structure data, the erosion level that best matches the original structure data is selected, and the best of the eroded structure regions serves as the optimization result of the structure feature point set. Optimization with the erosion algorithm and the original structure data thus produces a plurality of original structure distribution identifiers, which indicate which pixels in the processed image belong to the real target structure and which belong to background or noise. Compared with the initial structure feature point sets, the original structure distribution identifiers are more accurate in spatial position and more reliable in structural integrity, providing high-quality marking data for subsequent model training. With the original images and the optimized structure distribution identifiers in hand, a structure perception model is constructed on a framework based on the U-net network. The U-net network is a deep learning network based on the fully convolutional network (FCN) and is suited to semantic segmentation of images. During training, the original images serve as input and the original structure distribution identifiers as labels, and the U-net network is trained end to end. By minimizing a loss function (e.g., cross-entropy loss) between the predictions and the labels, the model learns the correspondence between images and structure marks, acquiring the ability to perceive and locate structural components in an image. Training finally yields the structure perception model, which can receive any image as input and automatically predict the probability that each pixel belongs to a structural component; the prediction can be represented as a probability map of the same size as the input image, where a higher probability indicates a greater likelihood that the location belongs to the target structure.
By collecting original images and original structure data, optimizing the marking results with the erosion algorithm and the original structure data, and finally training, the structure perception model is obtained, which automatically perceives and locates the target structure in an input image and supports subsequent structural content perception and image stitching.
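A compact sketch of the erosion-based label refinement, assuming IoU as the overlap/similarity index and a simple global threshold standing in for the histogram-adaptive one; the refined masks would then serve as labels for end-to-end U-net training with a cross-entropy loss:

```python
import numpy as np
from scipy import ndimage

def _iou(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def refine_structure_label(raw_img, ref_mask, max_iters=5):
    """Binarise a raw image, then erode progressively and keep the erosion level
    that best matches the manually annotated structure data (scored by IoU)."""
    mask = raw_img > (raw_img.mean() + raw_img.std())    # structure vs. background
    best, best_score = mask, _iou(mask, ref_mask)
    for k in range(1, max_iters + 1):
        eroded = ndimage.binary_erosion(mask, iterations=k)  # progressive erosion
        score = _iou(eroded, ref_mask)
        if score > best_score:                           # best match to original data
            best, best_score = eroded, score
    return best          # original structure distribution identifier for training
```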
Further, the embodiment of the application further comprises:
Extracting a first overlapped layer set from the plurality of overlapped layer sets, extracting a first three-dimensional space formed by a first layer from the first overlapped layer set, and randomly generating a first target point in the first three-dimensional space;
Extracting a first adjacent point set of the first target point, wherein the first adjacent point set comprises eight adjacent points of the first target point;
Generating a first initial suture line in a first layer based on the first target point, dividing the first adjacent point set according to the first initial suture line in the first layer, and generating a first dividing point set and a second dividing point set;
Carrying out gradient identification on the first target point, the first dividing point set and the second dividing point set by utilizing a three-dimensional gradient difference formula to generate a first gradient difference;
generating a first stage suture line in a first layer based on the first target point again, and determining a second gradient difference of the first stage suture line in the first layer;
Judging whether the second gradient difference is smaller than or equal to the first gradient difference, if so, taking the first stage suture line in the first layer as the first suture line in the first layer of the first target point;
And taking the first suture line in the first layer corresponding to the minimum gradient difference value as the first optimal suture line in the first layer of the first target point after multiple divisions.
In a preferred embodiment, a method is provided for searching for optimal sutures layer by layer in a three-dimensional overlap region and optimizing suture selection using a three-dimensional gradient difference formula. Before starting searching for the optimal suture, determining a starting point of searching, and randomly selecting one of a plurality of overlapped layer sets as a first overlapped layer set to start suture searching from the overlapped layer set. Then, the bottommost layer is selected from the first overlapping layer set and is taken as a two-dimensional slice plane where the searching starting point is located, and the two-dimensional slice plane is called a first layer. The first layer and the adjacent slice layers form a three-dimensional space together, which is called a first three-dimensional space and represents a three-dimensional area which takes the first layer as a starting point and extends downwards for a certain layer number. After the three-dimensional space for initiating the search is determined, a specific initiation search point, called the first target point, is selected within the space. The selection of the first target point is achieved by means of random sampling, namely, a three-dimensional coordinate point is randomly generated in the first three-dimensional space and serves as an initial search point. After the initial search point is determined, a local search is performed in its neighborhood in order to find the optimal suture path. And taking the first target point as the center, extracting 8 adjacent points around the first target point to form a first adjacent point set. The 8 neighboring points are respectively located in the upper direction, the lower direction, the left direction, the right direction, the upper left direction, the upper right direction, the lower left direction and the lower right direction of the first target point, and form a 3x3 neighboring window together with the first target point. The first set of neighboring points delineates local structural and texture information around the first target point, providing an important reference for subsequent suture path selection. By analyzing the image characteristics and the structural content of each point in the first adjacent point set, the advantages and disadvantages of different suture paths can be evaluated, so that the optimal local path can be selected. Because the suture search is performed in three-dimensional space, the first set of neighboring points actually contains pixels from different slice layers. The pixel points form a cube-shaped neighborhood in the three-dimensional space and reflect the three-dimensional structure information around the first target point.
After extracting the neighborhood information of the initial search point, an initial suture path is selected in the neighborhood as the starting point of the subsequent search. A direction is randomly selected in the first layer by taking the first target point as a starting point, and a starting suture is generated and is called a first starting suture in the first layer. The suture extends from the first target point along a selected direction, passes through the first adjacent point set, and divides the pixels in the adjacent point into two parts. Dividing the first adjacent point set into two subsets according to the position of a first starting suture line in the first layer, wherein the pixel points positioned at one side of the suture line form a first dividing point set, and the pixel points positioned at the other side of the suture line form a second dividing point set. The two division point sets respectively correspond to two sub-areas of the image to be spliced and reflect the difference of image contents at two sides of the suture line. The choice of the first starting suture in the first layer is random and therefore it is not necessarily an optimal suture path. The subsequent search process will gradually find a better suture path by optimization and iteration based on the starting suture.
After the first starting suture line in the first layer is generated and the neighborhood point set is divided, a gradient difference metric is introduced to evaluate the suture line. Using a three-dimensional gradient difference formula, the gradient values in three-dimensional space are calculated for the first target point and for each point in the first and second division point sets. The three-dimensional gradient reflects the gray-level change rate of the image in the three spatial directions and characterizes local texture and edge features. The gradient values of all points in the first division point set are summed, and the gradient values of all points in the second division point set are subtracted, giving a scalar value called the first gradient difference, which quantifies the overall difference in gradient characteristics between the point sets on either side of the first starting suture line in the first layer. Then, starting again from the first target point, a direction different from that of the first starting suture line is selected in the first layer to generate a new suture line, called the first-stage suture line in the first layer. Like the first starting suture line, it passes through the neighborhood of the first target point and divides the pixels in the neighborhood into two subsets; the first adjacent point set is re-divided according to its position, and the gradient difference of the two new division point sets is calculated with the three-dimensional gradient difference formula to obtain the second gradient difference. It is then judged whether the second gradient difference is smaller than or equal to the first gradient difference. If so, the first-stage suture line is better positioned than the first starting suture line and is selected as the optimal suture line of the first target point in the first layer, called the first suture line in the first layer; conversely, if the second gradient difference is larger, the first starting suture line is retained as the first suture line in the first layer. By comparing the gradient differences of different suture candidates, the suture line with the smallest gradient difference is found within the local range as the best choice for the current position; this gradient-difference-based selection effectively reduces visual artifacts and structural distortion during image stitching and improves the quality of the stitching result.
Next, a plurality of new suture candidates are generated in different directions within the neighborhood of the first target point, called the second-stage suture line in the first layer, the third-stage suture line in the first layer, and so on. For each new candidate, the corresponding gradient difference is calculated and compared with that of the current optimal suture line. If the new candidate's gradient difference is smaller, it becomes the current optimal suture line and the minimum gradient difference is updated; otherwise the current optimal suture line is kept unchanged. Through this iterative optimization, new candidates are continuously generated and compared with the current optimum, gradually finding the suture line with the smallest gradient difference at the first target point. When no candidate with a smaller gradient difference can be generated, the iteration ends, and the current optimal suture line is taken as the first optimal suture line in the first layer of the first target point.
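The gradient-difference comparison reads, in one plausible formalization (the patent does not spell out the formula), as the absolute difference between the summed gradient magnitudes of the two division point sets; a sketch with illustrative names:

```python
import numpy as np

def gradient_difference(volume, side_a, side_b):
    """Gradient difference between the two point sets a candidate suture creates in
    the target point's neighbourhood: |sum of gradients on side A - sum on side B|;
    smaller means a better-balanced cut."""
    gz, gy, gx = np.gradient(volume.astype(np.float64))   # volume indexed (z, y, x)
    mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    total = lambda pts: sum(mag[p] for p in pts)          # pts: iterable of (z, y, x)
    return abs(total(side_a) - total(side_b))

def best_candidate(volume, candidates):
    """Keep the candidate split of the 8-neighbourhood with the smallest gradient
    difference; each candidate is a (side_a, side_b) pair of point lists."""
    best, best_diff = None, float("inf")
    for side_a, side_b in candidates:
        d = gradient_difference(volume, side_a, side_b)
        if d <= best_diff:
            best, best_diff = (side_a, side_b), d
    return best, best_diff
```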
Further, the embodiment of the application further comprises:
Searching in the first three-dimensional space according to the preset iteration step length by taking the first target point as a starting point to generate a second target point;
extracting a second adjacent point set of the second target point to perform gradient identification, and obtaining a second optimal suture line in the first layer;
Performing gradient identification multiple times in the first three-dimensional space to obtain a plurality of in-layer optimal suture lines of the first layer, and connecting the plurality of in-layer optimal suture lines to generate the first-layer optimal suture line;
Extracting a second three-dimensional space formed by a second layer from the first overlapping layer set, randomly extracting a second layer starting point in the second three-dimensional space, and carrying out optimal suture line identification in the second three-dimensional space according to the preset iteration step length based on the second layer starting point to generate a second layer optimal suture line;
Performing transition recognition on the first-layer optimal suture line and the second-layer optimal suture line by using a loss function, and judging whether the loss amount is smaller than a preset loss amount; if so, continuing to perform next-layer optimal suture line recognition to obtain a first optimal suture line group corresponding to the first overlapping layer set;
if not, extracting a second-layer update starting point other than the second-layer starting point in the second three-dimensional space, performing optimal suture line identification again according to the preset iteration step length, and updating the second-layer optimal suture line according to the identification result until the updated loss amount satisfies the preset loss amount;
Performing layer-by-layer iteration on the plurality of overlapping layer sets to generate a plurality of optimal suture line groups.
In a preferred embodiment, the optimal suture lines are searched layer by layer within the first overlapping layer set, with constraints imposed between adjacent layers, to obtain a complete optimal suture line group for the first overlapping layer set. After the first optimal suture line in the first layer is obtained for the first target point, the search continues at other positions in the first three-dimensional space to form a complete set of optimal suture lines in the first layer. Taking the first target point as the starting point, the search proceeds within the first layer according to the preset iteration step length. During the search, the target point moves along the horizontal direction in the first layer until a new target point satisfying the conditions is found, referred to as the second target point. Once the second target point is determined, the set of pixel points in its neighborhood is extracted, with the second target point as the center, to form the second adjacent point set. The extraction of the second adjacent point set is similar to that of the first adjacent point set and likewise considers the 8 neighboring points around the second target point. The optimal suture line search is then repeated in the first layer, taking the second target point as the starting point within the second adjacent point set. As in the search for the first target point, a plurality of suture line candidates are generated and their gradient differences compared, finally yielding the second optimal suture line in the first layer for the second target point. After the second optimal suture line in the first layer is obtained, the search continues at other positions in the first layer until the entire first layer is covered: taking each newly found point as a starting point, the optimal suture line search is repeated point by point, each search producing a locally optimal suture line at a new position, and the search at the next position builds on the optimal suture line at the current position. Through this point-by-point iteration, an optimal suture line is obtained at every target point in the first layer, and together these form a network of locally optimal suture lines covering the entire area of the first layer. To obtain the complete optimal suture line of the first layer, the end points of each locally optimal suture line within the first layer are determined, and the locally optimal suture lines are connected end to end according to the topological relation of their end points, forming the first-layer optimal suture line.
After the first-layer optimal suture line is obtained, the search for optimal suture lines continues in the other layers of the first overlapping layer set to form a complete three-dimensional optimal suture surface. The second layer is extracted from the first overlapping layer set as the new search starting layer. The second layer and its adjacent layers form a second three-dimensional space, representing a localized three-dimensional region centered on the second layer. A pixel point is randomly selected in the second three-dimensional space as the second-layer starting point, serving as the starting point for the optimal suture line search in the second layer. The search then proceeds in the second three-dimensional space, centered on the second-layer starting point, according to the preset iteration step length. As with the search in the first layer, through point-by-point progression, repeated gradient identification, and suture line connection, the complete optimal suture line in the second layer, namely the second-layer optimal suture line, is finally obtained.
After the first-layer optimal suture line and the second-layer optimal suture line are obtained, transition identification and evaluation are performed on the optimal suture lines of the adjacent layers by using a loss function, which measures the deviation between the first-layer and second-layer optimal suture lines. For example, the loss function integrates an end-point deviation term, a shape deviation term, and a gradient deviation term: the end-point deviation measures the deviation between the end-point positions of the first-layer and second-layer optimal suture lines, and the smaller the deviation, the closer the two suture lines are in space; the shape deviation is calculated by comparing geometric features such as curvature and length of the two optimal suture lines, and measures the similarity in shape between the first-layer and second-layer optimal suture lines; the gradient deviation measures the difference in gradient between the regions traversed by the first-layer and second-layer optimal suture lines, and the smaller the gradient deviation, the more consistent the two suture lines are in image content. Evaluating the loss function yields a quantified loss amount, which reflects the degree of deviation of the first-layer and second-layer optimal suture lines in spatial position, shape, and image content. This loss amount is then compared with a preset loss amount; if it is smaller than the preset loss amount, the first-layer and second-layer optimal suture lines have good continuity and consistency and satisfy the transition condition. In this case, the first-layer and second-layer optimal suture lines are considered to form part of the optimal suture line group, and the search continues for the optimal suture line of the next layer in the first overlapping layer set. If the loss amount of the first-layer and second-layer optimal suture lines is larger than the preset loss amount, the two suture lines deviate too much in spatial position, shape, or image content and do not satisfy the transition condition. In this case, the second-layer optimal suture line is adjusted and optimized to reduce its deviation from the first-layer optimal suture line. First, a random pixel point is selected anew in the second three-dimensional space as the second-layer update starting point; it should differ from the original second-layer starting point to avoid repeated searches. The optimal suture line of the second layer is then searched again in the second three-dimensional space, centered on the second-layer update starting point, according to the preset iteration step length. Finally, the updated second-layer optimal suture line and the first-layer optimal suture line are subjected to transition identification and evaluation with the loss function, and a new loss amount is calculated.
If the new loss amount is still larger than the preset loss amount, the updated second-layer optimal suture line still does not satisfy the transition condition; adjustment and optimization continue, a different second-layer update starting point is selected in the second three-dimensional space, the second-layer optimal suture line is searched again, and the new loss amount is evaluated. This process iterates until a second-layer optimal suture line satisfying the transition condition is found, or a preset maximum number of iterations is reached. By continually adjusting the second-layer starting point and searching again, the second-layer optimal suture line with the smallest deviation from, and the best continuity with, the first-layer optimal suture line is found in the second three-dimensional space.
After the optimal suture line group of the first overlapping layer set is obtained, the same process is applied to the other overlapping layer sets to obtain a plurality of optimal suture line groups. According to the spatial order of the overlapping layer sets, the three-dimensional space range of each overlapping layer set is determined, the optimal suture lines are searched within that three-dimensional space, and the optimal suture line group over all layers of the overlapping layer set is obtained through transition identification and iterative optimization. In this layer-by-layer iterative manner, the optimal suture line groups of all overlapping layer sets are finally obtained, yielding a plurality of optimal suture line groups.
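The layer-by-layer search with loss-gated restarts can be sketched as follows. This is a toy illustration under simplifying assumptions (a 2D array per layer, a trivial stand-in per-layer search, and a mean-offset stand-in loss), not the embodiment's algorithm:

```python
import random

def transition_loss(prev_suture, suture):
    # Stand-in for the embodiment's loss function: mean absolute column
    # offset between the two suture polylines.
    return sum(abs(a - b) for a, b in zip(prev_suture, suture)) / len(suture)

def search_layer_suture(layer, start_col, step=1):
    # Stand-in for the per-layer point-by-point gradient search: walk down
    # the rows, keeping the column (within +/- step of the current one)
    # whose horizontal neighbors differ least.
    rows, cols = len(layer), len(layer[0])
    col, suture = start_col, []
    for r in range(rows):
        candidates = [c for c in (col - step, col, col + step) if 0 < c < cols - 1]
        col = min(candidates, key=lambda c: abs(layer[r][c - 1] - layer[r][c + 1]))
        suture.append(col)
    return suture

def build_suture_group(layers, preset_loss=2.0, max_retries=10):
    # Accept each layer's suture only if its transition loss against the
    # previous layer's suture is below the preset loss amount; otherwise
    # restart from a new random starting point, up to max_retries times.
    group, prev = [], None
    for layer in layers:
        suture = None
        for _ in range(max_retries):
            start = random.randrange(1, len(layer[0]) - 1)  # random layer start point
            suture = search_layer_suture(layer, start)
            if prev is None or transition_loss(prev, suture) < preset_loss:
                break
        group.append(suture)
        prev = suture
    return group
```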
Further, a three-dimensional gradient difference formula is constructed as follows:
$$G_1=\sum_{i=1}^{n}\left(P_i-P_0\right)-\sum_{j=1}^{m}\left(Q_j-P_0\right)$$
wherein $G_1$ is the first gradient difference, $P_i$ is the pixel value of the i-th first division point in the first division point set, $Q_j$ is the pixel value of the j-th second division point in the second division point set, $P_0$ is the pixel value of the first target point, and $n+m=8$.
The purpose of the three-dimensional gradient difference formula is to measure the overall degree of difference in gradient characteristics between the pixel points on the two sides of the current suture line. In the formula, $G_1$ represents the first gradient difference, i.e., the three-dimensional gradient difference of the current suture line. The right side of the formula consists of two summation terms, corresponding to the gradient difference contributions of the two sides of the suture line. The first summation term, $\sum_{i=1}^{n}(P_i-P_0)$, is the sum of the gradient differences between all pixel points on one side of the suture line and the suture line's starting point, where $P_i$ is the pixel value of the i-th pixel point on that side, $P_0$ is the pixel value of the first target point, and $n$ is the total number of pixel points on that side. Similarly, the second summation term, $\sum_{j=1}^{m}(Q_j-P_0)$, is the sum of the gradient differences between all pixel points on the other side of the suture line and the suture line's starting point, where $Q_j$ is the pixel value of the j-th pixel point on that side and $m$ is the total number of pixel points on that side. Subtracting the right-side sum from the left-side sum gives the total difference $G_1$ of the pixel points on the two sides of the suture line in gradient characteristics. The smaller the absolute value of $G_1$, the more similar the image content on the two sides of the suture line is in gradient characteristics, and the more reasonable the position of the suture line; conversely, the larger the absolute value of $G_1$, the larger the difference in gradient characteristics between the image content on the two sides, and the less suitable the position of the suture line.
Through a three-dimensional gradient difference formula, a mathematical model for quantitatively evaluating the quality of the suture line is established, the difference of pixel points on two sides of the suture line on gradient characteristics is comprehensively considered, and an important reference index is provided for suture line optimization. By minimizing the three-dimensional gradient differences, the optimal suture line position with the minimum gradient difference and the most similar image content is found.
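As an illustrative check of the formula (with hypothetical numbers, not taken from the embodiment): let the first target point have pixel value $P_0=10$, the first division point set be $\{9,10,11,10\}$ ($n=4$), and the second division point set be $\{30,28,31,29\}$ ($m=4$). Then $\sum_i(P_i-P_0)=-1+0+1+0=0$ and $\sum_j(Q_j-P_0)=20+18+21+19=78$, so $G_1=0-78=-78$; the large $|G_1|$ indicates strongly dissimilar content on the two sides, i.e., a poorly placed suture line. If the second division point set were instead $\{11,9,10,10\}$, then $G_1=0-0=0$, and the suture line would be considered well placed.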
Further, the loss function is constructed as follows:
$$L_{\mathrm{loss}}=\alpha\left|G_1-G_2\right|+\beta\left|S_1-S_2\right|+\gamma\left|d-d_0\right|$$
wherein $L_{\mathrm{loss}}$ is the loss amount, $G_1$ is the first gradient difference, $G_2$ is the second gradient difference, $\alpha$ is the gradient difference loss coefficient, $\beta$ is the structural factor loss coefficient, $S_1$ is the structural factor of the first layer, $S_2$ is the structural factor of the second layer, $\gamma$ is the gap perturbation coefficient, $d$ is the gap between the first layer and the second layer, and $d_0$ is the reference gap between the two overlapping layers.
The loss function evaluates the spatial position deviation between the optimal suture lines of adjacent layers. In the loss function, $L_{\mathrm{loss}}$ represents the total loss and is formed by the weighted summation of three terms. The first term, $\alpha\left|G_1-G_2\right|$, represents the suture line gradient difference loss, where $G_1$ and $G_2$ represent the gradient differences of the optimal suture lines of the first and second layers respectively, and $\alpha$ is the weight coefficient of the gradient difference loss; it measures the degree of difference between adjacent-layer suture lines in image gradient characteristics, and the smaller the gradient difference, the smaller the loss. The second term, $\beta\left|S_1-S_2\right|$, represents the suture line structural content loss, where $S_1$ and $S_2$ represent the structural content factors of the regions traversed by the first-layer and second-layer optimal suture lines, and $\beta$ is the weight coefficient of the structural content loss; it measures the degree of difference between adjacent-layer suture lines in image structural content, and the smaller the structural content difference, the smaller the loss. The third term, $\gamma\left|d-d_0\right|$, represents the suture line spatial position deviation loss, where $d$ indicates the actual distance between the optimal suture lines of adjacent layers, $d_0$ indicates the desired distance between them, and $\gamma$ is the weight coefficient of the spatial position deviation loss; it measures the degree of deviation of adjacent-layer suture lines in spatial position, and the smaller the spatial position deviation, the smaller the loss.
By weighting and summing these three losses, the loss function establishes a mathematical model for comprehensively evaluating the quality of the optimal suture lines; during optimization, the value of the loss function is minimized by adjusting the positions of the optimal suture lines of adjacent layers, thereby obtaining the optimal suture lines with the best continuity and the highest integrity.
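As a compact illustration, the weighted three-term loss can be computed as below; the absolute-difference form and the default weights are assumptions of this sketch, since the text fixes only the three weighted terms and their coefficients:

```python
def suture_loss(g1, g2, s1, s2, d, d0, alpha=1.0, beta=1.0, gamma=1.0):
    # Gradient difference loss + structural content loss + spatial
    # position (gap) deviation loss, each weighted by its coefficient.
    return (alpha * abs(g1 - g2)
            + beta * abs(s1 - s2)
            + gamma * abs(d - d0))

# Example: identical gradients and structural factors, with the layer gap
# 2 units away from its reference, leaves only the gap deviation term.
loss_amount = suture_loss(g1=5.0, g2=5.0, s1=0.8, s2=0.8, d=12.0, d0=10.0)
```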
In summary, the 3D suture splicing and fusing method based on structural content perception provided by the embodiment of the application has the following technical effects:
Three-dimensional images of the target structure are acquired with a light sheet microscope to generate a plurality of three-dimensional images bearing a plurality of positioning marks, providing the data foundation for subsequent operations such as image registration and structure extraction and laying the groundwork for accurate splicing. Mapping translation is performed on the plurality of three-dimensional images based on the plurality of positioning marks to generate a plurality of overlapping regions, where the plurality of overlapping regions comprise a plurality of overlapping layer sets; image registration is achieved through the positioning marks, the spatial correspondence between the images to be spliced is found, and the common regions are extracted to form overlapping layers, so that the position offsets between images are accurately located and regional support is provided for structure perception and suture line recognition. Structural content perception recognition is performed on the plurality of overlapping layer sets with the structure perception model to generate a plurality of structural content factor sets, quantifying the structural attributes of the image regions and providing an important reference for subsequent suture line optimization. The plurality of overlapping layer sets are iterated layer by layer according to the preset iteration step length, optimal suture line identification is performed in each overlapping layer in turn, and loss analysis is performed on the layer-by-layer iteration results with the preset suture loss function in combination with the plurality of structural content factor sets to generate a plurality of optimal suture line groups; the optimal suture positions within the overlapping regions are determined through iterative optimization to form the optimal suture line groups. Finally, the target structure is scanned again in planes with the light sheet microscope, and the scanned images are spliced based on the plurality of optimal suture line groups to generate the target image of the target structure; by fusing the images along the optimal suture line groups as paths, splicing artifacts are avoided and the fine structures of the original images are preserved to the greatest extent, so that the spliced large-field image is clear and natural, providing high-quality data support for subsequent analysis applications.
Example two
Based on the same inventive concept as the 3D suture splicing and fusing method based on structural content perception in the foregoing embodiment, as shown in fig. 2, an embodiment of the present application provides a 3D suture splicing and fusing system based on structural content perception, including:
The three-dimensional image acquisition module 11 is used for acquiring three-dimensional images of the target structure by utilizing a light sheet microscope to generate a plurality of three-dimensional images, wherein the plurality of three-dimensional images are provided with a plurality of positioning marks;
an image mapping translation module 12, configured to perform mapping translation on the plurality of three-dimensional images based on the plurality of positioning identifiers, and generate a plurality of overlapping areas, where the plurality of overlapping areas includes a plurality of overlapping layer sets;
the structure sensing module 13 is configured to perform structure content sensing identification on the multiple overlapping layer sets by using a structure sensing model, so as to generate multiple structure content factor sets;
The suture line group generating module 14 is configured to iterate the multiple overlapping layer sets layer by layer according to a preset iteration step, sequentially identify an optimal suture line in each overlapping layer, and perform loss analysis on the layer by layer iteration result by using a preset suture loss function and combining the multiple structure content factor sets to generate multiple optimal suture line groups;
and the target image generating module 15 is used for carrying out planar scanning on the target structure again by utilizing the light sheet microscope, and splicing the images after planar scanning based on the plurality of optimal suture line groups to generate a target image of the target structure.
Further, the image mapping translation module 12 is configured to perform the following steps (an illustrative sketch follows these steps):
Performing position recognition on edge scatter points of the plurality of three-dimensional images based on the plurality of positioning identifiers to generate a plurality of edge scatter point sets, wherein the plurality of edge scatter point sets comprise a plurality of transverse position identifier sets and a plurality of longitudinal position identifier sets;
Performing transverse mapping translation on the plurality of three-dimensional images according to the plurality of transverse position identification sets to obtain a plurality of initial overlapping areas;
And carrying out longitudinal mapping translation on the initial overlapping areas according to the longitudinal position identifiers to obtain the overlapping areas, wherein the overlapping areas comprise overlapping layer sets.
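An illustrative sketch of this mapping translation, under simplifying assumptions (matched 2D mark coordinates per tile, and a mean-offset estimate collapsing the transverse and longitudinal stages into one vector; names are hypothetical):

```python
import numpy as np

def mapping_translate(tile_a, marks_a, marks_b):
    # Estimate the transverse (x) and longitudinal (y) shift of tile B
    # relative to tile A from matched positioning marks, then crop the
    # resulting overlap region in A's frame.
    offsets = np.asarray(marks_b) - np.asarray(marks_a)
    dx, dy = offsets.mean(axis=0)   # transverse, then longitudinal shift
    h, w = tile_a.shape[:2]
    x0, x1 = max(0, int(round(dx))), min(w, int(round(dx)) + w)
    y0, y1 = max(0, int(round(dy))), min(h, int(round(dy)) + h)
    return (dx, dy), tile_a[y0:y1, x0:x1]
```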
Further, the structure sensing module 13 is configured to perform the following steps (an illustrative sketch follows these steps):
Collecting a plurality of original images and a plurality of original structure data;
performing binarization processing on the plurality of original images to obtain a plurality of processed images, wherein the plurality of processed images comprise a plurality of structures and backgrounds, and the plurality of processed images comprise a plurality of structure feature point sets;
marking and optimizing a plurality of structure feature point sets of the plurality of processed images through an erosion algorithm in combination with the plurality of original structure data, to obtain a plurality of original structure distribution identifiers;
And performing supervision training on a framework constructed based on the U-net network by utilizing the plurality of original images and the plurality of original structure distribution identifiers to obtain the structure perception model.
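A minimal sketch of the label-preparation step (binarization followed by morphological erosion so that only confidently structural pixels remain as distribution labels); the threshold and the single erosion iteration are assumptions, and the resulting labels, paired with the raw images, would then supervise a U-net-style network:

```python
import numpy as np
from scipy import ndimage

def make_structure_labels(raw_image, threshold):
    # Binarize into structure vs. background, then erode the foreground
    # to trim uncertain boundary pixels from the structure labels.
    binary = (np.asarray(raw_image) > threshold).astype(np.uint8)
    eroded = ndimage.binary_erosion(binary, iterations=1)
    return eroded.astype(np.float32)
```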
Further, the suture line group generating module 14 is configured to perform the following steps:
Extracting a first overlapped layer set from the plurality of overlapped layer sets, extracting a first three-dimensional space formed by a first layer from the first overlapped layer set, and randomly generating a first target point in the first three-dimensional space;
Extracting a first adjacent point set of the first target point, wherein the first adjacent point set comprises eight adjacent points of the first target point;
Generating a first starting suture line in the first layer based on the first target point, and dividing the first adjacent point set according to the first starting suture line in the first layer to generate a first division point set and a second division point set;
Performing gradient identification on the first target point, the first division point set, and the second division point set by using a three-dimensional gradient difference formula to generate a first gradient difference;
generating a first-stage suture line in the first layer based on the first target point again, and determining a second gradient difference of the first-stage suture line in the first layer;
Judging whether the second gradient difference is smaller than or equal to the first gradient difference, and if so, taking the first-stage suture line in the first layer as the first suture line in the first layer of the first target point;
And after multiple divisions, taking the first suture line in the first layer corresponding to the minimum gradient difference as the first optimal suture line in the first layer of the first target point.
Further, the suture line group generating module 14 is further configured to perform the following steps:
Searching in the first three-dimensional space according to the preset iteration step length by taking the first target point as a starting point to generate a second target point;
extracting a second adjacent point set of the second target point to perform gradient identification, and obtaining a second optimal suture line in the first layer;
Performing gradient identification multiple times in the first three-dimensional space to obtain a plurality of in-layer optimal suture lines of the first layer, and connecting the plurality of in-layer optimal suture lines to generate the first-layer optimal suture line;
Extracting a second three-dimensional space formed by a second layer from the first overlapping layer set, randomly extracting a second layer starting point in the second three-dimensional space, and carrying out optimal suture line identification in the second three-dimensional space according to the preset iteration step length based on the second layer starting point to generate a second layer optimal suture line;
Performing transition recognition on the first-layer optimal suture line and the second-layer optimal suture line by using a loss function, and judging whether the loss amount is smaller than a preset loss amount; if so, continuing to perform next-layer optimal suture line recognition to obtain a first optimal suture line group corresponding to the first overlapping layer set;
if not, extracting a second-layer update starting point other than the second-layer starting point in the second three-dimensional space, performing optimal suture line identification again according to the preset iteration step length, and updating the second-layer optimal suture line according to the identification result until the updated loss amount satisfies the preset loss amount;
Performing layer-by-layer iteration on the plurality of overlapping layer sets to generate a plurality of optimal suture line groups.
Further, the suture-set generation module 14 includes the following:
constructing a three-dimensional gradient difference formula, wherein the three-dimensional gradient difference formula is as follows:
$$G_1=\sum_{i=1}^{n}\left(P_i-P_0\right)-\sum_{j=1}^{m}\left(Q_j-P_0\right)$$
wherein $G_1$ is the first gradient difference, $P_i$ is the pixel value of the i-th first division point in the first division point set, $Q_j$ is the pixel value of the j-th second division point in the second division point set, $P_0$ is the pixel value of the first target point, and $n+m=8$.
Further, the suture-set generation module 14 includes the following:
Constructing a loss function, wherein the loss function is as follows:
$$L_{\mathrm{loss}}=\alpha\left|G_1-G_2\right|+\beta\left|S_1-S_2\right|+\gamma\left|d-d_0\right|$$
wherein $L_{\mathrm{loss}}$ is the loss amount, $G_1$ is the first gradient difference, $G_2$ is the second gradient difference, $\alpha$ is the gradient difference loss coefficient, $\beta$ is the structural factor loss coefficient, $S_1$ is the structural factor of the first layer, $S_2$ is the structural factor of the second layer, $\gamma$ is the gap perturbation coefficient, $d$ is the gap between the first layer and the second layer, and $d_0$ is the reference gap between the two overlapping layers.
Any of the steps of the methods described above may be stored as computer instructions or programs in a non-transitory computer-readable memory and invoked by a computer processor to implement any method of the embodiments of the present application, without unnecessary limitation.
Further, the terms "first" and "second" used above may represent not only a sequential relationship but also particular concepts, and/or may refer to the selection of individual elements or of all elements among a plurality of elements. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from its scope. Thus, the present application is intended to cover such modifications and variations insofar as they fall within the scope of the application and its equivalents.
Claims (8)
1. A 3D suture splicing and fusion method based on structural content perception, characterized by comprising the following steps:
three-dimensional image acquisition is carried out on a target structure by utilizing a light sheet microscope, and a plurality of three-dimensional images are generated, wherein the plurality of three-dimensional images are provided with a plurality of positioning marks;
mapping and translating the plurality of three-dimensional images based on the plurality of positioning identifiers to generate a plurality of overlapping areas, wherein the plurality of overlapping areas comprise a plurality of overlapping layer sets;
Performing structural content perception recognition on the overlapped layer sets by using a structural perception model to generate a plurality of structural content factor sets;
performing layer-by-layer iteration on the plurality of overlapped layer sets according to a preset iteration step length, sequentially performing optimal suture line identification in each overlapped layer, performing loss analysis on a layer-by-layer iteration result by combining the plurality of structure content factor sets by utilizing a preset suture loss function, and generating a plurality of optimal suture line groups;
And carrying out plane scanning on the target structure by using the light sheet microscope again, and splicing the images after plane scanning based on the optimal suture line groups to generate a target image of the target structure.
2. The method of claim 1, wherein mapping translation is performed on the plurality of three-dimensional images based on the plurality of positioning identifiers to generate a plurality of overlapping regions, wherein the plurality of overlapping regions comprise a plurality of overlapping layer sets, the method comprising:
Performing position recognition on edge scatter points of the plurality of three-dimensional images based on the plurality of positioning identifiers to generate a plurality of edge scatter point sets, wherein the plurality of edge scatter point sets comprise a plurality of transverse position identifier sets and a plurality of longitudinal position identifier sets;
Performing transverse mapping translation on the plurality of three-dimensional images according to the plurality of transverse position identification sets to obtain a plurality of initial overlapping areas;
And carrying out longitudinal mapping translation on the initial overlapping areas according to the longitudinal position identifiers to obtain the overlapping areas, wherein the overlapping areas comprise overlapping layer sets.
3. The method of claim 1, wherein structural content perception recognition is performed on the plurality of overlapping layer sets by using a structure perception model to generate a plurality of structural content factor sets, the method comprising:
Collecting a plurality of original images and a plurality of original structure data;
performing binarization processing on the plurality of original images to obtain a plurality of processed images, wherein the plurality of processed images comprise a plurality of structures and backgrounds, and the plurality of processed images comprise a plurality of structure feature point sets;
marking and optimizing a plurality of structure feature point sets of the plurality of processed images through an erosion algorithm in combination with the plurality of original structure data, to obtain a plurality of original structure distribution identifiers;
And performing supervision training on a framework constructed based on the U-net network by utilizing the plurality of original images and the plurality of original structure distribution identifiers to obtain the structure perception model.
4. The method of claim 1, wherein the plurality of overlapping layer sets are iterated layer by layer according to a preset iteration step length, optimal suture line identification is performed within each overlapping layer in turn, and loss analysis is performed on the layer-by-layer iteration results in combination with the plurality of structural content factor sets by using a preset suture loss function to generate a plurality of optimal suture line groups, the method comprising:
Extracting a first overlapped layer set from the plurality of overlapped layer sets, extracting a first three-dimensional space formed by a first layer from the first overlapped layer set, and randomly generating a first target point in the first three-dimensional space;
Extracting a first adjacent point set of the first target point, wherein the first adjacent point set comprises eight adjacent points of the first target point;
Generating a first starting suture line in the first layer based on the first target point, and dividing the first adjacent point set according to the first starting suture line in the first layer to generate a first division point set and a second division point set;
Performing gradient identification on the first target point, the first division point set, and the second division point set by using a three-dimensional gradient difference formula to generate a first gradient difference;
generating a first-stage suture line in the first layer based on the first target point again, and determining a second gradient difference of the first-stage suture line in the first layer;
Judging whether the second gradient difference is smaller than or equal to the first gradient difference, and if so, taking the first-stage suture line in the first layer as the first suture line in the first layer of the first target point;
And after multiple divisions, taking the first suture line in the first layer corresponding to the minimum gradient difference as the first optimal suture line in the first layer of the first target point.
5. The method of claim 4, wherein the method comprises:
Searching in the first three-dimensional space according to the preset iteration step length by taking the first target point as a starting point to generate a second target point;
extracting a second adjacent point set of the second target point to perform gradient identification, and obtaining a second optimal suture line in the first layer;
Performing gradient identification multiple times in the first three-dimensional space to obtain a plurality of in-layer optimal suture lines of the first layer, and connecting the plurality of in-layer optimal suture lines to generate the first-layer optimal suture line;
Extracting a second three-dimensional space formed by a second layer from the first overlapping layer set, randomly extracting a second layer starting point in the second three-dimensional space, and carrying out optimal suture line identification in the second three-dimensional space according to the preset iteration step length based on the second layer starting point to generate a second layer optimal suture line;
Performing transition recognition on the first-layer optimal suture line and the second-layer optimal suture line by using a loss function, and judging whether the loss amount is smaller than a preset loss amount; if so, continuing to perform next-layer optimal suture line recognition to obtain a first optimal suture line group corresponding to the first overlapping layer set;
if not, extracting a second-layer update starting point other than the second-layer starting point in the second three-dimensional space, performing optimal suture line identification again according to the preset iteration step length, and updating the second-layer optimal suture line according to the identification result until the updated loss amount satisfies the preset loss amount;
Performing layer-by-layer iteration on the plurality of overlapping layer sets to generate a plurality of optimal suture line groups.
6. The method of claim 4, wherein the method comprises:
constructing a three-dimensional gradient difference formula, wherein the three-dimensional gradient difference formula is as follows:
$$G_1=\sum_{i=1}^{n}\left(P_i-P_0\right)-\sum_{j=1}^{m}\left(Q_j-P_0\right)$$
wherein $G_1$ is the first gradient difference, $P_i$ is the pixel value of the i-th first division point in the first division point set, $Q_j$ is the pixel value of the j-th second division point in the second division point set, $P_0$ is the pixel value of the first target point, and $n+m=8$.
7. The method of claim 5, wherein the method comprises:
Constructing a loss function, wherein the loss function is as follows:
$$L_{\mathrm{loss}}=\alpha\left|G_1-G_2\right|+\beta\left|S_1-S_2\right|+\gamma\left|d-d_0\right|$$
wherein $L_{\mathrm{loss}}$ is the loss amount, $G_1$ is the first gradient difference, $G_2$ is the second gradient difference, $\alpha$ is the gradient difference loss coefficient, $\beta$ is the structural factor loss coefficient, $S_1$ is the structural factor of the first layer, $S_2$ is the structural factor of the second layer, $\gamma$ is the gap perturbation coefficient, $d$ is the gap between the first layer and the second layer, and $d_0$ is the reference gap between the two overlapping layers.
8. A 3D suture splicing and fusion system based on structural content perception, for implementing the 3D suture splicing and fusion method based on structural content perception of any one of claims 1-7, the system comprising:
The three-dimensional image acquisition module is used for acquiring three-dimensional images of the target structure by utilizing a light sheet microscope to generate a plurality of three-dimensional images, wherein the plurality of three-dimensional images are provided with a plurality of positioning marks;
the image mapping translation module is used for mapping and translating the plurality of three-dimensional images based on the plurality of positioning identifiers to generate a plurality of overlapped areas, wherein the plurality of overlapped areas comprise a plurality of overlapped layer sets;
The structure sensing module is used for performing structure content sensing identification on the overlapped layer sets by utilizing a structure sensing model to generate a plurality of structure content factor sets;
The suture line group generation module is used for iterating the overlapped layer sets layer by layer according to a preset iteration step length, carrying out optimal suture line identification in each overlapped layer in sequence, carrying out loss analysis on the layer by layer iteration result by combining the structure content factor sets by utilizing a preset suture loss function, and generating a plurality of optimal suture line groups;
And the target image generation module is used for carrying out plane scanning on the target structure by utilizing the light sheet microscope again, and splicing the images after the plane scanning based on the optimal suture line groups to generate a target image of the target structure.