CN120876517B - Tumor image automatic segmentation and three-dimensional reconstruction system - Google Patents
Tumor image automatic segmentation and three-dimensional reconstruction systemInfo
- Publication number
- CN120876517B CN120876517B CN202511404357.0A CN202511404357A CN120876517B CN 120876517 B CN120876517 B CN 120876517B CN 202511404357 A CN202511404357 A CN 202511404357A CN 120876517 B CN120876517 B CN 120876517B
- Authority
- CN
- China
- Prior art keywords
- tumor
- image
- region
- acquisition
- under
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Geometry (AREA)
- Computational Linguistics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Data Mining & Analysis (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Molecular Biology (AREA)
- Computer Graphics (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The application discloses an automatic tumor image segmentation and three-dimensional reconstruction system, and relates to the technical field of image processing. The system comprises a tumor image acquisition module, a region selection module, a deformation degree determination module, an image correction module and a three-dimensional model construction module. And then selecting a reference area through deformation information of an area in the image, correcting the tumor image based on the reference area under each view angle, and constructing a tumor three-dimensional model. The method corrects the problems of discontinuous tumor position and morphological distortion caused by respiratory motion, and obtains more accurate corrected tumor images, so that the accuracy of the generated tumor three-dimensional model can be improved.
Description
Technical Field
The application relates to the technical field of medical image processing, in particular to an automatic tumor image segmentation and three-dimensional reconstruction system.
Background
The tumor of upper digestive tract refers to malignant tumor occurring in the esophagus, stomach, duodenum, etc., and common types include esophageal cancer, gastric cancer, and duodenal cancer . Early symptoms of dysphagia, pain in the upper abdomen, hematemesis , etc. are typical manifestations and require imaging to aid diagnosis. Medical imaging technology plays a vital role in the auxiliary diagnosis of upper gastrointestinal tumor , providing a great deal of valuable information to doctors. In order to achieve accurate and consistent tumor region identification targets, currently, tumor regions are generally identified and segmented online by means of a pre-trained deep neural network. After the segmented image of the tumor region is acquired, a three-dimensional reconstruction technology is also applied to convert the two-dimensional tumor segmented data into a three-dimensional model, so that powerful assistance is provided for a doctor to perform operation planning and radiotherapy scheme design.
At present, a deep learning method is generally adopted to automatically segment tumor images, and simultaneously, a multi-view image is combined to perform three-dimensional reconstruction work. However, during the acquisition phase of tumor images, the respiratory motion of the patient can cause dynamic deformation of the tumor region in different frames. This situation can cause discontinuities and morphological distortions in the tumor region of the upper gastrointestinal tumor image, which in turn can result in lower accuracy in the generated tumor three-dimensional model.
Disclosure of Invention
In order to solve the problem of low accuracy in modeling a tumor three-dimensional model by using a tumor image due to respiration of a patient, the application provides an automatic tumor image segmentation and three-dimensional reconstruction system, which adopts the following technical scheme:
The application provides an automatic tumor image segmentation and three-dimensional reconstruction system, which comprises:
The tumor image acquisition module is used for acquiring initial tumor images under a plurality of acquisition visual angles, wherein the initial tumor images comprise a plurality of image areas;
The region selection module is used for selecting a reference region from the plurality of image regions according to the deformation information of the biological tissues of the image regions in the target breathing period under each acquisition view angle, wherein the deformation degree of the biological tissues of the reference region is minimum;
the deformation determining module is used for comparing the tumor areas in the image areas under the acquisition view angles with the reference areas respectively to obtain the breathing deformation of each pixel point in the tumor areas under the acquisition view angles;
The image correction module is used for respectively correcting the positions of all pixel points of the tumor area in the initial tumor image under all the acquisition view angles based on the breathing deformation degree of all the pixel points in the tumor area under all the acquisition view angles to obtain corrected tumor images under all the acquisition view angles;
the three-dimensional model construction module is used for constructing a tumor three-dimensional model based on each corrected tumor image.
In some possible implementations, the region selection module specifically includes the following units:
The image acquisition unit is used for acquiring historical tumor images under all acquisition visual angles in a target respiratory period;
the image sequencing unit is used for sequencing the historical tumor images under each acquisition view angle and the corresponding initial tumor images according to the time sequence, and constructing and obtaining a tumor image sequence under each acquisition view angle;
the linkage determining unit is used for respectively executing the steps of determining respiratory linkage of each image area under the acquisition view angle based on the tumor image sequence under the acquisition view angle;
the evaluation value determining unit is used for determining respiratory deformation evaluation values of the image areas based on respiratory linkage of the image areas under the acquisition view angles;
and a region determination unit configured to determine, as a reference region, an image region in which the respiratory deformation evaluation value is minimum.
In some possible implementations, the linkage determining unit is specifically configured to:
In a tumor image sequence under an acquisition view angle, acquiring the image frame difference and the image matching degree of a target image area between adjacent tumor images, wherein the target image area is any image area;
and determining the respiration linkage of the target image area under the acquisition view angle by utilizing the image frame difference and the image matching degree of the target image area between adjacent tumor images.
In some possible implementations, the evaluation value determining unit is specifically configured to:
Sequencing all the acquisition view angles according to a preset traversal sequence to obtain the arrangement sequence of all the acquisition view angles;
Carrying out difference absolute value calculation on respiratory linkage of a target image area under the acquisition view angles of adjacent arrangement sequences to obtain a respiratory linkage difference value, wherein the target image area is any image area;
and determining the respiratory deformation evaluation value of the target image area by utilizing the respiratory linkage difference value.
In some possible implementations, the deformation determining module specifically includes the following units:
The region screening unit is used for screening and obtaining each tumor region under the target acquisition view angle from each image region under the target acquisition view angle, wherein the target acquisition view angle is any acquisition view angle;
the deformation proportion determining unit is used for determining the relative deformation proportion of each pixel point in each tumor area according to the distance between each pixel point in each tumor area and the centroid point of the corresponding tumor area;
The displacement degree determining unit is used for determining the relative displacement degree of each pixel point in each tumor area according to the distance between each pixel point in each tumor area and the centroid point of the reference area;
the deformation degree determining unit is used for determining the breathing deformation degree of each pixel point in each tumor area under the target acquisition view angle by utilizing the relative deformation proportion and the relative displacement degree of each pixel point in each tumor area.
In some possible implementations, the deformation ratio determining unit is specifically configured to:
Determining a first average distance between a j-th pixel point and a centroid point of a tumor area according to a first distance between the j-th pixel point and the corresponding centroid point of the tumor area in a tumor image sequence under a target acquisition view angle of a target respiratory cycle, wherein j is a positive integer;
And determining the relative deformation proportion of the jth pixel point in the second tumor region under the target acquisition view angle by using the first distance and the corresponding first average distance of the jth pixel point in the second tumor region, wherein the second tumor region is the first tumor region in the initial tumor image.
In some possible implementations, the displacement degree determining unit is specifically configured to:
determining a second average distance between a j-th pixel point and a reference area centroid point according to a second distance between the j-th pixel point and the reference area centroid point in a tumor image sequence under a target acquisition view angle in a target breathing period, wherein j is a positive integer;
And determining the relative displacement degree of the jth pixel point in the second tumor region under the target acquisition view angle by using the second distance and the corresponding second average distance of the jth pixel point in the second tumor region, wherein the second tumor region is the first tumor region in the initial tumor image.
In some possible implementations, the image correction module is specifically configured to:
Based on the breathing deformation degree of each pixel point of the tumor area in the initial tumor image under each acquisition view angle, respectively determining the deformation correction amount of each pixel point of the tumor area in the initial tumor image under each acquisition view angle;
and correcting the positions of all the pixel points of the tumor area in the initial tumor image under all the acquisition view angles based on the deformation correction amounts of all the pixel points of the tumor area in the initial tumor image under all the acquisition view angles, so as to obtain corrected tumor images under all the acquisition view angles.
In some possible implementations, the three-dimensional model building module is specifically configured to:
Sequencing the corrected tumor images according to the image acquisition sequence, and determining the spatial position of each corrected tumor image;
carrying out consistency processing on each corrected tumor image to obtain corresponding consistent tumor images;
Stacking the consistent tumor images according to the corresponding spatial positions to obtain an image stacking result;
And constructing a tumor three-dimensional model based on the image stacking result.
In some possible implementations, the tumor image acquisition module is specifically configured to:
transmitting an image acquisition signal to a tumor scanner so that the tumor scanner scans tumors of a target patient;
Receiving original signal data of a target patient collected by a tumor scanner;
And converting the original signal data into a two-dimensional image format through a target reconstruction algorithm to obtain initial tumor images under a plurality of acquisition view angles.
The application has the following beneficial effects:
In the tumor image automatic segmentation and three-dimensional reconstruction system provided by the embodiment of the application, the tumor image acquisition module is used for acquiring initial tumor images under a plurality of acquisition visual angles, so that a comprehensive image data base is provided for subsequent processing. Then, an image area with the minimum deformation degree is selected as a datum area through an area selection module, and a relatively stable reference standard is found. And comparing each pixel point in the tumor area under each acquisition view angle with the reference area through the deformation determining module to obtain the respiration deformation degree of each pixel point, and accurately quantifying the deformation degree of each pixel point caused by respiration. And then, the image correction module corrects the positions of the pixel points based on the breathing deformation degree of the pixel points in the tumor area under each acquisition view angle, so as to obtain corrected tumor images under each acquisition view angle. The method directly corrects the problems of discontinuous tumor position and morphological distortion caused by respiratory motion, so that the corrected tumor image is closer to the true form and position of the tumor, and the accuracy of the image data is greatly improved. And finally, constructing a tumor three-dimensional model based on each corrected tumor image by a three-dimensional model construction module. Therefore, the embodiment of the application can correct the problems of discontinuous tumor position and morphological distortion caused by respiratory motion, and obtain more accurate corrected tumor images, thereby improving the accuracy of the generated tumor three-dimensional model.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an automatic tumor image segmentation and three-dimensional reconstruction system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a region selection module according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a deformation determining module according to an embodiment of the present application.
Detailed Description
In order to further describe the technical means and effects adopted by the present application to achieve the preset purpose, the following detailed description refers to the specific implementation, structure, characteristics and effects of an automatic tumor image segmentation and three-dimensional reconstruction system according to the present application with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
The following describes a specific embodiment of a tumor image automatic segmentation and three-dimensional reconstruction system provided by the embodiment of the application.
As shown in fig. 1, a schematic structural diagram of an automatic tumor image segmentation and three-dimensional reconstruction system is provided. The tumor image automatic segmentation and three-dimensional reconstruction system 100 comprises a tumor image acquisition module 110, a region selection module 120, a deformation determination module 130, an image correction module 140 and a three-dimensional model construction module 150.
The tumor image acquisition module 110 is configured to acquire an initial tumor image under a plurality of acquisition view angles, where the initial tumor image includes a plurality of image areas.
In this embodiment, the acquisition view angle refers to a view angle from which tumor images are acquired from different angles or positions. For example, in medical imaging examinations, it is possible to acquire tumor images by means of different probe positions, scan directions or imaging modes (e.g. different scan slice combinations of CT, etc.), each different acquisition mode corresponding to an acquisition view.
The initial tumor image refers to original image data about a tumor acquired by a specific imaging device (such as CT) at a plurality of acquisition view angles, the images include information about the morphology, structure, etc. of the tumor and surrounding biological tissues, and each initial tumor image is composed of a plurality of image areas.
The image area refers to a local area with a certain boundary and characteristics, which is divided in the initial tumor image, and the areas can be divided according to the characteristics of the gray level, the texture and the like of the tissue based on an image segmentation technology and used for subsequent detailed analysis of the tumor and surrounding tissues. For example, the initial tumor image can be automatically segmented by using a deep learning neural network (U-Net) trained by large-scale labeling data to obtain a plurality of image areas.
As one example, the tumor image acquisition module 110 acquires initial tumor images using a variety of imaging devices or from different acquisition parameter settings. For example, for CT examination, multiple scans may be performed with different scan layer thicknesses, scan angles.
Then, preprocessing is carried out on the obtained initial tumor image, including denoising, image enhancement and other operations, so as to improve the image quality and facilitate subsequent analysis.
The region selection module 120 is configured to select a reference region from the plurality of image regions according to deformation information of the biological tissue in the target respiratory cycle of the image region under each acquisition view angle, where the deformation degree of the biological tissue is the smallest.
In this embodiment, the target respiratory cycle refers to a specific respiratory cycle (such as a complete inspiration and expiration process) selected as an analysis time range in consideration of the influence of respiratory motion on the morphology of the tumor and surrounding tissues during the medical image acquisition process, and the deformation information of the biological tissues is studied in the cycle.
The biological tissue deformation information is information describing changes in shape, position, etc. of biological tissue (including tumor and its surrounding normal tissue) in the target respiratory cycle. Deformation parameters such as displacement, strain and the like of the tissue in all directions can be calculated by comparing tumor images at different time points (different stages of a respiratory cycle) through technologies such as image registration, an optical flow method and the like.
The reference region is an image region with the smallest degree of deformation of the biological tissue among the plurality of image regions, and is determined as the reference region through analysis and comparison. The morphological change of the region in the breathing process is relatively stable and can be used as a reference standard for subsequent comparison and correction.
As an example, the region selection module 120 registers, for each acquired view angle, an initial tumor image in the target respiratory cycle and a historical tumor image at different time points in the target respiratory cycle by using an image registration technology, and calculates a displacement field of each image region at each time point, so as to obtain deformation information of the biological tissue.
And (3) calculating deformation degree indexes such as average displacement, maximum strain and the like of each image area by analyzing deformation information. And comparing deformation degree indexes of the image areas, and selecting the image area with the minimum deformation degree as a reference area.
The deformation determining module 130 is configured to compare the tumor area in the image area under each acquired view angle with the reference area, respectively, to obtain the respiratory deformation of each pixel point in the tumor area under each acquired view angle.
In this embodiment, the tumor region refers to a region containing tumor cells explicitly identified in the initial tumor image, and is usually segmented from the image region by medical image analysis techniques (such as classification based on image features, machine learning algorithms, etc.).
The breathing deformation degree is used for quantifying an index of deformation degree of each pixel point in the tumor area in the breathing process. And comparing the pixel points in the tumor area under each acquisition view angle with the reference area, and calculating the variation of each pixel point on the spatial position, wherein the variation is the breathing deformation degree.
As one example, the deformability determination module 130 first accurately segments a tumor region from the initial tumor image of each acquired view angle using an image segmentation technique.
Then, aligning the tumor area with the reference area by adopting an image registration or feature matching method, and calculating the spatial displacement of each pixel point in the tumor area relative to the corresponding position of the reference area, thereby determining the breathing deformation degree of the pixel point.
The image correction module 140 is configured to respectively correct the positions of the pixels in the tumor area in the initial tumor image under each acquired view angle based on the respiratory deformability of the pixels in the tumor area under each acquired view angle, so as to obtain corrected tumor images under each acquired view angle.
In this embodiment, the corrected tumor image is an image obtained by adjusting the position of each pixel point of the original tumor region in the initial tumor image based on the respiratory deformability of each pixel point in the tumor region at each acquisition view angle. The corrected image can more accurately reflect the real form and position of the tumor without being interfered by respiratory motion.
As an example, the image correction module 140 adjusts the original coordinates of each pixel according to the calculated breathing deformation. For example, if the respiratory deformability of a pixel point indicates that the respiratory deformability has a displacement of 2mm in the X direction, the X coordinate of the pixel point is correspondingly added or subtracted by 2mm in correction.
And carrying out the position correction operation on all pixel points of the tumor area in the initial tumor image to obtain a corrected tumor area, and further generating corrected tumor images under all acquisition visual angles.
The three-dimensional model construction module 150 is configured to construct a tumor three-dimensional model based on each corrected tumor image.
In this embodiment, the tumor three-dimensional model refers to a model which is constructed by using each corrected tumor image and through three-dimensional reconstruction technology (such as surface drawing, volume drawing, etc.), and can intuitively display the three-dimensional structure, shape and position of the tumor.
As one example, the three-dimensional model construction module 150 integrates and reconstructs the tumor information in each corrected tumor image using a three-dimensional reconstruction algorithm, such as Marching Cubes algorithm (face drawing) or ray casting algorithm (volume drawing).
In the reconstruction process, the spatial position relation and gray information among the images need to be considered so as to ensure that the constructed three-dimensional model can accurately reflect the real form and spatial structure of the tumor.
As an alternative embodiment, as shown in fig. 2, the region selection module 120 specifically includes the following units:
An image acquisition unit 121, configured to acquire a historical tumor image at each acquisition view angle in a target respiratory cycle;
The image sorting unit 122 is configured to sort the historical tumor images under each acquired view angle and the corresponding initial tumor images according to a time sequence, and construct a tumor image sequence under each acquired view angle;
A linkage determining unit 123 for determining respiratory linkage of each image region at the acquisition view angle based on the tumor image sequence at the acquisition view angle;
An evaluation value determination unit 124, configured to determine a respiratory deformation evaluation value of each image area based on respiratory linkage of each image area under each acquisition view angle;
the region determination unit 125 is configured to determine, as a reference region, an image region in which the respiratory deformation evaluation value is minimum.
In this embodiment, historical tumor images are used to characterize tumor image data at past times acquired from various acquisition perspectives during a target respiratory cycle. These images record information such as the morphology, location, etc. of the tumor at different time points.
The tumor image sequence is an image set formed by arranging the historical tumor images under each acquisition view angle and the corresponding initial tumor images according to the time sequence, and reflects the change condition of the tumor along with time in the target respiratory cycle.
Respiratory linkage is used to describe the degree of correlation between an image region and respiratory motion during breathing. If the variation trend of the breathing movement of a certain image area is consistent during breathing, the image area has higher breathing linkage, otherwise, the image area has lower breathing linkage.
The respiratory deformation evaluation value is used for quantifying an index of deformation degree of the image area in the respiratory process. The value comprehensively considers factors such as morphological change, position movement and the like of the image area in the respiratory period, and the higher the evaluation value is, the larger the deformation degree is.
As an example, the image acquisition unit 121 first determines the target respiratory cycle at the current time based on the current time corresponding to the initial tumor image. And then, acquiring the corresponding historical tumor images under each acquisition view angle in the target respiratory cycle.
Then, the image sorting unit 122 sequentially sorts the historical tumor images and the initial tumor images under each collection view angle according to the collection time stamp of the historical tumor images and the collection time stamp of the initial tumor images in time sequence, so as to form a tumor image sequence under each collection view angle.
Then, the linkage determination unit 123 analyzes the change characteristics of the form, position, and the like of each image region according to the tumor image sequence at the acquisition view angle, thereby determining the respiratory linkage of each image region at the acquisition view angle.
Then, the evaluation value determining unit 124 comprehensively considers the respiratory linkage of the image area under different acquisition viewing angles, establishes a mathematical model or algorithm, and converts the respiratory linkage into a respiratory deformation evaluation value. For example, the mean value of respiration linkage of the image area under different acquisition visual angles can be calculated, so as to obtain the respiration deformation evaluation value of the image area.
Finally, the region determination unit 125 compares and ranks the respiratory deformation evaluation values of all the image regions, and selects the image region with the smallest respiratory deformation evaluation value as the reference region.
By constructing a tumor image sequence and analyzing the respiratory linkage of the image region, the embodiment can more comprehensively understand the dynamic change of the tumor in the respiratory process, thereby more accurately evaluating the characteristics and behaviors of the tumor. Thus, the accuracy of the follow-up tumor three-dimensional model can be improved.
As an alternative embodiment, the linkage determining unit 123 is specifically configured to:
In a tumor image sequence under an acquisition view angle, acquiring the image frame difference and the image matching degree of a target image area between adjacent tumor images, wherein the target image area is any image area;
and determining the respiration linkage of the target image area under the acquisition view angle by utilizing the image frame difference and the image matching degree of the target image area between adjacent tumor images.
In this embodiment, the image frame differences are used to characterize the difference in pixel values of the target image region between two adjacent tumor images in the tumor image sequence. Specifically, the difference value of the gray values or other characteristic values of the corresponding pixels of the two tumor images can be calculated, and the form or gray change of the target image area between adjacent time points is reflected.
The image matching degree is used for measuring the index of the similarity degree of the target image areas in two adjacent tumor images. The higher the matching degree, the greater the similarity of the target image areas in the two images, i.e. the smaller the morphology and position change of the target image areas at adjacent time points.
As one example, the linkage determination unit 123 determines, for each pair of adjacent tumor images in the tumor image sequence, the position of the target image region in the two tumor images. Then, calculating the difference value of the characteristic values (such as gray values) of the target image region at the corresponding pixel points of the two tumor images, and adopting absolute difference values, square difference values and other methods. And then carrying out statistical processing (such as averaging, root mean square value and the like) on the difference values of all the pixel points to obtain the image frame difference of the target image region between the pair of adjacent tumor images.
Then, a suitable image registration algorithm is selected, such as a feature-based registration algorithm (SIFT, SURF, etc.) or a gray-scale based registration algorithm (mutual information, normalized cross-correlation, etc.). And (3) taking the target image area as a registration object, performing registration operation in two adjacent tumor images, and finding out transformation parameters which enable the target image area to be optimally matched in the two tumor images. And calculating a matching degree index, such as a similarity score, a registration error and the like of the matched target image region, according to the registration result, and taking the matching degree index as a quantized value of the image matching degree.
Finally, according to the image frame difference and the image matching degree of the target image area between adjacent tumor images, the respiration linkage of the target image area under the acquisition view angle is determined by the following formula:
;
In the formula, The method is used for representing respiratory linkage of an ith image area under an acquisition view angle, and N is used for representing the number of tumor images in a tumor image sequence, wherein N is a positive integer not less than 2.For characterizing an image frame difference between an nth frame of tumor image and an n+1th frame of tumor image of an ith image region,The method is used for representing the image matching degree of the ith image area between the nth frame of tumor image and the n+1th frame of tumor image, and exp is used for representing the exponential operation of the base number e of the natural logarithm.
The respiratory linkage of the ith image area under the acquisition view angle is larger as the image frame difference of the ith image area between the adjacent tumor images is larger, and the respiratory linkage of the ith image area under the acquisition view angle is smaller as the image matching degree of the ith image area between the adjacent tumor images is larger.
According to the embodiment, the image frame difference and the image matching degree are considered at the same time, so that the change condition of the target image area in the breathing process can be estimated more comprehensively and accurately. The image frame difference reflects the shape or gray level change of the target image area, the image matching degree measures the similarity degree, and the combination of the image frame difference and the image matching degree can more finely describe the linkage relation between the target image area and respiratory motion.
As an alternative embodiment, the evaluation value determining unit 124 is specifically configured to:
Sequencing all the acquisition view angles according to a preset traversal sequence to obtain the arrangement sequence of all the acquisition view angles;
Carrying out difference absolute value calculation on respiratory linkage of a target image area under the acquisition view angles of adjacent arrangement sequences to obtain a respiratory linkage difference value, wherein the target image area is any image area;
and determining the respiratory deformation evaluation value of the target image area by utilizing the respiratory linkage difference value.
In this embodiment, the preset traversal order is a preset rule or order for ordering the acquisition view angles, and may be an order determined according to a specific logic, such as an angle size, an acquisition time sequence, and the like.
The arrangement sequence is a position sequence number of each acquisition view after the acquisition view is ordered according to a preset traversal sequence, and the position sequence number is used for determining the sequence relation between the acquisition views.
The breath linkage difference value is the absolute value of the difference value of the breath linkage of the target image area under the acquisition view angles in adjacent arrangement order, and is obtained by calculating the difference value of the breath linkage under the adjacent acquisition view angles.
As an example, the evaluation value determination unit 124 first explicitly presets a specific rule of the traversal order, for example, ordering according to the included angle between the acquisition view angle and a certain reference direction from small to large, or ordering according to the order of the acquisition time, etc. And then sorting all the acquisition view angles according to the rule, and distributing an arrangement sequence for each acquisition view angle, wherein the arrangement sequence of the first acquisition view angle is 1, the arrangement sequence of the second acquisition view angle is 2, and the like.
Then, for the orderly arranged acquisition visual angles, two adjacent acquisition visual angles are sequentially selected. And respectively acquiring the respiration linkage of the target image areas under the two adjacent acquisition visual angles. The difference between the two respiratory linkages, the respiratory linkage difference, is calculated.
And finally, determining a respiratory deformation evaluation value of the target image area according to each respiratory linkage difference value by the following formula:
;
In the formula, The breathing deformation evaluation value used for representing the ith image area is M used for representing the number of acquisition visual angles.For characterizing respiratory linkage of the ith image region at the mth acquisition view angle,The method is used for representing respiratory linkage of the ith image area at the (m+1) th acquisition view angle, and exp is used for representing exponential operation of a base e of natural logarithm.
Wherein, the The smaller the respiratory linkage difference of the ith image area under the characteristic of adjacent acquisition visual angles is, the larger the respiratory deformation evaluation value of the ith image area is.
According to the embodiment, the influence of the respiratory motion on the target image area can be more comprehensively estimated by considering the change of the respiratory linkage of the target image area under different acquisition visual angles. And, different acquisition visual angles can provide motion information about tumors in different directions, so that the actual deformation condition of a target image area can be reflected more accurately.
As an alternative embodiment, as shown in fig. 3, the deformation determining module 130 specifically includes the following units:
The region screening unit 131 is configured to screen and obtain each tumor region under the target acquisition view angle from each image region under the target acquisition view angle, where the target acquisition view angle is any one acquisition view angle;
A deformation ratio determining unit 132, configured to determine a relative deformation ratio of each pixel point in each tumor region according to a distance between each pixel point in each tumor region and a centroid point of a corresponding tumor region;
A displacement degree determining unit 133, configured to determine a relative displacement degree of each pixel point in each tumor region according to a distance between each pixel point in each tumor region and a centroid point of the reference region;
the deformation determining unit 134 is configured to determine the breathing deformation degree of each pixel point in each tumor region under the target acquisition view angle by using the relative deformation proportion and the relative displacement degree of each pixel point in each tumor region.
In this embodiment, the centroid point is the geometric center of an area, and for a tumor area, the centroid point can be obtained by calculating the average value of the coordinates of all the pixels in the tumor area, which represents the approximate center position of the tumor area.
The relative deformation proportion is used for measuring the deformation degree of each pixel point in the tumor area relative to the mass center point of the tumor area, and reflects the relative position change condition of each pixel point in the tumor area.
The relative displacement degree is used for representing the position movement degree of each pixel point in the tumor area relative to the centroid point of the reference area, and represents the spatial position change of each pixel point in the breathing process.
As an example, the region screening unit 131 may use an image segmentation algorithm, for example, based on a threshold segmentation, region growing, edge detection, etc., to mark the portion belonging to the tumor in the image region under the target acquisition view according to the feature difference (such as gray value, texture, etc.) between the tumor and the surrounding tissue, so as to obtain each tumor region.
Then, the deformation ratio determining unit 132 calculates the centroid point coordinates of each tumor region first, and then calculates the distances between the respective pixel points in the tumor region and the centroid point of the tumor region. And determining the relative deformation proportion according to the distance distribution condition from all the pixel points to the mass center point of the tumor area. For example, the distance of each pixel point may be compared with the average or maximum value of the distances from all the pixel points in the tumor area to the centroid point of the tumor area, to obtain a relative proportion value as the corresponding relative deformation proportion.
Then, the displacement degree determination unit 133 determines the centroid point coordinates of the reference region, calculates the distances from each pixel point in each tumor region to the centroid point of the reference region, and determines the relative displacement degree according to the distance distribution condition from all the pixel points to the centroid point of the reference region. For example, the distance of each pixel point may be compared with a certain reference value (such as the distance from the pixel point to the centroid point of the reference area in the initial state), to obtain the relative displacement degree.
And finally, determining the breathing deformation degree of each pixel point in each tumor region under the target acquisition view angle by using the relative deformation proportion and the relative displacement degree of each pixel point in each tumor region through the following formula:
;
In the formula, Used for representing the breathing deformation degree of the jth pixel point in the jth tumor area in the initial tumor image under the target acquisition view angle,Is used for representing the relative deformation proportion of the jth pixel point in the jth tumor area in the initial tumor image under the target acquisition view angle,The method is used for representing the relative displacement degree of the jth pixel point in the jth tumor area in the initial tumor image under the target acquisition view angle.
The larger the relative deformation proportion of the pixel points or the larger the relative displacement degree of the pixel points, the larger the respiratory deformation degree of the pixel points is indicated.
According to the embodiment, the influence of the respiratory motion on the tumor area can be accurately estimated by comprehensively considering the relative deformation proportion and the relative displacement degree of the pixel points in the tumor area. The relative deformation proportion reflects the deformation condition in the tumor region, the relative displacement reflects the overall movement condition of the tumor region, and the combination of the two can more comprehensively describe the tumor deformation in the breathing process.
As an alternative embodiment, the deformation ratio determining unit 132 is specifically configured to:
Determining a first average distance between a j-th pixel point and a centroid point of a tumor area according to a first distance between the j-th pixel point and the corresponding centroid point of the tumor area in a tumor image sequence under a target acquisition view angle of a target respiratory cycle, wherein j is a positive integer;
And determining the relative deformation proportion of the jth pixel point in the second tumor region under the target acquisition view angle by using the first distance and the corresponding first average distance of the jth pixel point in the second tumor region, wherein the second tumor region is the first tumor region in the initial tumor image.
In this embodiment, the first distance is used to characterize a spatial distance between a j-th pixel point in the second tumor region and a centroid point of the corresponding tumor region under the target acquisition view angle, where the second tumor region is the first tumor region in the initial tumor image.
The first average distance is used for representing a spatial distance average value between a j-th pixel point in a first tumor region of all tumor images in a tumor image sequence under a target acquisition view angle in a target breathing period and a centroid point of a corresponding tumor region.
As an example, the relative deformation ratio is determined specifically by the following formula:
;
In the formula, Is used for representing the relative deformation proportion of the jth pixel point in the jth tumor area in the initial tumor image under the target acquisition view angle,A first distance for representing a j-th pixel point in an O-th tumor area in an initial tumor image under a target acquisition view angle,The method is used for representing the first average distance of the jth pixel point in the O-th tumor region in the tumor image sequence under the target acquisition view angle in the target respiratory cycle.For characterizing the respiratory deformation evaluation value of the O-th tumor region,The exp is used for representing the index operation of the base number e of the natural logarithm.
Wherein, the For characterizing the O-th tumor region in the initial tumor imageThe ratio of the deformation of each pixel point in the target respiration period to the respiration deformation evaluation value of the O-th tumor area represents the firstThe deformation ratio of each pixel point in the tumor area. The larger the value, the more representative is the O-th tumor region in the initial tumor imageThe larger the relative deformation ratio of the individual pixel points.
According to the embodiment, based on calculation of the relative deformation proportion, the deformation degree of each pixel point in the tumor area relative to the centroid point can be quantified, so that more visual tumor deformation information is provided. Therefore, the method is favorable for correcting the initial tumor image later, and improves the accuracy of the tumor three-dimensional model.
As an alternative embodiment, the displacement degree determining unit 133 is specifically configured to:
determining a second average distance between a j-th pixel point and a reference area centroid point according to a second distance between the j-th pixel point and the reference area centroid point in a tumor image sequence under a target acquisition view angle in a target breathing period, wherein j is a positive integer;
And determining the relative displacement degree of the jth pixel point in the second tumor region under the target acquisition view angle by using the second distance and the corresponding second average distance of the jth pixel point in the second tumor region, wherein the second tumor region is the first tumor region in the initial tumor image.
In this embodiment, the second distance is used to characterize a spatial distance between a j-th pixel point in the second tumor region and a centroid point of the corresponding reference region under the target acquisition view angle, where the second tumor region is the first tumor region in the initial tumor image.
The second average distance is used for representing the average value of the spatial distances between the j-th pixel point in the first tumor region of all tumor images in the tumor image sequence under the target acquisition view angle in the target breathing period and the centroid point of the corresponding reference region.
As one example, the relative displacement is determined specifically by the following formula:
;
In the formula, Used for representing the relative displacement degree of the jth pixel point in the jth tumor area in the initial tumor image under the target acquisition view angle,A second distance for representing a j-th pixel point in an O-th tumor area in the initial tumor image under the target acquisition view angle,Used for representing the second average distance of the jth pixel point in the jth tumor area in the initial tumor image under the target acquisition view angle,For characterizing normalization operations.
Wherein, the For characterizing the O-th tumor region in the initial tumor imagePositional change relation of each pixel point with respect to the reference region. The larger the value, the more representative is the O-th tumor region in the initial tumor imageThe more obvious the position change of each pixel point relative to the reference area, the larger the relative displacement degree of the jth pixel point in the O-th tumor area in the initial tumor image.
According to the embodiment, based on the calculated relative displacement, the deformation degree of each pixel point in the tumor area relative to the reference area can be quantified, so that more visual tumor deformation information is provided. Therefore, the method is favorable for correcting the initial tumor image later, and improves the accuracy of the tumor three-dimensional model.
As an alternative embodiment, the image correction module 140 is specifically configured to:
Based on the breathing deformation degree of each pixel point of the tumor area in the initial tumor image under each acquisition view angle, respectively determining the deformation correction amount of each pixel point of the tumor area in the initial tumor image under each acquisition view angle;
and correcting the positions of all the pixel points of the tumor area in the initial tumor image under all the acquisition view angles based on the deformation correction amounts of all the pixel points of the tumor area in the initial tumor image under all the acquisition view angles, so as to obtain corrected tumor images under all the acquisition view angles.
In this embodiment, the deformation correction amount is a numerical value calculated according to the respiratory deformation degree and used for correcting the position or shape of each pixel point in the tumor region in the initial tumor image. It shows the degree to which each pixel point needs to be adjusted to eliminate the deformation effects from respiratory motion.
The corrected tumor image is obtained by correcting each pixel point of the tumor area in the initial tumor image under each acquisition view angle. The tumor image eliminates the influence of respiratory deformation and reflects the true form of the tumor more accurately.
As one example, the image correction module 140 first determines the deformation correction by the following formula:
;
In the formula, Used for representing the deformation correction quantity of the jth pixel point in the O-th tumor area in the initial tumor image under the mth acquisition view angle,Used for representing the breathing deformation degree of the jth pixel point in the O-th tumor area in the initial tumor image under the mth acquisition view angle,The mean value of the breathing deformation degree of the j pixel point in the O-th tumor area in the initial tumor image under each acquisition view angle is used for representing.
Then, the image correction module 140 traverses each pixel point in the tumor area for the initial tumor image under each acquired view angle, and adjusts the position of the pixel point according to the calculated deformation correction amount. For example, if the original coordinates of a certain pixel point areThe deformation correction amount isThe corrected coordinates are。
And finally, recombining all corrected pixel points to form corrected tumor images under all the acquisition visual angles. In the correction process, care needs to be taken to keep parameters such as resolution, gray value range and the like of the image unchanged so as to ensure the quality of the corrected image.
By the embodiment, deformation influence caused by respiratory motion is eliminated, and the corrected tumor image can more accurately reflect the real form and position of the tumor. Thus, a tumor three-dimensional model is created based on the corrected tumor image, and the accuracy of the generated tumor three-dimensional model can be improved.
As an alternative embodiment, the three-dimensional model building module 150 is specifically configured to:
Sequencing the corrected tumor images according to the image acquisition sequence, and determining the spatial position of each corrected tumor image;
carrying out consistency processing on each corrected tumor image to obtain corresponding consistent tumor images;
Stacking the consistent tumor images according to the corresponding spatial positions to obtain an image stacking result;
And constructing a tumor three-dimensional model based on the image stacking result.
In this embodiment, the spatial position is a piece of relative position information in three-dimensional space given to each corrected tumor image according to the image acquisition sequence. The method reflects the spatial distribution condition of the tumor under different visual angles during image acquisition, and is the basis for subsequent image stacking and three-dimensional model construction.
The consistency processing is a series of operations performed on the corrected tumor images, and aims to eliminate differences in brightness, contrast, resolution and the like between the images, so that each image has consistency in vision and data characteristics, and image stacking and three-dimensional model construction can be better performed.
The consistent tumor image is a corrected tumor image after consistency treatment, has similar image characteristics, and can be fused together more accurately.
The image stacking result is a result obtained by stacking the consistent tumor images according to the corresponding spatial positions, and the result contains comprehensive information of the tumor at different visual angles and is key data for constructing a three-dimensional model.
As one example, the three-dimensional model construction module 150 first ranks the corrected tumor images according to the image acquisition order. Meanwhile, according to the geometrical relationship (such as the angle, the position and the like of the imaging equipment) and clinical requirements during acquisition, the corresponding spatial position of each image is determined. For example, if the acquisition is a circular scan around the patient's body, the position of each image in three-dimensional space may be determined from the scan angle and distance.
Then, the average brightness and contrast of each corrected tumor image are calculated, and the brightness and contrast of all corrected tumor images are adjusted to similar ranges by means of histogram equalization, gray scale stretching and the like. For example, a global histogram equalization algorithm may be employed to enhance the overall contrast of the images, reducing the brightness and contrast differences between different images. Meanwhile, the resolution of each corrected tumor image is checked, and if the difference exists, an interpolation algorithm (such as bilinear interpolation, cubic spline interpolation and the like) is adopted to unify the resolution of the corrected tumor image to the same level. For example, the resolution of all corrected tumor images is adjusted to 512×512 pixels. And then removing noise in the corrected tumor image by using a filtering algorithm (such as Gaussian filtering, median filtering and the like) so as to improve the quality of the image. For example, gaussian filtering is used to smooth the corrected tumor image, reducing the effect of noise on subsequent analysis.
Then, each consistent tumor image is placed at a corresponding position in three-dimensional space according to the determined spatial position. This process may be implemented using image processing software or a specialized medical image processing platform. In the stacking process, the overlapping area between the images needs to be considered, and a proper fusion algorithm (such as weighted average fusion, maximum fusion and the like) is adopted to process the overlapping area, so that the transition of the images is natural. For example, for the pixel values of the overlapping area, a weighted average method may be used to calculate the final pixel value according to the positions and weights of the pixel points in different images.
Finally, the contour information of the tumor is extracted from the image stacking result, and a three-dimensional surface model of the tumor is constructed by using a surface reconstruction algorithm (such as Marching Cubes algorithm). The algorithm can intuitively display the appearance of the tumor by extracting the isosurface in the three-dimensional data field to generate the surface model of the tumor. The image stacking result can also be regarded as a three-dimensional voxel data field, and each voxel corresponds to a pixel point in the image. Based on the gray values or other characteristic information of the voxels, a voxel model of the tumor is constructed, which may contain internal structural information of the tumor.
By means of image stacking and three-dimensional model construction, the method and the device can integrate tumor image information under different acquisition visual angles, provide complete form and structure information of tumors in a three-dimensional space, and help doctors to comprehensively understand the characteristics of the tumors, including the size, shape, position, relationship with surrounding tissues and the like of the tumors.
As an alternative embodiment, the tumor image acquisition module 110 is specifically configured to:
transmitting an image acquisition signal to a tumor scanner so that the tumor scanner scans tumors of a target patient;
Receiving original signal data of a target patient collected by a tumor scanner;
And converting the original signal data into a two-dimensional image format through a target reconstruction algorithm to obtain initial tumor images under a plurality of acquisition view angles.
In this embodiment, the image acquisition signal is an instruction signal for triggering the tumor scanner to start a scanning operation. The tumor scanner can be an electric signal, a digital signal and the like in a specific format, and contains parameter information required by scanning, such as a scanning range, a scanning layer thickness, a scanning mode and the like, so as to guide the tumor scanner to scan in a preset mode.
Tumor scanners are medical devices that are specifically used to detect and image tumors, and are commonly known as computed tomography scanners, magnetic resonance imaging (mri), positron emission tomography-computed tomography scanners, and the like. The method can acquire the relevant information of the tumor in the human body through different imaging principles.
Raw signal data is raw data collected by a tumor scanner during scanning of a target patient. The data is a digitized representation of various physical signals (e.g., X-ray attenuation signals, magnetic resonance signals, gamma ray signals from positron annihilation, etc.) received by the scanner detector, including structural and functional information of tumors and other tissues within the patient.
The target reconstruction algorithm is an algorithm specifically used to convert the original signal data into a two-dimensional image format. Different imaging devices have different imaging principles and data characteristics, so that the raw signal data needs to be processed by adopting a corresponding reconstruction algorithm to generate a clear and accurate two-dimensional image.
The two-dimensional image format is an image data format which is obtained by processing the original signal data and is displayed in a two-dimensional plane form. Common two-dimensional image formats are medical digital imaging and communication formats, etc., which facilitate viewing, analysis and diagnosis by a physician on a computer.
As one example, the tumor image acquisition module 110 determines parameters required for scanning, such as scan range (determining body parts and regions scanned), scan layer thickness (thickness of each scan), scan mode (e.g., pan scan, enhancement scan, etc.), according to the specific condition (e.g., condition, examination site, etc.) and clinical needs of the target patient. And then coding the determined scanning parameters according to a signal format required by the tumor scanner to generate an image acquisition signal. For example, for CT scanners, it is necessary to encapsulate the parameter information into command signals via a specific communication protocol. And then the generated image acquisition signals are sent to a tumor scanner in a wired or wireless mode. Common transmission modes include Ethernet connection, optical fiber transmission and the like, so that signals can reach a scanner accurately and timely.
Then, a data receiving channel is established with the tumor scanner while the image acquisition signal is transmitted. In particular, this typically involves setting parameters such as data transfer protocol, port number, etc., to ensure that the data sent by the scanner can be received correctly. When the tumor scanner completes scanning and starts to send the original signal data, the tumor image acquisition module 110 receives the data according to the set protocol and caches the data in the memory or the hard disk. During the reception process, integrity checks of the data are required to ensure that the data are not lost or corrupted.
Finally, according to the type of the tumor scanner and the characteristics of the original signal data, a proper target reconstruction algorithm is selected. For example, for the original data of CT scan, common reconstruction algorithms are filtered back projection algorithm and iterative reconstruction algorithm. And then using a programming language to realize a selected target reconstruction algorithm, and inputting the received original signal data into the algorithm for processing. The algorithm performs a series of mathematical operations and transformations, such as filtering, back-projection, fourier transformation, etc., on the raw data to extract the image information. After the algorithm processing, the obtained image data is converted into a two-dimensional image format. Meanwhile, according to the acquired view angle information during scanning, corresponding view angle identifiers are given to each two-dimensional image, and initial tumor images under a plurality of acquired view angles are obtained.
According to the embodiment, the initial tumor images under a plurality of acquisition view angles are acquired, so that the form, structure and position information of the tumor can be displayed more comprehensively and accurately, and doctors can diagnose the type, stage and severity of the tumor more accurately.
It should be noted that the foregoing embodiments are merely illustrative of the technical solutions of the present application, and not restrictive, and although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may be modified or some of the technical features may be replaced equivalently, and these modifications or replacements do not make the essence of the corresponding technical solutions deviate from the scope of the technical solutions of the embodiments of the present application and are included in the protection scope of the present application.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202511404357.0A CN120876517B (en) | 2025-09-29 | 2025-09-29 | Tumor image automatic segmentation and three-dimensional reconstruction system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202511404357.0A CN120876517B (en) | 2025-09-29 | 2025-09-29 | Tumor image automatic segmentation and three-dimensional reconstruction system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN120876517A CN120876517A (en) | 2025-10-31 |
| CN120876517B true CN120876517B (en) | 2025-11-25 |
Family
ID=97472129
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202511404357.0A Active CN120876517B (en) | 2025-09-29 | 2025-09-29 | Tumor image automatic segmentation and three-dimensional reconstruction system |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120876517B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2003091726A (en) * | 2001-09-17 | 2003-03-28 | Nhk Engineering Services Inc | Reflection parameter acquisition device, reflection component separation device, reflection parameter acquisition program, and reflection component separation program |
| CN115359100A (en) * | 2022-07-13 | 2022-11-18 | 深圳市中科微光医疗器械技术有限公司 | Fusion radiography method, registration method and device for intracavity image and radiography image |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI574671B (en) * | 2016-06-27 | 2017-03-21 | 太豪生醫股份有限公司 | Analysis method for breast image and electronic apparatus thereof |
| US20230259864A1 (en) * | 2022-02-14 | 2023-08-17 | International Business Machines Corporation | Workplace enhancement via digital twin-based simulation |
| WO2023186350A1 (en) * | 2022-03-31 | 2023-10-05 | Spheron-VR AG | Unmanned aircraft for optical area detection |
-
2025
- 2025-09-29 CN CN202511404357.0A patent/CN120876517B/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2003091726A (en) * | 2001-09-17 | 2003-03-28 | Nhk Engineering Services Inc | Reflection parameter acquisition device, reflection component separation device, reflection parameter acquisition program, and reflection component separation program |
| CN115359100A (en) * | 2022-07-13 | 2022-11-18 | 深圳市中科微光医疗器械技术有限公司 | Fusion radiography method, registration method and device for intracavity image and radiography image |
Also Published As
| Publication number | Publication date |
|---|---|
| CN120876517A (en) | 2025-10-31 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7760515B2 (en) | Automated tumor detection based on image processing | |
| US5657362A (en) | Automated method and system for computerized detection of masses and parenchymal distortions in medical images | |
| US20200367853A1 (en) | Apparatus for ai-based automatic ultrasound diagnosis of liver steatosis and remote medical diagnosis method using the same | |
| CN118262875A (en) | Medical image diagnosis and contrast film reading method | |
| CN116580819B (en) | Method and system for automatically determining inspection results in an image sequence | |
| CN115830016B (en) | Medical image registration model training method and equipment | |
| CN116580068B (en) | Multi-mode medical registration method based on point cloud registration | |
| CN114343604A (en) | Tumor detection and diagnosis device based on medical image | |
| El-Baz et al. | Appearance analysis for diagnosing malignant lung nodules | |
| El-Baz et al. | Appearance analysis for the early assessment of detected lung nodules | |
| CN114529502B (en) | Deep learning-based methods and systems for automated subject anatomy and orientation recognition | |
| CN119048694A (en) | Multi-mode-based three-dimensional image post-processing method and system | |
| CN119887773B (en) | A medical image recognition and processing system and method based on multimodal image fusion | |
| EP4557235A1 (en) | Hip joint angle measurement system and method | |
| CN111340825A (en) | Method and system for generating mediastinal lymph node segmentation model | |
| CN117958970A (en) | A real-time navigation method for oral surgery based on CT and laser oral scanning | |
| WO2021117013A1 (en) | Determination of medical condition risk using thermographic images | |
| CN117351489B (en) | Head and neck tumor target area delineating system for whole-body PET/CT scanning | |
| CN119151967B (en) | A medical image analysis method and system based on plain scan CT data | |
| CN120876517B (en) | Tumor image automatic segmentation and three-dimensional reconstruction system | |
| CN119963613A (en) | A medical image and three-dimensional space registration method based on 3D printing technology | |
| CN118537699B (en) | Multi-mode oral cavity image data fusion and processing method | |
| CN119648550B (en) | A method for improving image quality based on low-resolution mammograms | |
| CN119579809B (en) | Three-dimensional image processing method for fundus examination | |
| CN120236704B (en) | Medical report generation method and system for skin burn severity |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |