
CN119417821B - Gastrointestinal surgery auxiliary system and method - Google Patents

Gastrointestinal surgery auxiliary system and method

Info

Publication number
CN119417821B
CN119417821B
Authority
CN
China
Prior art keywords
gastrointestinal
semantic
scanning
feature
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510014592.0A
Other languages
Chinese (zh)
Other versions
CN119417821A (en)
Inventor
李娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University
Priority to CN202510014592.0A
Publication of CN119417821A
Application granted
Publication of CN119417821B
Legal status: Active
Anticipated expiration

Links

Classifications

    • A61B6/032 Transmission computed tomography [CT]
    • A61B6/50 Apparatus or devices for radiation diagnosis specially adapted for specific body parts or specific clinical applications
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5258 Detection or reduction of artifacts or noise
    • A61B90/08 Accessories or related features not otherwise provided for
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06T7/0012 Biomedical image inspection
    • G06V10/26 Segmentation of patterns in the image field
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections
    • G06V10/454 Integrating filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/806 Fusion of extracted features
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30028 Colon; Small intestine
    • G06T2207/30092 Stomach; Gastric


Abstract


The present application relates to the field of intelligent assistance and specifically discloses a gastrointestinal surgical assistance system and method. The system acquires a gastrointestinal CT scan image of a target patient object, applies computer-vision and AI-based image recognition and processing algorithms to preprocess the image, extracts trunk semantic features and boundary semantic features from the preprocessed image, and intelligently generates an image semantic segmentation result containing an annotated tumor boundary based on a fine-grained reinforced fusion representation of the two feature sets. In this way, subtle changes that are difficult for the human eye to discern can be captured, so that tumor boundaries are identified and annotated more accurately, providing doctors with intuitive visual assistance. In addition, the automated processing flow significantly reduces the time doctors spend on manual analysis, improving the speed and efficiency of diagnosis.

Description

Gastrointestinal surgical assistance systems and methods
Technical Field
The present application relates to the field of intelligent assistance, and more particularly, to a gastrointestinal surgical assistance system and method.
Background
With the development of medical imaging technology, particularly computed tomography (CT), doctors can obtain high-quality images of internal body structures, which is of great importance for the diagnosis and treatment of disease. Early detection and accurate localization of digestive system diseases, particularly gastrointestinal tumors, are critical to improving patient survival and quality of life.
However, traditional CT image analysis relies primarily on the experience and subjective judgment of the radiologist, a process that is time-consuming and laborious. Especially when faced with large amounts of image data, the workload of doctors increases drastically, which not only adds to their burden but may also affect the accuracy and consistency of diagnosis. In addition, the human eye has limited sensitivity to subtle changes; especially in the face of low-contrast or microscopic lesions, some critical but subtle information may be missed, increasing the risk of misdiagnosis and missed diagnosis.
Accordingly, a gastrointestinal surgical assistance scheme is desired to assist a physician in more effectively performing tumor detection, thereby improving the accuracy and efficiency of diagnosis.
Disclosure of Invention
The present application has been made to solve the above-mentioned technical problems.
According to one aspect of the present application, there is provided a gastrointestinal surgical assistance system comprising:
The gastrointestinal CT scanning image acquisition module is used for acquiring a gastrointestinal CT scanning image of the target patient object;
The CT scanning image preprocessing module is used for carrying out image noise reduction and contrast enhancement on the gastrointestinal CT scanning image so as to obtain a gastrointestinal CT scanning enhanced image;
The CT scanning image feature extraction module is used for carrying out semantic coding and boundary feature extraction on the gastrointestinal CT scanning enhanced image so as to obtain gastrointestinal CT scanning trunk semantic coding features and gastrointestinal CT scanning boundary feature semantic coding features;
The CT scanning image multi-scale feature fusion module is used for carrying out trunk-boundary fine-granularity reinforcement interaction on the gastrointestinal CT scanning trunk semantic coding features and the gastrointestinal CT scanning boundary semantic coding features to obtain gastrointestinal CT scanning multi-scale semantic reinforcement fusion coding features. The CT scanning image multi-scale feature fusion module comprises a CT feature dissociation unit and a CT feature compensation aggregation unit, wherein the CT feature dissociation unit is used for carrying out feature dissociation on the gastrointestinal CT scanning trunk semantic coding features and the gastrointestinal CT scanning boundary semantic coding features to obtain a set of gastrointestinal CT scanning trunk semantic local features and a set of gastrointestinal CT scanning boundary semantic local features;
The semantic segmentation module is used for obtaining an image semantic segmentation result based on the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature, wherein the image semantic segmentation result comprises a marked tumor boundary.
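Among the modules above, the image noise reduction and contrast enhancement performed by the CT scanning image preprocessing module can be sketched in NumPy. The patent does not fix particular algorithms, so the 3×3 mean filter and the window/level contrast mapping below are illustrative assumptions only:

```python
import numpy as np

def denoise_mean3(img: np.ndarray) -> np.ndarray:
    """3x3 mean filter with edge padding -- a simple stand-in for CT denoising."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy : dy + img.shape[0], dx : dx + img.shape[1]]
    return out / 9.0

def window_contrast(img: np.ndarray, center: float, width: float) -> np.ndarray:
    """Linear window/level mapping of CT values (HU) into the display range [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# Toy 4x4 "CT slice" in Hounsfield units; the soft-tissue window is hypothetical.
slice_hu = np.array([[  0.0,  40.0,  40.0,  80.0],
                     [ 40.0,  60.0,  60.0,  40.0],
                     [ 40.0,  60.0,  60.0,  40.0],
                     [ 80.0,  40.0,  40.0,   0.0]])
enhanced = window_contrast(denoise_mean3(slice_hu), center=40.0, width=400.0)
```

A production system would more likely use learned or anisotropic denoising, but the enhanced image's role in the pipeline is the same: a normalized input for the feature extraction module.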
The CT scanning image feature extraction module is used for inputting the gastrointestinal CT scanning enhanced image into a dual-branch boundary information loss compensation network consisting of a backbone network and a boundary feature extraction branch, so as to obtain a gastrointestinal CT scanning trunk semantic coding feature map serving as the gastrointestinal CT scanning trunk semantic coding features and a gastrointestinal CT scanning boundary feature semantic coding feature map serving as the gastrointestinal CT scanning boundary feature semantic coding features.
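The two branches are learned networks in the patent; as a rough, non-learned stand-in, one can mimic them with fixed filters (a smoothing "trunk" response and a Sobel gradient magnitude for boundaries). All filter choices here are assumptions for illustration:

```python
import numpy as np

def filter2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """'Same'-size 2-D filtering (cross-correlation) with zero padding."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i : i + img.shape[0], j : j + img.shape[1]]
    return out

def extract_features(enhanced: np.ndarray):
    """Stand-in for the dual-branch network: a smoothed 'trunk' response plus a
    boundary response built from the Sobel gradient magnitude."""
    trunk = filter2d(enhanced, np.full((3, 3), 1.0 / 9.0))
    sobel_x = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    gx, gy = filter2d(enhanced, sobel_x), filter2d(enhanced, sobel_x.T)
    boundary = np.hypot(gx, gy)
    return trunk, boundary

img = np.zeros((6, 6)); img[2:4, 2:4] = 1.0   # toy lesion on an empty background
trunk_feat, boundary_feat = extract_features(img)
```

The boundary response is zero away from the lesion and peaks along its edges, which is exactly the kind of signal the boundary branch is meant to preserve against information loss in the backbone.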
Further, the CT feature dissociation unit is configured to perform feature dissociation on the gastrointestinal CT scanning trunk semantic coding feature map and the gastrointestinal CT scanning boundary semantic coding feature map along their channel dimension, so as to obtain a set of gastrointestinal CT scanning trunk semantic local feature matrices as the set of gastrointestinal CT scanning trunk semantic local features and a set of gastrointestinal CT scanning boundary semantic local feature matrices as the set of gastrointestinal CT scanning boundary semantic local features.
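With a channel-first layout (channels, height, width), this channel-wise dissociation amounts to splitting each feature map into per-channel matrices; a minimal NumPy sketch (the shapes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical channel-first feature maps: (channels, height, width).
trunk_map = rng.random((8, 16, 16))      # trunk semantic coding feature map
boundary_map = rng.random((8, 16, 16))   # boundary semantic coding feature map

# Feature dissociation along the channel dimension: one H x W local
# feature matrix per channel, for each of the two feature maps.
trunk_locals = [trunk_map[c] for c in range(trunk_map.shape[0])]
boundary_locals = [boundary_map[c] for c in range(boundary_map.shape[0])]

# Downstream units consume the channel-wise corresponding pairs.
pairs = list(zip(trunk_locals, boundary_locals))
```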
Further, the CT feature-compensated aggregation unit includes:
The trunk-boundary fine-granularity extraction subunit is used for inputting each corresponding channel-wise pair of gastrointestinal CT scanning trunk semantic local feature matrix and gastrointestinal CT scanning boundary semantic local feature matrix, taken from the set of gastrointestinal CT scanning trunk semantic local feature matrices and the set of gastrointestinal CT scanning boundary semantic local feature matrices, into a shared semantic information extraction module based on a twin (Siamese) network structure, so as to obtain a set of gastrointestinal CT scanning trunk-boundary fine-granularity local shared semantic feature matrices;
The trunk-boundary semantic compensation feature calculation subunit is used for carrying out semantic feature compensation on the set of gastrointestinal CT scanning trunk-boundary fine-granularity local shared semantic feature matrices to obtain a set of gastrointestinal CT scanning trunk-boundary semantic compensation text semantic coding feature matrices;
The trunk-boundary fine-granularity feature semantic enhancement subunit is used for carrying out semantic enhancement on the set of gastrointestinal CT scanning trunk-boundary fine-granularity local shared semantic feature matrices based on the set of gastrointestinal CT scanning trunk-boundary semantic compensation text semantic coding feature matrices, so as to obtain a gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature map as the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding features.
Further, the trunk-boundary semantic compensation feature computation subunit is configured to:
Each gastrointestinal CT scanning trunk-boundary fine-granularity local shared semantic feature matrix in the set is input into a semantic compensation decoding module based on a large language model, so as to obtain a set of gastrointestinal CT scanning trunk-boundary semantic compensation text descriptions;
Each gastrointestinal CT scanning trunk-boundary semantic compensation text description in that set is then input into a semantic encoder based on a text convolutional neural network model, so as to obtain the set of gastrointestinal CT scanning trunk-boundary semantic compensation text semantic coding feature matrices.
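The large-language-model decoding step is beyond a short sketch, but the text-CNN semantic encoder that follows it can be illustrated: token ids are embedded, passed through fixed-width one-dimensional convolution filters, and max-pooled over time. The vocabulary, embedding size, and filter count below are made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB, FILTERS, WIDTH = 50, 8, 4, 3
embedding = rng.normal(size=(VOCAB, EMB))          # token embedding table
kernels = rng.normal(size=(FILTERS, WIDTH, EMB))   # width-3 1-D conv filters

def encode_text(token_ids):
    """Tiny text-CNN: embed -> 1-D convolution -> max-over-time pooling."""
    x = embedding[np.asarray(token_ids)]           # (T, EMB)
    T = len(token_ids)
    conv = np.array([[float(np.sum(kernels[f] * x[t : t + WIDTH]))
                      for t in range(T - WIDTH + 1)]
                     for f in range(FILTERS)])     # (FILTERS, T - WIDTH + 1)
    return conv.max(axis=1)                        # (FILTERS,) text feature vector

# Hypothetical token-id sequence for one compensation text description.
vec = encode_text([3, 17, 5, 29, 8, 14])
```

Max-over-time pooling makes the encoding length-independent, so text descriptions of different lengths all yield fixed-size compensation features.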
Further, the trunk-boundary fine granularity feature semantic enhancement subunit is configured to:
Each corresponding pair of gastrointestinal CT scanning trunk-boundary fine-granularity local shared semantic feature matrix and gastrointestinal CT scanning trunk-boundary semantic compensation text semantic coding feature matrix, taken from the two sets, is input into a fine-granularity semantic interaction compensation module to obtain a set of gastrointestinal CT scanning trunk-boundary fine-granularity local interaction semantic enhancement feature matrices;
The set of gastrointestinal CT scanning trunk-boundary fine-granularity local interaction semantic enhancement feature matrices is then aggregated along the channel dimension to obtain the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature map.
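The patent does not disclose the internals of the fine-granularity semantic interaction compensation module; one plausible toy realization gates each shared matrix by a sigmoid of its compensation encoding and then stacks the enhanced matrices back along the channel axis. Everything below is an illustrative assumption:

```python
import numpy as np

def fine_grained_interaction(shared: np.ndarray, compensation: np.ndarray) -> np.ndarray:
    """Toy interaction: modulate the shared matrix by a sigmoid gate derived
    from the compensation encoding (one of many possible designs)."""
    gate = 1.0 / (1.0 + np.exp(-compensation))
    return shared * gate

rng = np.random.default_rng(0)
shared_set = [rng.random((16, 16)) for _ in range(8)]   # per-channel shared matrices
comp_set = [rng.random((16, 16)) for _ in range(8)]     # per-channel compensation
enhanced_set = [fine_grained_interaction(s, c) for s, c in zip(shared_set, comp_set)]

# Aggregation along the channel dimension yields the fused feature map.
fused_map = np.stack(enhanced_set, axis=0)              # (8, 16, 16)
```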
Further, the semantic segmentation module is used for carrying out image semantic segmentation on the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature map to obtain an image semantic segmentation result, wherein the image semantic segmentation result comprises a marked tumor boundary.
Further, the semantic segmentation module is used for processing the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature map by using an image semantic segmenter based on a Softmax function to obtain an image semantic segmentation result, wherein the image semantic segmentation result comprises a marked tumor boundary.
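A Softmax-based segmenter of this kind can be sketched as a per-pixel classifier: a 1x1 projection of the fused feature map to class logits, a softmax over the class axis, and an argmax to produce the label map (class 1 standing in for "tumor boundary"; the projection weights and sizes are assumptions):

```python
import numpy as np

def softmax_segment(fused_map: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """1x1 projection to class logits, softmax over classes, per-pixel argmax."""
    logits = np.tensordot(weights, fused_map, axes=([1], [0]))  # (classes, H, W)
    logits = logits - logits.max(axis=0, keepdims=True)         # numeric stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    return probs.argmax(axis=0)                                 # (H, W) label map

rng = np.random.default_rng(1)
fused = rng.normal(size=(8, 16, 16))   # stand-in fused encoding feature map
w = rng.normal(size=(2, 8))            # two classes: background / tumor boundary
label_map = softmax_segment(fused, w)
```

The resulting label map marks, per pixel, whether it belongs to the tumor-boundary class, which is what the annotated segmentation result overlays on the CT image.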
According to another aspect of the present application, there is provided a gastrointestinal surgical assistance method comprising:
acquiring a gastrointestinal CT scanning image of a target patient object;
Performing image noise reduction and contrast enhancement on the gastrointestinal CT scanning image to obtain a gastrointestinal CT scanning enhanced image;
carrying out semantic coding and boundary feature extraction on the gastrointestinal CT scanning enhanced image to obtain gastrointestinal CT scanning trunk semantic coding features and gastrointestinal CT scanning boundary feature semantic coding features;
performing trunk-boundary fine granularity reinforcement interaction on the gastrointestinal CT scanning trunk semantic coding features and the gastrointestinal CT scanning boundary semantic coding features to obtain gastrointestinal CT scanning multi-scale semantic reinforcement fusion coding features, wherein the method comprises the steps of performing feature dissociation on the gastrointestinal CT scanning trunk semantic coding features and the gastrointestinal CT scanning boundary semantic coding features to obtain a gastrointestinal CT scanning trunk semantic local feature set and a gastrointestinal CT scanning boundary semantic local feature set;
And obtaining an image semantic segmentation result based on the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature, wherein the image semantic segmentation result comprises a marked tumor boundary.
Compared with the prior art, the gastrointestinal surgery auxiliary system and method provided by the present application acquire a gastrointestinal CT scanning image of a target patient object, perform image preprocessing on it using image recognition and processing algorithms based on computer vision and AI, extract trunk semantic features and boundary semantic features from the preprocessed image, and intelligently generate an image semantic segmentation result containing a marked tumor boundary based on a fine-granularity reinforced fusion representation between the two feature sets. In this way, subtle changes that are difficult for the human eye to identify can be captured, reducing misdiagnosis and missed diagnosis caused by the limitations of human observation, especially for low-contrast and microscopic lesions; tumor boundary information is thus identified and marked more accurately, providing visual assistance for doctors. Moreover, the automated processing flow significantly reduces the time doctors spend on manual analysis and improves the speed and efficiency of diagnosis.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following detailed description of embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application, are incorporated in and constitute a part of this specification, and serve to explain the application together with its embodiments; they do not constitute a limitation of the application. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a block diagram of a gastrointestinal surgical assist system according to an embodiment of the application.
Fig. 2 is a data flow schematic of a gastrointestinal surgical assist system according to an embodiment of the application.
Fig. 3 is a block diagram of a CT scan image multi-scale feature fusion module in a gastrointestinal surgical assistance system according to an embodiment of the application.
Fig. 4 is a flow chart of a gastrointestinal surgical assist method according to an embodiment of the application.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
In describing embodiments of the present disclosure, the term "comprising" and the like should be taken to be open-ended, i.e., "including, but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same objects. Other explicit and implicit definitions may also be included below.
It is noted that in the present application, all actions for acquiring signals, information or data are performed in compliance with the corresponding data protection regulation policy of the country of location and obtaining the authorization granted by the owner of the corresponding device.
Therefore, in view of the above problems, the technical idea of the present application is to acquire a gastrointestinal CT scanning image of a target patient object, apply image recognition and processing algorithms based on computer vision and AI to preprocess the image, then extract trunk semantic features and boundary semantic features from the preprocessed image, and intelligently generate an image semantic segmentation result containing a labeled tumor boundary based on a fine-grained reinforced fusion representation between the two feature sets. In this way, subtle changes that are difficult for the human eye to identify can be captured, reducing misdiagnosis and missed diagnosis caused by the limitations of human observation, especially for low-contrast and microscopic lesions; tumor boundary information is thus identified and labeled more accurately, providing visual assistance for doctors. Moreover, the automated processing flow significantly reduces the time doctors spend on manual analysis and improves the speed and efficiency of diagnosis.
Fig. 1 is a block diagram of a gastrointestinal surgical assist system according to an embodiment of the application. Fig. 2 is a data flow schematic of a gastrointestinal surgical assist system according to an embodiment of the application. As shown in figs. 1 and 2, the gastrointestinal surgery assistance system 100 according to an embodiment of the application comprises: a gastrointestinal CT scan image acquisition module 110 for acquiring a gastrointestinal CT scan image of a target patient object; a CT scan image preprocessing module 120 for performing image denoising and contrast enhancement on the gastrointestinal CT scan image to obtain a gastrointestinal CT scan enhanced image; a CT scan image feature extraction module 130 for performing semantic encoding and boundary feature extraction on the gastrointestinal CT scan enhanced image to obtain gastrointestinal CT scan trunk semantic encoding features and gastrointestinal CT scan boundary feature semantic encoding features; a CT scan image multi-scale feature fusion module 140 for performing trunk-boundary fine-granularity reinforced interaction on the trunk and boundary semantic encoding features to obtain gastrointestinal CT scan multi-scale semantic reinforced fusion encoding features; and a semantic segmentation module 150 for obtaining an image semantic segmentation result based on the multi-scale semantic reinforced fusion encoding features, the result including an annotated tumor boundary.
In an embodiment of the present application, the gastrointestinal CT scan image acquiring module 110 is configured to acquire a gastrointestinal CT scan image of a target patient object. It should be understood that the gastrointestinal CT scan image refers to an image obtained by imaging the gastrointestinal tract of a patient by a computed tomography (Computed Tomography, CT) technique, and the gastrointestinal CT scan image can provide a high resolution image of the gastrointestinal tract and surrounding tissues, which helps a doctor accurately diagnose gastrointestinal diseases, including but not limited to inflammation, ulcers, tumors, and the like. Based on the above, in order to more accurately understand and analyze the gastrointestinal internal condition of the patient object so as to accurately judge the boundary distribution condition of the tumor, thereby better assisting the doctor in diagnosis, in the technical scheme of the application, the gastrointestinal CT scanning image of the target patient object is acquired. In particular, CT scanning is a non-invasive medical imaging technique that scans the human body over multiple angles by X-rays and processes the data using a computer to generate detailed cross-sectional images of the internal structure of the human body.
Before a CT scan is performed, it is necessary to ensure that the patient has completed the appropriate preparation. This may involve a period of fasting to ensure that no food remains in the gastrointestinal tract, thereby avoiding interference with imaging quality. In addition, the doctor needs to explain the entire procedure to the patient and confirm that the patient understands and consents to the examination. In some cases, to improve contrast, it may be necessary to administer a contrast agent orally or intravenously to the patient.
The patient is then positioned on a CT scanner, typically lying on a movable bed. The couch slowly passes through the annular scanning device while the X-ray source and detector rotate around the patient. In this process, X-rays pass through the patient's body from different angles and attenuated X-ray intensity information is received by the opposite detector. These data are processed by a computer to generate a series of cross-sectional images, so-called CT scan images. For the gastrointestinal region, a multi-slice helical CT technique is typically employed, which enables a large number of high quality tomographic images to be obtained quickly in a short time, thereby providing more comprehensive and detailed anatomical information.
When the CT scan is completed, the resulting raw data will be converted to a digital image format, such as the DICOM standard format. These images contain not only rich visual information, but also a series of metadata such as patient information, scan parameters, etc. These image data are then stored in a hospital information system, such as a PACS system, for access and analysis by radiologists and other medical professionals.
Once the image data is ready, the next question is how to import these images into the system effectively. Typically, the system will have a specially designed interface that allows the user to select a particular patient case and automatically load the corresponding CT scan image. This interface may be part of a graphical user interface, or may be the result of integration with other medical information systems. Once a case is selected, the system reads the relevant image file and converts it into an internal representation suitable for further processing.
It is noted that while the above description outlines a general procedure for acquiring gastrointestinal CT scan images, the specific implementation details may vary from medical institution to medical institution and device configuration thereof. For example, some advanced CT devices may have automated patient positioning capabilities, reducing the need for human intervention, while some hospitals may use more advanced image post-processing software, enabling real-time previewing and adjusting of image quality. In any event, the final goal is to ensure that as clear and accurate an image as possible is obtained to facilitate subsequent diagnostic and therapeutic decisions. In addition, with the development of cloud computing and big data technology, such systems may also support remote access and cross-institution collaboration in the future, so that experts may view and analyze these important medical image materials anywhere, thereby further improving the quality and accessibility of medical services. In short, obtaining high quality gastrointestinal CT scan images is a precondition for the operation of the whole gastrointestinal surgical auxiliary system, which lays a solid foundation for accurate tumor detection and boundary labeling.
In an embodiment of the present application, the CT scan image preprocessing module 120 is configured to perform image noise reduction and contrast enhancement on the gastrointestinal CT scan image to obtain a gastrointestinal CT scan enhanced image. It is considered that noise, such as random noise and streak artifacts, often exists in the gastrointestinal CT scan image. Such noise may be caused by limitations of the device itself, improper scan parameter settings, or patient movement, and it reduces the definition and visual effect of the image, affecting the doctor's observation and judgment of the lesion area. Moreover, brightness differences between different tissues influence the contrast of the image and, in turn, the identification of the lesion area. Therefore, in order to remarkably improve the image quality and provide a better foundation for subsequent feature extraction and image segmentation, in the technical scheme of the application, the gastrointestinal CT scan image is subjected to image noise reduction and contrast enhancement to obtain a gastrointestinal CT scan enhanced image. In this way, noise can be effectively reduced and the image made clearer and cleaner, while the boundaries of different tissues become more obvious, making it easier to distinguish normal tissues from abnormal tissues (such as tumors).
After the original gastrointestinal CT scan image is acquired, there is often a varying degree of noise in the image due to various factors such as equipment limitations, improper scan parameter settings, or patient movement. This noise may take the form of random noise or streak artifacts, which reduce the sharpness and visual effect of the image, thereby affecting the physician's view and judgment of the lesion area. Thus, the first step is to perform noise reduction processing on the image. There is a wide range of noise reduction techniques to choose from, from traditional filtering methods (e.g., median filtering, Gaussian filtering) to more advanced deep-learning-based methods. For CT images, one common method is to use adaptive filters, which can dynamically adjust parameters according to local image content, so as to remove noise while keeping important detail information, such as edges, as much as possible. In addition, noise reduction using convolutional neural networks (CNNs) is becoming increasingly popular, because CNNs are able to learn different types of noise patterns in an image and eliminate them specifically while maintaining the structural integrity of the image.
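The median-filtering option mentioned above can be sketched as follows (a minimal Python illustration with a fixed 3×3 window; real CT pipelines would use adaptive or learned filters):

```python
import numpy as np

def median_filter3x3(img: np.ndarray) -> np.ndarray:
    """Denoise a 2-D CT slice with a 3x3 median filter (edge-replicated padding)."""
    padded = np.pad(img, 1, mode="edge")
    # Gather the 9 shifted views forming each pixel's 3x3 neighbourhood.
    stack = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0)

# A flat region with one impulse-noise pixel: the median removes the spike.
slice_ = np.full((5, 5), 100.0)
slice_[2, 2] = 1000.0  # simulated impulse (streak-like) noise
denoised = median_filter3x3(slice_)
```

Because the median of any 3×3 neighbourhood containing at most one outlier equals the background value, the spike is removed while flat regions and edges are preserved.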
This is followed by contrast enhancement in order to further improve the visual appearance of the image. Contrast refers to the degree of difference between the brightest and darkest portions of an image, which directly affects the distinguishability between different tissues in the image. In the field of medical imaging, effective contrast enhancement is particularly important, especially when facing low contrast lesions. Common contrast enhancement techniques include histogram equalization, adaptive histogram equalization, and transform domain based methods (e.g., wavelet transform). Among them, histogram equalization is a simple and effective method for expanding the dynamic range of an image by reassigning pixel values so that the overall brightness distribution of the image is more uniform, thereby improving contrast. However, this approach may lead to oversaturation of the image, i.e. some areas are too bright or too dark. In contrast, the adaptive histogram equalization performs the equalization operation in a local area, so that the detail information of the image can be better preserved. In addition, transform domain-based methods, particularly wavelet transforms in combination with multi-resolution analysis, can independently adjust the contrast of images at different scales, helping to highlight fine structural features.
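The global histogram equalization described above can be sketched as follows (a minimal illustration for an 8-bit image; the adaptive variant would apply the same CDF mapping per local region):

```python
import numpy as np

def hist_equalize(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Global histogram equalization for an 8-bit image via the CDF mapping."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size            # normalised cumulative distribution
    return np.round(cdf[img] * (levels - 1)).astype(np.uint8)

# A low-contrast image occupying only grey levels 100..103 is stretched
# toward the full 0..255 dynamic range.
low_contrast = np.tile(np.array([100, 101, 102, 103], dtype=np.uint8), (4, 1))
equalized = hist_equalize(low_contrast)
```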
In practical applications, noise reduction and contrast enhancement are often combined, which complement each other. For example, the contrast of the model output may be optimized by a specific loss function design while noise reduction is performed using a convolutional neural network, or the contrast enhancement step may be performed separately after the noise reduction is completed. The combined method not only can effectively remove noise, but also can remarkably improve the contrast of images, so that boundaries among different tissues are more obvious, and a high-quality basis is provided for subsequent feature extraction.
It should be noted that although the above method provides various ways to improve the image quality, the implementation needs to consider the requirements of a specific application scenario. For example, for certain specific types of tumors, special attention may be paid to some minor detail changes, and a solution that is effective in reducing noise and finely enhancing contrast is needed. In addition, with the development of technology, more and more researches are focused on developing more intelligent preprocessing algorithms which can automatically identify the type and degree of noise in an image and adjust the noise reduction intensity accordingly, and meanwhile, the contrast enhancement effect can be adaptively adjusted according to the content of the image so as to achieve optimal visual presentation.
Therefore, the image quality can be remarkably improved by carrying out image noise reduction and contrast enhancement on the gastrointestinal CT scanning image, a solid foundation is laid for the following steps of semantic coding, boundary feature extraction, multi-scale feature fusion and the like, and therefore the finally obtained image semantic segmentation result is more accurate and reliable.
In the embodiment of the present application, the CT scan image feature extraction module 130 is configured to perform semantic encoding and boundary feature extraction on the gastrointestinal CT scan enhanced image to obtain a gastrointestinal CT scan trunk semantic encoding feature and a gastrointestinal CT scan boundary feature semantic encoding feature. Specifically, in the embodiment of the present application, the CT scan image feature extraction module 130 is configured to input the gastrointestinal CT scan enhanced image into a backbone network and a dual-branch boundary information loss compensation network of a boundary feature extraction branch to obtain a gastrointestinal CT scan backbone semantic coding feature map as the gastrointestinal CT scan backbone semantic coding feature and a gastrointestinal CT scan boundary feature semantic coding feature map as the gastrointestinal CT scan boundary feature semantic coding feature. It should be appreciated that the gastrointestinal CT scan enhancement image is considered to contain major macroscopic structural information such as organ location, morphology, etc., but also implies boundary detail information between different tissues or organs. In the conventional single-channel network processing process, as the network depth increases, the detail information (especially the boundary information) of the bottom layer is easily ignored or lost, so that the effect and the precision of feature extraction are reduced, and the final segmentation result is not accurate enough. Based on the above, in the technical scheme of the application, the gastrointestinal CT scan enhancement image is input into a main network and a double-branch boundary information loss compensation network of a boundary feature extraction branch to obtain a gastrointestinal CT scan main semantic coding feature map and a gastrointestinal CT scan boundary feature semantic coding feature map. 
It will be appreciated that the backbone network is responsible for extracting backbone semantic features of the gastrointestinal CT scan-enhanced image, which can reflect the overall structure and content of the image, including large-scale information of organs, tissues, etc. Boundary feature extraction branches are specifically used to extract boundary information in the gastrointestinal CT scan enhancement image, i.e., edges between different tissues or structures, which is critical to accurately locating a lesion region (e.g., tumor boundary). Therefore, the double-branch network can reserve and strengthen boundary information on different layers through the cooperative work of the main network and the boundary feature extraction branches, so that information loss is effectively compensated, and the segmentation accuracy is improved.
Specifically, the process of inputting the gastrointestinal CT scan enhancement image into the dual-branch boundary information loss compensation network, which comprises the backbone network and the boundary feature extraction branch, to obtain the gastrointestinal CT scan backbone semantic coding feature map as the gastrointestinal CT scan backbone semantic coding feature and the gastrointestinal CT scan boundary feature semantic coding feature map as the gastrointestinal CT scan boundary feature semantic coding feature is as follows:
first, at the input, the preprocessed gastrointestinal CT scan enhancement image is sent to the network. This initial stage may involve several convolution layers and pooling operations to perform preliminary feature extraction and reduce the spatial resolution of the image. The primary feature maps are then split into two paths, one path entering the backbone network and the other path entering the boundary feature extraction branch.
Backbone networks are typically based on popular convolutional neural network (CNN) architectures such as ResNet, VGG, or DenseNet, which stack multiple convolutional layers and pooling layers. These networks are good at extracting high-level abstract features in the image, which can reflect the overall structure and content of the image. In each layer, feature patterns at multiple scales are detected by using convolution kernels of different sizes. As network depth increases, these features become increasingly abstract, ultimately forming one or more backbone semantically encoded feature maps. These feature maps contain large-scale information of the gastrointestinal tract and its surrounding tissues, such as the location and morphology of organs.
At the same time, the boundary feature extraction branch focuses on capturing edge information in the image. This branch may employ a U-Net-like design concept, consisting of a downsampling path and an upsampling path. The downsampling path is similar to the first few layers of the backbone network, using convolution and pooling operations to reduce the spatial dimension and extract coarse features. However, unlike the backbone network, the boundary feature extraction branch pays particular attention, during the downsampling process, to information that helps delineate boundaries. To achieve this, the branch may use a special convolution kernel configuration, such as dilated convolution, to expand the receptive field without losing resolution. In addition, attention mechanisms may be introduced to highlight important boundary regions.
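The dilated convolution mentioned above enlarges the receptive field without reducing resolution; a one-dimensional sketch makes this concrete (the edge-detecting kernel and the dilation rates are illustrative assumptions):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D dilated convolution: kernel taps are spaced
    `dilation` samples apart, enlarging the receptive field without
    adding parameters or reducing the output resolution."""
    k = len(kernel)
    span = (k - 1) * dilation          # receptive-field span of one output
    pad = span // 2
    xp = np.pad(x, pad)
    return np.array([sum(kernel[j] * xp[i + j * dilation] for j in range(k))
                     for i in range(len(x))])

signal = np.zeros(9)
signal[4] = 1.0                                 # unit impulse
edge_kernel = np.array([-1.0, 0.0, 1.0])
resp_d1 = dilated_conv1d(signal, edge_kernel, dilation=1)  # taps 1 apart
resp_d3 = dilated_conv1d(signal, edge_kernel, dilation=3)  # taps 3 apart
```

With dilation 3 the same three-tap kernel responds over a span three times wider, while the output length stays equal to the input length.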
Once the downsampling process is completed, the boundary feature extraction branch begins upsampling, gradually restoring the spatial dimensions of the image. In this process, the low-level features from the downsampling path are combined with the high-level features, typically via skip (jump) connections, to better preserve the boundary information. Upsampling may be achieved by transpose convolution, nearest neighbor interpolation, or similar techniques. In this way, the boundary feature extraction branch ultimately generates a boundary semantic coding feature map that clearly shows edges between different tissues or structures.
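The upsampling-with-skip-connection scheme described above can be sketched as follows (nearest-neighbour upsampling and the channel counts are illustrative assumptions):

```python
import numpy as np

def upsample_nn(feat: np.ndarray, factor: int = 2) -> np.ndarray:
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def skip_merge(decoder_feat: np.ndarray, encoder_feat: np.ndarray) -> np.ndarray:
    """U-Net-style jump (skip) connection: upsample the deep decoder
    feature and concatenate the shallow encoder feature along channels,
    so low-level boundary detail is carried into the upsampling path."""
    up = upsample_nn(decoder_feat)
    return np.concatenate([up, encoder_feat], axis=0)

deep = np.ones((8, 4, 4))      # 8-channel low-resolution decoder feature
shallow = np.zeros((4, 8, 8))  # 4-channel high-resolution encoder feature
merged = skip_merge(deep, shallow)
```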
Therefore, the whole double-branch network structure can be utilized to comprehensively capture macro-structure information in an image and finely draw fine boundary details, so that the accuracy and reliability of tumor detection can be effectively improved even facing complex medical image data.
In the embodiment of the present application, the CT scan image multi-scale feature fusion module 140 is configured to perform trunk-boundary fine granularity enhancement interaction on the gastrointestinal CT scan trunk semantic coding feature and the gastrointestinal CT scan boundary semantic coding feature to obtain the gastrointestinal CT scan multi-scale semantic enhancement fusion coding feature. Further, it is considered that the trunk semantic coding features and the boundary semantic coding features of the gastrointestinal CT scan respectively contain feature information of different scales. In particular, the trunk semantic coding features focus on large-scale global structural information, while the boundary semantic coding features focus on small-scale local detail information; both are important for segmentation of lesion areas. Therefore, in the technical scheme of the application, the trunk semantic coding feature and the boundary semantic coding feature of the gastrointestinal CT scan are subjected to trunk-boundary fine granularity reinforcement interaction to obtain the multi-scale semantic reinforcement fusion coding feature of the gastrointestinal CT scan. In this way, information can be transmitted and supplemented between features of different scales; for example, the fine granularity information in the boundary features can enhance the local detail in the trunk features, thereby characterizing the gastrointestinal CT image more richly and comprehensively, identifying and segmenting the lesion region more accurately, reducing false segmentation and missed segmentation, and improving segmentation precision and reliability.
Fig. 3 is a block diagram of a CT scan image multi-scale feature fusion module in a gastrointestinal surgical assistance system according to an embodiment of the application. Specifically, as shown in fig. 3, the multi-scale feature fusion module 140 of CT scan image includes a CT feature dissociation unit 141 configured to perform feature dissociation on the main semantic feature of gastrointestinal CT scan and the boundary semantic feature of gastrointestinal CT scan to obtain a set of main semantic local features of gastrointestinal CT scan and a set of boundary semantic local features of gastrointestinal CT scan, and a CT feature compensation aggregation unit 142 configured to perform fine-granularity semantic feature compensation aggregation on the set of main semantic local features of gastrointestinal CT scan and the set of boundary semantic local features of gastrointestinal CT scan to obtain the multi-scale semantic enhancement fusion encoding feature of gastrointestinal CT scan.
Specifically, the CT feature dissociation unit 141 is configured to perform feature dissociation on the gastrointestinal CT scan trunk semantic coding feature map and the gastrointestinal CT scan boundary semantic coding feature map along a channel dimension of the gastrointestinal CT scan trunk semantic coding feature map to obtain a set of gastrointestinal CT scan trunk semantic local feature matrices as the set of gastrointestinal CT scan trunk semantic local features and a set of gastrointestinal CT scan boundary semantic local feature matrices as the set of gastrointestinal CT scan boundary semantic local features.
The processing procedure of the CT feature dissociation unit 141 described above can be expressed as:
{M_1, M_2, …, M_N} = Dissociate(F_trunk);

{B_1, B_2, …, B_N} = Dissociate(F_bound);

wherein F_trunk and F_bound are respectively the gastrointestinal CT scan trunk semantic coding feature map and the gastrointestinal CT scan boundary semantic coding feature map, Dissociate(·) denotes the feature dissociation operation performed on a feature map along its channel dimension, M_1, M_2, …, M_i and M_N are respectively the 1st, 2nd, …, i-th and N-th gastrointestinal CT scan trunk semantic local feature matrices in the set of gastrointestinal CT scan trunk semantic local feature matrices, and B_1, B_2, …, B_i and B_N are respectively the 1st, 2nd, …, i-th and N-th gastrointestinal CT scan boundary semantic local feature matrices in the set of gastrointestinal CT scan boundary semantic local feature matrices. That is, each feature channel contains a specific type of feature information (such as texture, shape, color, etc.), and feature dissociation can capture the locally refined feature information in the image more finely.
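The channel-dimension feature dissociation performed by unit 141 can be sketched as follows (an illustrative Python fragment; the number of groups and the feature-map shapes are assumptions, not values from the embodiment):

```python
import numpy as np

def dissociate(feature_map: np.ndarray, groups: int):
    """Split a (C, H, W) feature map along the channel dimension into
    `groups` equal local feature blocks, mirroring the per-channel-group
    dissociation described above (equal group size is an assumption)."""
    return np.split(feature_map, groups, axis=0)

trunk = np.random.rand(16, 32, 32)      # trunk semantic coding feature map
boundary = np.random.rand(16, 32, 32)   # boundary semantic coding feature map
trunk_locals = dissociate(trunk, 4)     # 4 local feature matrices, 4 channels each
boundary_locals = dissociate(boundary, 4)
```

Concatenating the local blocks back along the channel axis recovers the original feature map, so dissociation loses no information; it only regroups channels for finer-grained processing.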
Specifically, the CT feature compensation aggregation unit 142 includes a trunk-boundary fine granularity extraction subunit, a trunk-boundary semantic compensation feature calculation subunit, and a trunk-boundary fine granularity feature semantic enhancement subunit. The trunk-boundary fine granularity extraction subunit is configured to input each group of the gastrointestinal CT scan trunk semantic local feature matrix and the gastrointestinal CT scan boundary semantic local feature matrix of corresponding channel dimensions into a shared semantic information extraction module based on a twin network structure to obtain a set of gastrointestinal CT scan trunk-boundary fine granularity local shared semantic feature matrices; the trunk-boundary semantic compensation feature calculation subunit is configured to perform semantic feature compensation on the set of gastrointestinal CT scan trunk-boundary fine granularity local shared semantic feature matrices to obtain a set of gastrointestinal CT scan trunk-boundary semantic compensation text semantic coding feature matrices; and the trunk-boundary fine granularity feature semantic enhancement subunit is configured to perform semantic enhancement on the set of gastrointestinal CT scan trunk-boundary fine granularity local shared semantic feature matrices by using the set of gastrointestinal CT scan trunk-boundary semantic compensation text semantic coding feature matrices, and to aggregate the result to obtain the gastrointestinal CT scan multi-scale semantic enhancement fusion coding feature map as the gastrointestinal CT scan multi-scale semantic enhancement fusion encoding feature.
The processing of the trunk-boundary fine granularity extraction subunit described above can be formulated as:
S_i = σ(PConv(Cat(M_i ⊕ B_i, M_i ⊗ B_i, M_i ⊖ B_i)));

wherein M_i is the i-th gastrointestinal CT scan trunk semantic local feature matrix in the set of gastrointestinal CT scan trunk semantic local feature matrices, B_i is the i-th gastrointestinal CT scan boundary semantic local feature matrix in the set of gastrointestinal CT scan boundary semantic local feature matrices, ⊕, ⊗ and ⊖ denote position-wise addition, position-wise multiplication and position-wise subtraction respectively, Cat(·) denotes the cascade operation, PConv(·) denotes point convolution, σ(·) is the activation function, and S_i is the gastrointestinal CT scan trunk-boundary fine granularity local shared semantic feature matrix between M_i and B_i. That is, through the twin network structure, the trunk semantic local feature matrix and the boundary semantic local feature matrix can be processed simultaneously, and the shared semantic information between them extracted and mined. This shared information reflects the common nature of the trunk and boundary features across the different channels, helping to understand the image content more fully. It is worth mentioning that the twin network architecture enables the model to master the relevance among different features and promotes cross-domain or cross-modal knowledge transfer and learning.
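As an illustrative sketch of the twin-branch shared-semantic extraction just described (position-wise add/multiply/subtract, cascade, point convolution, activation), the following Python fragment shows the computation on a single pair of local feature matrices; the single output channel and the Sigmoid activation are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def shared_semantics(m, b, w):
    """Position-wise add / multiply / subtract the trunk local feature m and
    boundary local feature b, cascade the three maps along a new channel
    axis, fuse them back with a point (1x1) convolution of weights w, then
    apply a Sigmoid activation. Shapes and activation are assumptions."""
    cascade = np.stack([m + b, m * b, m - b])          # (3, H, W)
    fused = np.tensordot(w, cascade, axes=([0], [0]))  # 1x1 conv over channels
    return sigmoid(fused)                              # (H, W) shared matrix

m = np.random.rand(8, 8)          # one trunk semantic local feature matrix
b = np.random.rand(8, 8)          # the matching boundary local feature matrix
w = np.array([0.5, 0.3, 0.2])     # point-convolution weights (one output channel)
s = shared_semantics(m, b, w)
```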
More specifically, in the embodiment of the application, the trunk-boundary semantic compensation feature calculation subunit is used for respectively inputting each gastrointestinal CT scan trunk-boundary fine-granularity local shared semantic feature matrix in the set into a semantic compensation decoding module based on a large language model to obtain a set of gastrointestinal CT scan trunk-boundary semantic compensation text descriptions, and respectively inputting each gastrointestinal CT scan trunk-boundary semantic compensation text description in the set into a semantic encoder based on a text convolutional neural network model to obtain the set of gastrointestinal CT scan trunk-boundary semantic compensation text semantic coding feature matrices. It should be appreciated that if only shared semantic features are focused on during feature interaction, other important information, such as complementary features, may be lost. Complementary features contain unique information that may not be apparent in the trunk or boundary features alone, but can provide additional context and detail during the fusion process, helping to improve the integrity and accuracy of the final feature representation. Therefore, each gastrointestinal CT scan trunk-boundary fine-granularity local shared semantic feature matrix in the set is respectively input into the semantic compensation decoding module based on the large language model to obtain the set of gastrointestinal CT scan trunk-boundary semantic compensation text descriptions.
In particular, the semantic compensation decoding module based on the large language model is similar to an accurate detector, can convert complex fine-grained local shared semantic features into natural language descriptions, supplements and perfects semantic layers of the features by using prior knowledge in the model, and generates detailed text reports, wherein the text descriptions can comprise key information such as the position, the size, the shape, the boundary definition and the like of lesions, so that the semantic depth and the breadth of feature expression are improved. And then, respectively inputting each gastrointestinal CT scanning trunk-boundary semantic compensation text description in the gastrointestinal CT scanning trunk-boundary semantic compensation text description set into a semantic encoder based on a text convolutional neural network model to obtain a set of gastrointestinal CT scanning trunk-boundary semantic compensation text semantic coding feature matrixes, so that each compensation text description can be subjected to semantic embedded coding, and the alignment of features is realized, so that the subsequent fusion processing of semantic compensation is facilitated.
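The text-convolutional semantic encoder described above can be illustrated by a minimal sketch (the toy vocabulary, embedding size, and kernel width are assumptions; a real encoder would use many learned filters):

```python
import numpy as np

def text_conv_encode(token_ids, embed, kernel):
    """Minimal text-CNN semantic encoder: embed the tokens of a
    compensation text description, slide one width-2 convolution filter
    over the sequence, then max-pool over time to a fixed-size code."""
    seq = embed[token_ids]                       # (T, D) embedded description
    T = len(token_ids)
    conv = np.array([np.sum(kernel * seq[t:t + 2]) for t in range(T - 1)])
    return conv.max()                            # max-over-time pooling

rng = np.random.default_rng(0)
embed = rng.standard_normal((10, 4))   # toy vocabulary of 10 tokens, dim 4
kernel = rng.standard_normal((2, 4))   # one width-2 convolution filter
tokens = [3, 1, 4, 1, 5]               # a tokenized compensation description
code = text_conv_encode(tokens, embed, kernel)
```

With a bank of such filters the pooled outputs form the fixed-size text semantic coding feature used for alignment and fusion with the image features.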
The processing procedure of the main-boundary semantic compensation feature calculation subunit can be expressed as follows:
T_i = Decode(S_i);

C_i = TextConv(T_i);

wherein S_i is the gastrointestinal CT scan trunk-boundary fine granularity local shared semantic feature matrix between M_i and B_i, Decode(·) denotes the semantic compensation decoding operation, T_i is the gastrointestinal CT scan trunk-boundary semantic compensation text description corresponding to S_i, TextConv(·) denotes the text convolutional coding operation, and C_i is the gastrointestinal CT scan trunk-boundary semantic compensation text semantic coding feature matrix corresponding to T_i.
More specifically, in the embodiment of the application, the trunk-boundary fine-granularity characteristic semantic enhancement subunit is configured to input each group of corresponding gastrointestinal CT scan trunk-boundary fine-granularity local-sharing semantic feature matrix and gastrointestinal CT scan trunk-boundary semantic compensation text semantic coding feature matrix in the gastrointestinal CT scan trunk-boundary fine-granularity local-sharing semantic feature matrix set and the gastrointestinal CT scan trunk-boundary semantic compensation text semantic coding feature matrix set into a fine-granularity semantic interaction compensation module to obtain a gastrointestinal CT scan trunk-boundary fine-granularity local-interaction semantic enhancement feature matrix set, and aggregate the gastrointestinal CT scan trunk-boundary fine-granularity local-interaction semantic enhancement feature matrix set along a channel dimension to obtain the gastrointestinal CT scan multiscale semantic enhancement fusion coding feature map.
The processing procedure of the main-boundary fine-grained feature semantic enhancement subunit can be expressed as follows:
E_i = α · Softmax((S_i × C_iᵀ) / √(W·H)) × C_i + β · S_i;

F_ms = Cat(E_1, E_2, …, E_K);

wherein S_i is the gastrointestinal CT scan trunk-boundary fine granularity local shared semantic feature matrix between M_i and B_i, C_i is the gastrointestinal CT scan trunk-boundary semantic compensation text semantic coding feature matrix corresponding to S_i, C_iᵀ is the transposed matrix of C_i, × denotes matrix multiplication, W·H is the scale of S_i, i.e., the width of the matrix multiplied by the height of the matrix, Softmax(·) is the Softmax function, α and β are respectively the weight coefficients, E_1, E_2, …, E_i and E_K are respectively the 1st, 2nd, …, i-th and K-th gastrointestinal CT scan trunk-boundary fine granularity local interaction semantic enhancement feature matrices in the set of gastrointestinal CT scan trunk-boundary fine granularity local interaction semantic enhancement feature matrices, K is the number of feature matrices in that set, Cat(·) denotes aggregating the set of feature matrices along the channel dimension, and F_ms is the gastrointestinal CT scan multi-scale semantic enhancement fusion coding feature map.
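The fine granularity semantic interaction compensation described above resembles a scaled attention of the image feature over the text feature. A minimal sketch, assuming a residual blend of the attention output with the shared feature by the weight coefficients:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def interact(S, C, alpha=0.5, beta=0.5):
    """Scaled attention of the shared image feature S over the text coding
    feature C, blended back with S by weight coefficients alpha and beta
    (the residual blending form is an assumption)."""
    scale = np.sqrt(S.shape[0] * S.shape[1])   # width x height of the matrix
    attn = softmax(S @ C.T / scale, axis=-1)   # row-normalised attention weights
    return alpha * (attn @ C) + beta * S

S = np.random.rand(6, 6)   # shared fine-granularity semantic feature matrix
C = np.random.rand(6, 6)   # text semantic compensation coding feature matrix
E = interact(S, C)
```

Setting alpha to 0 and beta to 1 reduces the interaction to the identity on S, which makes the residual role of the two weight coefficients explicit.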
In this way, the image features (the gastrointestinal CT scan trunk-boundary fine granularity local shared semantic feature matrices) and the text features (the gastrointestinal CT scan trunk-boundary semantic compensation text semantic coding feature matrices) can be subjected to fine granularity depth fusion through fine granularity semantic interaction compensation, so that the complementary information in the image and the text is fully utilized and a more comprehensive and accurate feature representation is generated. Finally, the set of gastrointestinal CT scan trunk-boundary fine granularity local interaction semantic enhancement feature matrices is aggregated along the channel dimension to obtain the gastrointestinal CT scan multi-scale semantic enhancement fusion coding feature map, so that different local information is integrated into a unified feature map, global and local features in the image can be captured, and the comprehensiveness and richness of the feature representation are improved.
In the embodiment of the present application, the semantic segmentation module 150 is configured to obtain an image semantic segmentation result based on the multi-scale semantic enhancement fusion coding feature of the gastrointestinal CT scan, where the image semantic segmentation result includes a labeled tumor boundary. Specifically, in the embodiment of the present application, the semantic segmentation module 150 is configured to perform image semantic segmentation on the gastrointestinal CT scan multi-scale semantic enhanced fusion encoding feature map to obtain the image semantic segmentation result, where the image semantic segmentation result includes a labeled tumor boundary.
More specifically, in the embodiment of the present application, the semantic segmentation module 150 is configured to process the multi-scale semantic enhanced fusion encoding feature map of the gastrointestinal CT scan by using an image semantic segmenter based on a Softmax function to obtain the image semantic segmentation result, where the image semantic segmentation result includes a labeled tumor boundary. Namely, performing image segmentation processing by utilizing the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding features obtained by carrying out fine granularity enhancement on the gastrointestinal CT scanning main semantic coding feature map and the gastrointestinal CT scanning boundary semantic coding feature map, so as to intelligently generate an image semantic segmentation result containing the marked tumor boundary.
In particular, in one embodiment of the present application, the multi-scale semantic enhanced fusion encoding feature map of the gastrointestinal CT scan may be processed by an image semantic segmenter based on a Softmax function, which specifically includes the following steps:
Firstly, taking the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map as input.
Then, convolutional layers of a convolutional neural network (CNN) are used to extract features from the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature map. This step helps the model capture local pattern and structure information in the feature map, and an activation function (such as ReLU) adds nonlinearity so that the model can better learn complex features.
The feature map is then restored to a size close to the original image using upsampling techniques or transposed convolution operations to facilitate pixel-level classification. To retain more detail, skip connections can be used to combine early low-level features with later high-level features, enhancing the model's ability to capture fine detail.
After the feature map is restored to the original size, a convolutional layer whose number of output channels equals the number of categories to be identified is applied to generate a classification score for each pixel. A Softmax transformation then converts each pixel's classification scores into probability values representing the likelihood that the pixel belongs to each category.
Then, the difference between the predicted probability distribution and the ground-truth labels is computed and used to guide model training, so that the model gradually reduces errors and improves classification accuracy.
Finally, the probability map output by the model is subjected to threshold processing, and the final segmentation result of each pixel point is determined. For example, if the threshold is 0.5, pixels with a probability greater than 0.5 are labeled as tumor, otherwise labeled as background. Meanwhile, in order to remove noise and small areas in the segmentation result, morphological operations such as erosion and dilation can be used to ensure that the segmentation boundary is smoother and more natural, resulting in an image semantic segmentation result that contains the labeled tumor boundary. Through the series of steps, the image semantic segmenter based on the Softmax function can effectively process the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature map, generate a high-quality segmentation result and provide powerful support for clinical application.
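The final stages of the pipeline above (per-pixel class scoring, Softmax, thresholding, and morphological cleanup) can be sketched as follows. This is a hedged, minimal NumPy/SciPy illustration: the 1x1-convolution weights `w` and `b`, the two-class setup, and all shapes are assumptions rather than the patent's actual segmenter, and the learned feature-extraction and upsampling stages are omitted.

```python
import numpy as np
from scipy import ndimage

def softmax(x, axis=0):
    # Numerically stable softmax along the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def segment(feature_map, w, b, threshold=0.5):
    """Minimal per-pixel two-class head: 1x1 conv -> Softmax -> threshold -> cleanup.

    feature_map: (C, H, W); w: (2, C) hypothetical 1x1-conv weights; b: (2,).
    Class 0 is background, class 1 is tumor (an editorial convention).
    """
    # 1x1 convolution producing one score map per class.
    scores = np.tensordot(w, feature_map, axes=([1], [0])) + b[:, None, None]
    probs = softmax(scores, axis=0)   # per-pixel class probabilities
    mask = probs[1] > threshold       # tumor wherever P(tumor) exceeds the threshold
    # Morphological opening (erosion then dilation) removes isolated noise
    # pixels and smooths the segmentation boundary.
    return ndimage.binary_opening(mask, structure=np.ones((3, 3)))

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 16, 16))
mask = segment(feats, w=rng.standard_normal((2, 4)), b=np.zeros(2))
assert mask.shape == (16, 16) and mask.dtype == bool
```

A full implementation would use learned convolutional layers and transposed convolutions with skip connections; the sketch only shows the classification, Softmax, threshold, and morphology stages described in the text.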
In one example, considering that the gastrointestinal CT scanning trunk semantic coding feature map and the gastrointestinal CT scanning boundary semantic coding feature map respectively represent the trunk image semantic features and the boundary image semantic features of the gastrointestinal CT scanning enhanced image, the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature map obtained through feature enhancement interaction based on cross-domain fine granularity semantic compensation has a rich but unstructured semantic aggregation distribution. This unstructured character causes repeated, invalid probability mappings and affects the accuracy of the image semantic segmentation result obtained by image semantic segmentation.
Based on the above, when the image semantic segmentation is carried out on the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map to obtain an image semantic segmentation result, the application optimizes the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map, and comprises the following steps:
determining the number of zero feature values N₀ of the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map, and subtracting one from N₀ to obtain the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding zero-micro value z = N₀ − 1;

multiplying and dividing the zero-micro value z by the number of zero feature values N₀, respectively, to obtain a first gastrointestinal CT scanning multi-scale semantic enhanced fusion coding field fitting value a₁ = z·N₀ and a second gastrointestinal CT scanning multi-scale semantic enhanced fusion coding field fitting value a₂ = z/N₀;

calculating the square root of the sum of squares of all feature values of the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map to obtain the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding mode characterization value:

r = sqrt(Σᵢ vᵢ²);

wherein vᵢ represents the i-th feature value of the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map, and r represents the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding mode characterization value;

raising each feature value of the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map to the power of the zero-micro value z, and multiplying the result point-wise by the first field fitting value a₁, to obtain a first gastrointestinal CT scanning multi-scale semantic enhanced fusion coding intermediate feature map V₁;

point-multiplying the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map by the second field fitting value a₂ and the mode characterization value r to obtain a second gastrointestinal CT scanning multi-scale semantic enhanced fusion coding intermediate feature map V₂;

calculating the weighted point-wise subtraction V′ = β₁⊙V₁ ⊖ β₂⊙V₂ of the first gastrointestinal CT scanning multi-scale semantic enhanced fusion coding intermediate feature map V₁ and the second gastrointestinal CT scanning multi-scale semantic enhanced fusion coding intermediate feature map V₂ to obtain the optimized gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map, wherein β₁ and β₂ represent weighting hyper-parameters, ⊙ represents position-wise multiplication, and ⊖ represents position-wise subtraction. Finally, image semantic segmentation is performed on the optimized gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map to obtain the image semantic segmentation result.
That is, based on a vector-field fitting rule on a high-dimensional manifold, an isolated zero in the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map is treated as a micro dimension of the manifold, fixing the normal direction toward the distribution center near the local coordinates represented by the feature values of the feature map. By effectively aligning the feature distribution pattern of the gastrointestinal CT scanning multi-scale semantic enhanced fusion coding feature map with a probability density micro field, this avoids invalid repetition in the probability mapping process caused by the unstructured character of the feature map, and improves the accuracy of the image semantic segmentation result obtained by image semantic segmentation.
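The optimization steps above can be sketched numerically as follows. The symbol names (N0, z, a1, a2, beta1, beta2) are editorial placeholders, since the patent's original notation was lost in extraction, and the guard for the no-zero case is an added assumption.

```python
import numpy as np

def optimize_feature_map(V, beta1=0.5, beta2=0.5):
    """Sketch of the zero-feature-value-based optimization described above."""
    v = V.astype(float)
    N0 = np.count_nonzero(v == 0)      # number of zero feature values
    if N0 == 0:                        # guard: the formula needs at least one zero
        return v
    z = N0 - 1                         # zero-micro value
    a1 = z * N0                        # first field fitting value
    a2 = z / N0                        # second field fitting value
    r = np.sqrt(np.sum(v ** 2))        # mode characterization value (L2 norm)
    V1 = a1 * np.power(v, z)           # first intermediate feature map
    V2 = v * a2 * r                    # second intermediate feature map
    return beta1 * V1 - beta2 * V2     # weighted position-wise subtraction

# Toy 2x2 "feature map" with two zero entries, so N0 = 2 and z = 1.
V = np.array([[0.0, 1.0], [2.0, 0.0]])
out = optimize_feature_map(V)
assert out.shape == V.shape
```

In this toy case the zero entries remain zero after optimization, consistent with treating isolated zeros as fixed anchor points of the distribution.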
In summary, the gastrointestinal surgery assistance system 100 according to the embodiment of the application has been described. It acquires a gastrointestinal CT scanning image of a target patient object, performs image preprocessing using computer-vision and AI-based image recognition and processing algorithms, extracts trunk semantic features and boundary semantic features from the preprocessed image, and intelligently generates an image semantic segmentation result containing a labeled tumor boundary based on a fine-granularity enhanced fusion representation of the two features. In this way, subtle changes that are difficult for the human eye to identify can be captured, reducing misdiagnosis and missed diagnosis caused by the limitations of human observation, especially for low-contrast and minute lesions, so that tumor boundary information is identified and labeled more accurately, providing visual assistance to doctors. The automated processing flow also significantly reduces doctors' manual analysis time and improves diagnostic speed and efficiency.
As described above, the gastrointestinal surgical assistance system 100 according to the embodiment of the present application may be implemented in various wireless terminals, such as a server or the like having a gastrointestinal surgical assistance algorithm. In one possible implementation, the gastrointestinal surgical assistance system 100 according to an embodiment of the application may be integrated into the wireless terminal as one software module and/or hardware module. For example, the gastrointestinal surgical assistance system 100 may be a software module in the operating system of the wireless terminal or may be an application developed for the wireless terminal, although the gastrointestinal surgical assistance system 100 may equally be one of the many hardware modules of the wireless terminal.
Alternatively, in another example, the gastrointestinal surgical assistance system 100 and the wireless terminal may be separate devices, with the gastrointestinal surgical assistance system 100 connected to the wireless terminal through a wired and/or wireless network, exchanging interactive information in an agreed-upon data format.
Fig. 4 is a flow chart of a gastrointestinal surgical assist method according to an embodiment of the application. As shown in FIG. 4, the method comprises steps S110 to S150: S110, acquiring a gastrointestinal CT scanning image of a target patient object; S120, performing image noise reduction and contrast enhancement on the gastrointestinal CT scanning image to obtain a gastrointestinal CT scanning enhanced image; S130, performing semantic coding and boundary feature extraction on the gastrointestinal CT scanning enhanced image to obtain gastrointestinal CT scanning trunk semantic coding features and gastrointestinal CT scanning boundary semantic coding features; S140, performing trunk-boundary fine granularity enhancement interaction on the gastrointestinal CT scanning trunk semantic coding features and the gastrointestinal CT scanning boundary semantic coding features to obtain gastrointestinal CT scanning multi-scale semantic enhancement fusion coding features; and S150, obtaining an image semantic segmentation result based on the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding features, the image semantic segmentation result including a labeled tumor boundary.
Here, it will be understood by those skilled in the art that the specific operations of the respective steps in the above-described gastrointestinal surgical assistance method have been described in detail in the above description of the gastrointestinal surgical assistance system with reference to fig. 1 to 3, and thus, repetitive descriptions thereof will be omitted.
Implementations of the present disclosure have been described above; the foregoing description is exemplary rather than exhaustive and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the implementations described. The terminology used herein was chosen to best explain the principles of each implementation, the practical application, or improvements over technology in the marketplace, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.

Claims (8)

1. A gastrointestinal surgical assist system, comprising:
The gastrointestinal CT scanning image acquisition module is used for acquiring a gastrointestinal CT scanning image of the target patient object;
The CT scanning image preprocessing module is used for carrying out image noise reduction and contrast enhancement on the gastrointestinal CT scanning image so as to obtain a gastrointestinal CT scanning enhanced image;
The CT scanning image feature extraction module is used for carrying out semantic coding and boundary feature extraction on the gastrointestinal CT scanning enhanced image so as to obtain gastrointestinal CT scanning trunk semantic coding features and gastrointestinal CT scanning boundary feature semantic coding features;
The CT scanning image multi-scale feature fusion module is used for carrying out trunk-boundary fine granularity reinforcement interaction on the gastrointestinal CT scanning trunk semantic coding features and the gastrointestinal CT scanning boundary semantic coding features to obtain gastrointestinal CT scanning multi-scale semantic reinforcement fusion coding features, and comprises a CT feature dissociation unit and a CT feature compensation aggregation unit, wherein the CT feature dissociation unit is used for carrying out feature dissociation on the gastrointestinal CT scanning trunk semantic coding features and the gastrointestinal CT scanning boundary semantic coding features to obtain a gastrointestinal CT scanning trunk semantic local feature collection and a gastrointestinal CT scanning boundary semantic local feature collection;
The semantic segmentation module is used for obtaining an image semantic segmentation result based on the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature, wherein the image semantic segmentation result comprises a marked tumor boundary;
Wherein, CT characteristic compensation polymerization unit includes:
The main-boundary fine granularity extraction subunit is used for inputting the gastrointestinal CT scanning main semantic local feature matrix and the gastrointestinal CT scanning boundary semantic local feature matrix of each group of corresponding channel dimensions in the set of the gastrointestinal CT scanning main semantic local feature matrix and the set of the gastrointestinal CT scanning boundary semantic local feature matrix into the sharing semantic information extraction module based on the twin network structure so as to obtain the set of the gastrointestinal CT scanning main-boundary fine granularity local sharing semantic feature matrix;
The main-boundary semantic compensation feature calculation subunit is used for carrying out semantic feature compensation on the set of the gastrointestinal CT scanning main-boundary fine-granularity local sharing semantic feature matrix to obtain a set of gastrointestinal CT scanning main-boundary semantic compensation text semantic coding feature matrix;
The main-boundary fine granularity characteristic semantic enhancement subunit is used for carrying out semantic enhancement on the set of the gastrointestinal CT scanning main-boundary fine granularity local sharing semantic characteristic matrix based on the set of the gastrointestinal CT scanning main-boundary semantic compensation text semantic coding characteristic matrix to obtain a gastrointestinal CT scanning multi-scale semantic enhancement fusion coding characteristic map as the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding characteristic.
2. The gastrointestinal surgical auxiliary system according to claim 1, wherein the CT scan image feature extraction module is configured to input the gastrointestinal CT scan enhancement image into a main network and a dual-branch boundary information loss compensation network of a boundary feature extraction branch to obtain a gastrointestinal CT scan main semantic coding feature map as the gastrointestinal CT scan main semantic coding feature and a gastrointestinal CT scan boundary feature semantic coding feature map as the gastrointestinal CT scan boundary feature semantic coding feature.
3. The gastrointestinal surgical auxiliary system according to claim 2, wherein the CT feature dissociation unit is configured to perform feature dissociation on the gastrointestinal CT scan trunk semantic coding feature map and the gastrointestinal CT scan boundary semantic coding feature map along a channel dimension of the gastrointestinal CT scan trunk semantic coding feature map to obtain a set of gastrointestinal CT scan trunk semantic local feature matrices as the set of gastrointestinal CT scan trunk semantic local features and a set of gastrointestinal CT scan boundary semantic local feature matrices as the set of gastrointestinal CT scan boundary semantic local features.
4. The gastrointestinal surgical assist system according to claim 3, wherein the trunk-boundary semantic compensation feature calculation subunit is configured to:
Inputting each gastrointestinal CT scanning trunk-boundary fine-granularity local sharing semantic feature matrix in the gastrointestinal CT scanning trunk-boundary fine-granularity local sharing semantic feature matrix set into a semantic compensation decoding module based on a large language model respectively to obtain a gastrointestinal CT scanning trunk-boundary semantic compensation text description set;
And respectively inputting each gastrointestinal CT scanning trunk-boundary semantic compensation text description in the gastrointestinal CT scanning trunk-boundary semantic compensation text description set into a semantic encoder based on a text convolutional neural network model to obtain a gastrointestinal CT scanning trunk-boundary semantic compensation text semantic encoding feature matrix set.
5. The gastrointestinal surgical assist system according to claim 4, wherein the trunk-boundary fine-grained feature semantic enhancement subunit is configured to:
Inputting each group of the corresponding gastrointestinal CT scanning trunk-boundary fine granularity local sharing semantic feature matrix and gastrointestinal CT scanning trunk-boundary semantic compensation text semantic coding feature matrix in the set of gastrointestinal CT scanning trunk-boundary fine granularity local sharing semantic feature matrices and the set of gastrointestinal CT scanning trunk-boundary semantic compensation text semantic coding feature matrices into a fine granularity semantic interaction compensation module to obtain a set of gastrointestinal CT scanning trunk-boundary fine granularity local interaction semantic enhancement feature matrices;
And aggregating the set of the gastrointestinal CT scanning trunk-boundary fine granularity local interaction semantic enhancement feature matrix along the channel dimension to obtain the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature map.
6. The gastrointestinal surgical assist system according to claim 5, wherein the semantic segmentation module is configured to perform image semantic segmentation on the gastrointestinal CT scan multi-scale semantically enriched fusion encoding feature map to obtain the image semantic segmentation result, the image semantic segmentation result including a labeled tumor boundary.
7. The gastrointestinal surgical assist system according to claim 6, wherein the semantic segmentation module is configured to process the gastrointestinal CT scan multi-scale semantically enhanced fusion encoding feature map using a Softmax function-based image semantic segmenter to obtain the image semantic segmentation result, the image semantic segmentation result including a labeled tumor boundary.
8. A gastrointestinal surgical assist method using the gastrointestinal surgical assist system of claim 1, comprising:
acquiring a gastrointestinal CT scanning image of a target patient object;
Performing image noise reduction and contrast enhancement on the gastrointestinal CT scanning image to obtain a gastrointestinal CT scanning enhanced image;
carrying out semantic coding and boundary feature extraction on the gastrointestinal CT scanning enhanced image to obtain gastrointestinal CT scanning trunk semantic coding features and gastrointestinal CT scanning boundary feature semantic coding features;
performing trunk-boundary fine granularity reinforcement interaction on the gastrointestinal CT scanning trunk semantic coding features and the gastrointestinal CT scanning boundary semantic coding features to obtain gastrointestinal CT scanning multi-scale semantic reinforcement fusion coding features, wherein the method comprises the steps of performing feature dissociation on the gastrointestinal CT scanning trunk semantic coding features and the gastrointestinal CT scanning boundary semantic coding features to obtain a gastrointestinal CT scanning trunk semantic local feature set and a gastrointestinal CT scanning boundary semantic local feature set;
And obtaining an image semantic segmentation result based on the gastrointestinal CT scanning multi-scale semantic enhancement fusion coding feature, wherein the image semantic segmentation result comprises a marked tumor boundary.
CN202510014592.0A 2025-01-06 2025-01-06 Gastrointestinal surgery auxiliary system and method Active CN119417821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510014592.0A CN119417821B (en) 2025-01-06 2025-01-06 Gastrointestinal surgery auxiliary system and method


Publications (2)

Publication Number Publication Date
CN119417821A CN119417821A (en) 2025-02-11
CN119417821B true CN119417821B (en) 2025-03-14

Family

ID=94460192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510014592.0A Active CN119417821B (en) 2025-01-06 2025-01-06 Gastrointestinal surgery auxiliary system and method

Country Status (1)

Country Link
CN (1) CN119417821B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119672788A (en) * 2025-02-20 2025-03-21 浙江孚宝智能科技有限公司 Intelligent health care companion robot with face tracking function

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115512110A (en) * 2022-09-23 2022-12-23 南京邮电大学 Medical image tumor segmentation method related to cross-modal attention mechanism
CN117911424A (en) * 2024-01-10 2024-04-19 南京工业大学 A semi-supervised intracerebral hemorrhage image segmentation method based on double teacher structure

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179298B (en) * 2019-12-12 2023-05-02 深圳市旭东数字医学影像技术有限公司 Three-dimensional lung automatic segmentation and left and right lung separation method and system based on CT image
CN118397261A (en) * 2024-03-13 2024-07-26 沈阳东软智能医疗科技研究院有限公司 Lung CT image segmentation method, device, electronic equipment and storage medium
CN118212418A (en) * 2024-04-15 2024-06-18 河南大学 A liver tumor segmentation and detection method based on multi-task learning
CN118397280B (en) * 2024-06-19 2024-08-27 吉林大学 Endoscopic gastrointestinal tract image segmentation and recognition system and method based on artificial intelligence

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115512110A (en) * 2022-09-23 2022-12-23 南京邮电大学 Medical image tumor segmentation method related to cross-modal attention mechanism
CN117911424A (en) * 2024-01-10 2024-04-19 南京工业大学 A semi-supervised intracerebral hemorrhage image segmentation method based on double teacher structure

Also Published As

Publication number Publication date
CN119417821A (en) 2025-02-11

Similar Documents

Publication Publication Date Title
CN107492071B (en) Medical image processing method and equipment
Hu et al. AS-Net: Attention Synergy Network for skin lesion segmentation
CN111429474B (en) Mammary gland DCE-MRI image focus segmentation model establishment and segmentation method based on mixed convolution
CN111368849A (en) Image processing method, image processing device, electronic equipment and storage medium
CN119417821B (en) Gastrointestinal surgery auxiliary system and method
CN118485643B (en) Medical image analysis processing system based on image analysis
CN111369562A (en) Image processing method, image processing device, electronic equipment and storage medium
CN115908449A (en) 2.5D medical CT image segmentation method and device based on improved UNet model
CN117197166B (en) Polyp image segmentation method and imaging method based on edge and neighborhood information
CN111488912A (en) A diagnosis system for laryngeal diseases based on deep learning neural network
WO2024245469A1 (en) Three-dimensional reconstruction method and system for soft tissue
Zia et al. VANT-GAN: adversarial learning for discrepancy-based visual attribution in medical imaging
Yue et al. Deep pyramid network for low-light endoscopic image enhancement
CN118505529A (en) Medical image fusion method, device, equipment and storage medium
CN115965785A (en) Image segmentation method, device, equipment, program product and medium
Huang et al. A fatty liver diseases classification network based on adaptive coordination attention with label smoothing
CN118823344A (en) Medical image semantic segmentation method and system based on channel and spatial attention mechanism
Fu et al. SMDFnet: Saliency multiscale dense fusion network for MRI and CT image fusion
CN112967254A (en) Lung disease identification and detection method based on chest CT image
CN117853720A (en) Mammary gland image segmentation system, method and computer storage medium
CN118116576A (en) Intelligent case analysis method and system based on deep learning
Vani et al. Image enhancement of wireless capsule endoscopy frames using image fusion technique
Zhou et al. Motico: an attentional mechanism network model for smart aging disease risk prediction based on image data classification
CN117455845A (en) An intelligent image processing method and system for CT images of acute cerebral hemorrhage
Dimililer et al. Image preprocessing phase with artificial intelligence methods on medical images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant