
CN118212459A - A method, device and medium for predicting esophageal structure, tumor contour and stage - Google Patents

A method, device and medium for predicting esophageal structure, tumor contour and stage

Info

Publication number
CN118212459A
Authority
CN
China
Prior art keywords
tumor
contour
esophageal
layer
stage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410349867.1A
Other languages
Chinese (zh)
Other versions
CN118212459B (en)
Inventor
经秉中
贺龙君
罗琳娜
李茵
冯灵德
李超峰
邓一术
陈浩华
李彬
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Hede Supply Chain Co.,Ltd.
Original Assignee
Sun Yat Sen University Cancer Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University Cancer Center filed Critical Sun Yat Sen University Cancer Center
Priority to CN202410349867.1A priority Critical patent/CN118212459B/en
Publication of CN118212459A publication Critical patent/CN118212459A/en
Application granted granted Critical
Publication of CN118212459B publication Critical patent/CN118212459B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08Clinical applications
    • A61B8/0833Clinical applications involving detecting or locating foreign bodies or organic structures
    • A61B8/085Clinical applications involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002Operational features of endoscopes
    • A61B1/00004Operational features of endoscopes characterised by electronic signal processing
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/273Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/273Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the upper alimentary canal, e.g. oesophagoscopes, gastroscopes
    • A61B1/2733Oesophagoscopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/12Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B8/5238Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
    • A61B8/5261Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from different diagnostic modalities, e.g. ultrasound and X-ray
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/457Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30028Colon; Small intestine
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Animal Behavior & Ethology (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Software Systems (AREA)
  • Veterinary Medicine (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Optics & Photonics (AREA)
  • Gastroenterology & Hepatology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Vascular Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method, a device and a medium for predicting esophageal structure, tumor contour and stage. The method comprises the following steps: extracting features from an original endoscopic ultrasound image with a preset model to obtain full-size features; performing structure splitting and node reconstruction on the full-size features to obtain control points of the three-layer esophageal structure and of the tumor growth contour; generating cubic B-spline curves of the esophagus and the tumor from the control points; obtaining a tumor staging result by calculating the relative distances between the cubic B-spline curves; and forming the prediction result from the cubic B-spline curves and the tumor staging result. By generating cubic B-spline curves, the method accurately represents the predicted esophageal wall structure and tumor contours; by calculating the relative distances between the curves, it keeps the tumor staging result interpretable; and it thereby addresses the low accuracy of tumor localization and staging results.

Description

A method, device and medium for predicting esophageal structure, tumor contour and stage

Technical Field

The present invention relates to the field of artificial intelligence, and in particular to a method, device and medium for predicting tumor staging probability based on the esophageal structure.

Background Art

Gastric cancer, colorectal cancer and esophageal cancer are among the most common cancers worldwide, and early detection of these diseases is crucial to the cure rate. Endoscopic ultrasonography has particular advantages in distinguishing tumor stages T1-T4, and the finer T1a and T1b stages, and is an important method for the early detection of malignant diseases of the digestive tract. At the same time, artificial intelligence can help medical staff identify early cancers more accurately; AI-assisted endoscopy improves screening efficiency, supports more precise diagnoses, and addresses early screening and early diagnosis, opening a new stage in the medical field.

However, AI-based medical technology still faces many problems in processing and diagnosing endoscopic ultrasound images. Existing approaches select and extract the diagnosis-related regions with a neural network diagnostic model and then directly apply a convolutional neural network (CNN) to obtain the classification prediction, so the prediction lacks good interpretability. In addition, the imaging quality of endoscopic ultrasound images is unstable, and existing methods fail to make effective use of anatomical knowledge for tumor identification and staging, resulting in low accuracy of tumor localization and staging results.

Summary of the Invention

The present invention provides a method, device and medium for predicting tumor staging probability based on the esophageal structure, so as to solve the problem of the low accuracy of tumor localization and staging results.

To solve the above problems, the present invention provides a method for predicting esophageal structure, tumor contour and stage, comprising:

obtaining an original endoscopic ultrasound image;

extracting features from the original endoscopic ultrasound image with a preset model to obtain full-size features;

performing structure splitting and node reconstruction on the full-size features with the preset model to obtain control points of the three-layer esophageal structure and of the tumor growth contour;

generating cubic B-spline curves of the esophagus and the tumor from the control points by spline interpolation;

obtaining a tumor staging result by calculating the relative distances between the cubic B-spline curves;

forming the prediction result of the esophageal structure, the tumor contours and the tumor stage from the cubic B-spline curves and the tumor staging result;

wherein the preset model is a deep learning model composed of different components and modules.

By extracting features from the original endoscopic ultrasound image, the present invention removes redundant image content, which preserves the data accuracy of the endoscopic ultrasound image while reducing the complexity of data processing. By constructing control points of the three-layer esophageal structure and of the tumor growth contour, the characteristics of tumor growth are fully captured, which supports an effective assessment of the state of esophageal disease. Generating cubic B-spline curves by spline interpolation connects the discrete points into curves that accurately represent the shapes of the esophagus and the tumor. Because the relative distances between the curves indicate the degree of tissue invasion between the different wall layers, the tumor staging result can be obtained quickly and conveniently.

Compared with the prior art, the present invention introduces a model for image processing, which improves the accurate localization of the region of interest; the multi-layer polygon generation technique provides cubic B-spline curves that conform to the anatomy and accurately represent the predicted esophageal wall contours and tumor contours; and by calculating the relative distances between the curves, the degree of tissue invasion is obtained, which keeps the tumor staging result interpretable and solves the problem of the low accuracy of tumor localization and staging results.

As a preferred solution, using the preset model to perform structure splitting and node reconstruction on the full-size features to obtain the control points of the three-layer esophageal structure and of the tumor growth contour specifically comprises:

using the structure detection module of the preset model to perform dimension transformation and splitting on the full-size features to obtain global feature tokens;

combining the global feature tokens with preset esophagus-and-tumor control point features and applying coordinate transformations to obtain the control points of the three-layer esophageal structure;

performing feature mapping of the preset tumor features on different radii to obtain the control points of the outward and inward tumor growth contours.

By predicting contour control points in the two dimensions of outward and inward tumor growth, this preferred solution obtains the control points of the outward and inward tumor growth contours, which fully captures the characteristics of tumor growth and supports an effective assessment of the state of esophageal disease.

As a preferred solution, using the structure detection module of the preset model to perform dimension transformation and splitting on the full-size features to obtain the global feature tokens specifically comprises:

using the structure detection module of the preset model to perform a dimension transformation on the full-size features to obtain a first token;

constructing a learnable second token according to the number of esophageal structure control points to be predicted;

concatenating the first token and the second token to obtain a third token;

inputting the third token into the Transformer module of the preset model and splitting the resulting tokens to obtain the global feature tokens.

In this preferred solution, the dimension transformation, token construction, token concatenation and token splitting applied to the full-size features organize and collect the feature information they contain, so that the resulting global feature tokens carry global features while being invariant and easy to interpret, providing a solid data basis for the subsequent prediction of the esophageal structure and the tumor.

As a preferred solution, combining the global feature tokens with the preset esophagus-and-tumor control point features and applying coordinate transformations to obtain the control points of the three-layer esophageal structure specifically comprises:

computing the esophagus-and-tumor control point features from the global feature tokens with a first preset formula;

mapping the esophagus-and-tumor control point features with a linear transformation layer and converting the mapping result into polar coordinates to obtain the node coordinates of the mucosal layer contour;

computing the relative radii of the submucosal layer and the adventitia layer from the control point features with the linear transformation layer, and computing the actual radii of the submucosal layer and the adventitia layer from the node coordinates and the relative radii;

converting the control points corresponding to the control point features into Cartesian coordinates according to the actual radii and the angles of the node coordinates relative to the center of the esophagus, to obtain the control points of the three-layer esophageal structure.

The relative radii of the submucosal and adventitia layers computed in this preferred solution are limited in that, on their own, they cannot accurately represent the distances between the mucosal layers. Since a relative radius represents a relative displacement, the actual radius is computed from the relative displacement together with the node coordinates, so that the displacement of the submucosal layer and the adventitia layer relative to the mucosal contour is represented accurately.
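As an illustration of this coordinate handling, the following sketch (in PyTorch) converts mucosal node polar coordinates and the predicted relative radii into Cartesian control points for the three layers. The function name, the tensor layout and the simple additive combination of the relative radii are assumptions made for the example, not details fixed by the present description.

```python
# A minimal sketch of converting the predicted polar quantities into Cartesian control
# points for the three esophageal layers. Treating the relative radii as offsets added to
# the mucosal radius is an assumption for illustration only.
import torch

def layer_control_points(r_mucosa, theta, dr_submucosa, dr_adventitia, center):
    """r_mucosa, theta, dr_submucosa, dr_adventitia: [B, n_s]; center: [B, 2]."""
    # Actual radii: relative radii are offsets from the mucosal radius (assumed).
    r_submucosa = r_mucosa + dr_submucosa
    r_adventitia = r_submucosa + dr_adventitia

    def to_cartesian(r):
        # Node positions around the esophageal center from radius and angle.
        x = center[:, 0:1] + r * torch.cos(theta)
        y = center[:, 1:2] + r * torch.sin(theta)
        return torch.stack([x, y], dim=-1)            # [B, n_s, 2]

    return to_cartesian(r_mucosa), to_cartesian(r_submucosa), to_cartesian(r_adventitia)
```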

As a preferred solution, performing feature mapping of the preset tumor features on different radii to obtain the control points of the outward and inward tumor growth contours specifically comprises:

defining the tumor features;

mapping the tumor features on a first radius scale and on a second radius scale to obtain a first relative radius and a second relative radius;

combining the first relative radius and the second relative radius in turn with the radii of the node coordinates relative to the center of the esophagus, to obtain the actual radii of the outward and inward tumor growth contours;

converting the node positions into coordinates according to the actual radii of the outward and inward tumor growth contours and the angles of the mucosal layer, to obtain the control points of the outward and inward tumor growth contours.

This preferred solution constructs the control points of the tumor growth contours. Applying the conversion between Cartesian and polar coordinates to the control points exploits the intuitiveness, conciseness and precision of Cartesian coordinates, makes the computation of the tumor growth shape relatively simple, and thus describes the control points of the tumor growth contours clearly.

As a preferred solution, generating the cubic B-spline curves of the esophagus and the tumor from the control points by spline interpolation specifically comprises:

processing the control points of the three-layer esophageal structure and of the tumor growth contour with cubic B-spline interpolation to obtain parameterized esophageal structure contours and tumor contours;

computing the normal vector at every curve point on the esophageal structure contours and the tumor contours to obtain the unit normal vectors;

forming the cubic B-spline curves of the esophagus and the tumor from the esophageal structure contours, the tumor contours and the unit normal vectors.

Taking the control points as basic elements, this preferred solution connects the discrete points into curves, yielding accurate, convergent and well-fitted structure contours, and by combining them with the normal vectors it obtains complete cubic B-spline curves that accurately represent the shapes of the esophagus and the tumor.
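The following sketch illustrates one possible realization of this step using SciPy's parametric spline routines: a closed cubic B-spline is fitted through the control points and unit normal vectors are derived from the tangents. Using scipy.interpolate for the interpolation is an assumption of the example; the description above does not prescribe a particular library.

```python
# A minimal sketch, assuming SciPy's parametric spline routines are an acceptable stand-in
# for the cubic B-spline interpolation described above.
import numpy as np
from scipy.interpolate import splprep, splev

def cubic_bspline_with_normals(control_points, n_samples=200):
    """control_points: array of shape [n_s, 2]; returns sampled points and unit normals."""
    x, y = control_points[:, 0], control_points[:, 1]
    # per=True closes the contour, k=3 gives a cubic B-spline through the control points.
    tck, _ = splprep([x, y], s=0, k=3, per=True)
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    px, py = splev(u, tck)                      # points on the contour
    dx, dy = splev(u, tck, der=1)               # tangent vectors
    normals = np.stack([dy, -dx], axis=1)       # rotate tangents by -90 degrees
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    return np.stack([px, py], axis=1), normals
```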

As a preferred solution, obtaining the tumor staging result by calculating the relative distances between the cubic B-spline curves specifically comprises:

constructing the Euclidean distances between all points on the first contour, along which the tumor grows outward, and the second contour of the submucosal layer among the cubic B-spline curves, to obtain a distance matrix;

performing, by indexing into the distance matrix, distance calculation and normal vector checks from the first contour to the second contour, obtaining the maximum over the first contour of the distance from each of its points to the nearest point on the second contour, and taking these maximum distances as the set of relative distance values;

obtaining the tumor staging result from the set of relative distance values, according to the value of the relative distance between the outward tumor growth contour and the submucosal layer and the value of the relative distance between the outward tumor growth contour and the adventitia layer.

By constructing the distance matrix, this preferred solution accurately represents the distances between all points on the curves; indexing associates points on different curves, which makes it easy to obtain, for each point on the first contour, the distance to the nearest point on the second contour and its maximum. Moreover, the relative distances in the set can be positive or negative, so comparing their values reveals the degree of tissue invasion between the different wall layers and thus yields the tumor staging result.
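The sketch below illustrates the directed relative-distance computation on sampled contour points. The sign convention, in which a distance is positive when a tumor point lies on the outward side of the layer contour as judged by the layer's unit normals, is an assumption introduced for the example.

```python
# A minimal sketch of the directed relative-distance computation described above.
# The sign convention (positive when the tumor contour lies outside the layer contour,
# determined with the layer's outward unit normals) is an assumption for illustration.
import numpy as np

def relative_distance(tumor_pts, layer_pts, layer_normals):
    """tumor_pts: [N, 2]; layer_pts, layer_normals: [M, 2]. Returns the signed maximum
    of the distance from each tumor point to its nearest point on the layer contour."""
    diff = tumor_pts[:, None, :] - layer_pts[None, :, :]        # [N, M, 2]
    dist = np.linalg.norm(diff, axis=-1)                        # distance matrix [N, M]
    nearest = dist.argmin(axis=1)                               # index of nearest layer point
    # Sign: project the offset onto the layer's outward normal at the nearest point.
    offsets = tumor_pts - layer_pts[nearest]
    signs = np.sign(np.sum(offsets * layer_normals[nearest], axis=1))
    signed = signs * dist[np.arange(len(tumor_pts)), nearest]
    return signed.max()

# Usage sketch: a positive maximum relative distance to the submucosal contour would
# indicate that the tumor has broken through the submucosa (e.g. T1b or deeper).
```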

As a preferred solution, extracting features from the original endoscopic ultrasound image with the preset model to obtain the full-size features specifically comprises:

identifying and cropping the valid-region bounding box of the original endoscopic ultrasound image with the valid-region detector of the preset model to obtain a valid endoscopic ultrasound image;

performing feature extraction and upsampling on the valid endoscopic ultrasound image with the feature encoder and the feature decoder of the preset model to obtain the full-size features.

By identifying and cropping the valid-region bounding box, this preferred solution removes redundant image content, preserving the data accuracy of the endoscopic ultrasound image while reducing the complexity of data processing. Through feature extraction and upsampling, tumor-related features are extracted from the valid endoscopic ultrasound image, so that characteristics such as tumor size, shape and density are contained in the full-size features, which better describe the details of the original endoscopic ultrasound image.

As a preferred solution, the preset model is a deep learning model composed of different components and modules, and the method further comprises:

obtaining a DICE loss function by measuring the consistency between the segmentation mask predicted by the preset model and the ground-truth segmentation mask;

obtaining a polygon loss function by measuring the error between the nodes predicted by the preset model and the actual nodes;

obtaining a classification loss function by quantifying the error between the T-stage label predicted by the preset model and the actual T-stage label;

training the preset model with a preset training set and the masks of the tumor and of the esophageal wall according to the DICE loss function, the polygon loss function and the classification loss function, to obtain an optimized preset model.

In this preferred solution, the DICE loss function improves the accuracy of identifying and localizing specific targets, the polygon loss function improves the prediction accuracy of the esophageal and tumor contour nodes, and the classification loss function improves the prediction accuracy of the tumor staging result. Loss functions established along these different dimensions therefore markedly improve the optimization of the model and enhance the prediction accuracy of the preset model.
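A minimal sketch of such a combined objective is shown below. The equal weighting of the three terms and the smooth-L1 form of the polygon loss are assumptions made for the example; the description above only names the three loss functions.

```python
# A minimal sketch of a combined objective with the three loss terms described above.
# The equal weighting and the smooth-L1 polygon loss are assumptions for illustration.
import torch
import torch.nn.functional as F

def total_loss(pred_mask, gt_mask, pred_nodes, gt_nodes, stage_logits, stage_label,
               eps=1e-6):
    # DICE loss: overlap between predicted and ground-truth segmentation masks.
    inter = (pred_mask * gt_mask).sum(dim=(1, 2, 3))
    union = pred_mask.sum(dim=(1, 2, 3)) + gt_mask.sum(dim=(1, 2, 3))
    dice_loss = 1.0 - (2.0 * inter + eps) / (union + eps)

    # Polygon loss: error between predicted and ground-truth contour nodes.
    poly_loss = F.smooth_l1_loss(pred_nodes, gt_nodes)

    # Classification loss: error between predicted and actual T-stage labels.
    cls_loss = F.cross_entropy(stage_logits, stage_label)

    return dice_loss.mean() + poly_loss + cls_loss
```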

As a preferred solution, the training set is specifically obtained as follows:

obtaining training data from esophageal cancer tumor cases;

converting the polygonal contour points of the mucosal layer, the submucosal layer and the adventitia layer in the training data to obtain first transition data;

generating, from the first transition data, second transition data comprising the contour point data, the segmentation bitmap masks and the staging labels of the endoscopic ultrasound images;

performing data augmentation on the second transition data to obtain the training set.
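The following sketch illustrates one plausible way to turn the annotated layer contour points of a case into a segmentation bitmap mask and to apply a simple augmentation. The use of OpenCV's fillPoly and the horizontal-flip augmentation are assumptions of the example; the actual preparation pipeline is not fixed here.

```python
# A minimal sketch of preparing one training sample: rasterizing annotated contour points
# into a bitmap mask and applying a simple augmentation. fillPoly is used as a plausible
# rasterizer; the patent does not specify the tooling.
import cv2
import numpy as np

def make_sample(image, layer_contours, stage_label, flip=False):
    """layer_contours: list of [n_i, 2] arrays (mucosa, submucosa, adventitia)."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for label, contour in enumerate(layer_contours, start=1):
        cv2.fillPoly(mask, [contour.astype(np.int32)], color=label)   # one label per layer
    if flip:  # example augmentation: horizontal flip of image, mask and contour points
        image, mask = image[:, ::-1], mask[:, ::-1]
        layer_contours = [np.stack([image.shape[1] - 1 - c[:, 0], c[:, 1]], axis=1)
                          for c in layer_contours]
    return image, mask, layer_contours, stage_label
```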

The present invention further provides a device for predicting esophageal structure, tumor contour and stage, comprising a data module, a feature module, a node module, a curve module, a staging module and an integration module;

wherein the data module is used to obtain an original endoscopic ultrasound image;

the feature module is used to extract features from the original endoscopic ultrasound image with a preset model to obtain full-size features;

the node module is used to perform structure splitting and node reconstruction on the full-size features with the preset model to obtain control points of the three-layer esophageal structure and of the tumor growth contour;

the curve module is used to generate cubic B-spline curves of the esophagus and the tumor from the control points by spline interpolation;

the staging module is used to obtain a tumor staging result by calculating the relative distances between the cubic B-spline curves;

the integration module is used to form the prediction result of the esophageal structure, the tumor contours and the tumor stage from the cubic B-spline curves and the tumor staging result;

wherein the preset model is a deep learning model composed of different components and modules.

As a preferred solution, the node module comprises a global unit, a transformation unit and a mapping unit;

wherein the global unit is used to perform dimension transformation and splitting on the full-size features with the structure detection module of the preset model to obtain global feature tokens;

the transformation unit is used to combine the global feature tokens with the preset esophagus-and-tumor control point features and apply coordinate transformations to obtain the control points of the three-layer esophageal structure;

the mapping unit is used to perform feature mapping of the preset tumor features on different radii to obtain the control points of the outward and inward tumor growth contours.

As a preferred solution, the global unit comprises a dimension subunit, a construction subunit, a concatenation subunit and a splitting subunit;

wherein the dimension subunit is used to perform a dimension transformation on the full-size features with the structure detection module of the preset model to obtain a first token;

the construction subunit is used to construct a learnable second token according to the number of esophageal structure control points to be predicted;

the concatenation subunit is used to concatenate the first token and the second token to obtain a third token;

the splitting subunit is used to input the third token into the Transformer module of the preset model and split the resulting tokens to obtain the global feature tokens.

As a preferred solution, the transformation unit comprises a feature subunit, a coordinate subunit, a radius subunit and a control subunit;

wherein the feature subunit is used to compute the esophagus-and-tumor control point features from the global feature tokens with a first preset formula;

the coordinate subunit is used to map the esophagus-and-tumor control point features with a linear transformation layer and convert the mapping result into polar coordinates to obtain the node coordinates of the mucosal layer contour;

the radius subunit is used to compute the relative radii of the submucosal layer and the adventitia layer from the control point features with the linear transformation layer, and to compute the actual radii of the submucosal layer and the adventitia layer from the node coordinates and the relative radii;

the control subunit is used to convert the control points corresponding to the control point features into Cartesian coordinates according to the actual radii and the angles of the node coordinates relative to the center of the esophagus, to obtain the control points of the three-layer esophageal structure.

As a preferred solution, the mapping unit comprises a definition subunit, a scale subunit, a growth subunit and a conversion subunit;

wherein the definition subunit is used to define the tumor features;

the scale subunit is used to map the tumor features on a first radius scale and on a second radius scale to obtain a first relative radius and a second relative radius;

the growth subunit is used to combine the first relative radius and the second relative radius in turn with the radii of the node coordinates relative to the center of the esophagus, to obtain the actual radii of the outward and inward tumor growth contours;

the conversion subunit is used to convert the node positions into coordinates according to the actual radii of the outward and inward tumor growth contours and the angles of the mucosal layer, to obtain the control points of the outward and inward tumor growth contours.

As a preferred solution, the curve module comprises a parameter unit, a vector unit and an integration unit;

wherein the parameter unit is used to process the control points of the three-layer esophageal structure and of the tumor growth contour with cubic B-spline interpolation to obtain parameterized esophageal structure contours and tumor contours;

the vector unit is used to compute the normal vector at every curve point on the esophageal structure contours and the tumor contours to obtain the unit normal vectors;

the integration unit is used to form the cubic B-spline curves of the esophagus and the tumor from the esophageal structure contours, the tumor contours and the unit normal vectors.

As a preferred solution, the staging module comprises a distance unit, an index unit and a comparison unit;

wherein the distance unit is used to construct the Euclidean distances between all points on the first contour, along which the tumor grows outward, and the second contour of the submucosal layer among the cubic B-spline curves, to obtain a distance matrix;

the index unit is used to perform, by indexing into the distance matrix, distance calculation and normal vector checks from the first contour to the second contour, obtain the maximum over the first contour of the distance from each of its points to the nearest point on the second contour, and take these maximum distances as the set of relative distance values;

the comparison unit is used to obtain the tumor staging result from the set of relative distance values, according to the value of the relative distance between the outward tumor growth contour and the submucosal layer and the value of the relative distance between the outward tumor growth contour and the adventitia layer.

As a preferred solution, the feature module comprises a recognition unit and an extraction unit;

wherein the recognition unit is used to identify and crop the valid-region bounding box of the original endoscopic ultrasound image with the valid-region detector of the preset model to obtain a valid endoscopic ultrasound image;

the extraction unit is used to perform feature extraction and upsampling on the valid endoscopic ultrasound image with the feature encoder and the feature decoder of the preset model to obtain the full-size features.

As a preferred solution, the integration module further comprises a DICE loss unit, a polygon loss unit, a classification loss unit and a training unit;

wherein the DICE loss unit is used to obtain a DICE loss function by measuring the consistency between the segmentation mask predicted by the preset model and the ground-truth segmentation mask;

the polygon loss unit is used to obtain a polygon loss function by measuring the error between the nodes predicted by the preset model and the actual nodes;

the classification loss unit is used to obtain a classification loss function by quantifying the error between the T-stage label predicted by the preset model and the actual T-stage label;

the training unit is used to train the preset model with a preset training set and the masks of the tumor and of the esophageal wall according to the DICE loss function, the polygon loss function and the classification loss function, to obtain an optimized preset model.

As a preferred solution, the training set is specifically obtained as follows:

obtaining training data from esophageal cancer tumor cases;

converting the polygonal contour points of the mucosal layer, the submucosal layer and the adventitia layer in the training data to obtain first transition data;

generating, from the first transition data, second transition data comprising the contour point data, the segmentation bitmap masks and the staging labels of the endoscopic ultrasound images;

performing data augmentation on the second transition data to obtain the training set.

The present invention further provides a storage medium storing a computer program which, when called and executed by a computer, implements the method for predicting esophageal structure, tumor contour and stage described above.

Brief Description of the Drawings

FIG. 1 is a schematic flow chart of a method for predicting esophageal structure, tumor contour and stage provided by an embodiment of the present invention;

FIG. 2 is an anatomical diagram of the esophagus provided by an embodiment of the present invention;

FIG. 3 is a diagram of the T staging of esophageal cancer provided by an embodiment of the present invention;

FIG. 4 is a schematic diagram of unit normal vectors provided by an embodiment of the present invention;

FIG. 5 is a schematic diagram of the prediction flow provided by an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of a device for predicting esophageal structure, tumor contour and stage provided by an embodiment of the present invention.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present application.

In the description of the present application, it should be understood that the terms "first", "second" and "third" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first", "second" or "third" may explicitly or implicitly include one or more of that feature. In the description of the present application, unless otherwise specified, "several" means two or more.

In the description of the present application, it should be noted that "mask" denotes a masking image; YOLO is the abbreviation of You Only Look Once and denotes an object detection algorithm based on deep neural networks; Mask R-CNN is the abbreviation of Mask Region-based Convolutional Neural Network and denotes an instance segmentation algorithm.

The method for predicting esophageal structure, tumor contour and stage described in the embodiments of the present invention is mainly applied to situations in which the esophageal structure and the tumor contours, as well as the tumor stage, need to be predicted from endoscopic ultrasound images.

Embodiment 1:

Referring to FIG. 1, an embodiment of the present invention provides a method for predicting esophageal structure, tumor contour and stage, comprising steps S1 to S6. The specific implementation steps are as follows:

S1. Obtain an original endoscopic ultrasound image.

Step S1 of this embodiment of the present invention is specifically:

obtaining an original endoscopic ultrasound image I, where the original endoscopic ultrasound image I is an image obtained by examining the digestive tract with a combination of endoscopy and ultrasound.

To apply this embodiment, refer to FIG. 2 and FIG. 3. FIG. 2 is an anatomical diagram of the esophagus provided by this embodiment and shows the esophageal structure observable under endoscopic ultrasound, which can be simplified into a three-layer luminal structure consisting of the mucosal layer, the submucosal layer and the adventitia layer, nested layer by layer; tumors usually originate in the esophageal mucosa and can grow both towards the lumen and away from it.

FIG. 3 is a diagram of the T staging of esophageal cancer provided by this embodiment and shows how the T stages are defined. The T stage of a tumor is determined mainly by the extent of its extraluminal growth: a T1a tumor lies between the mucosal layer and the submucosal layer, a T1b tumor has broken through the submucosa and grown outward, T2 indicates that the tumor has extended further into the deep esophageal wall, T3 indicates that the tumor penetrates the adventitia layer, and T4 indicates that the tumor has penetrated the adventitia and invaded external organs. With endoscopic ultrasound, the T1a, T1b and T2 stages can be distinguished, as well as whether the tumor is at stage T3 or beyond.

S2. Use the preset model to extract features from the original endoscopic ultrasound image to obtain full-size features.

In step S2 of this embodiment of the present invention, S2 comprises S2.1 and S2.2, where S2.1 is the process of obtaining the valid endoscopic ultrasound image I_valid and S2.2 is the process of constructing the full-size features, specifically:

S2.1. Input the original endoscopic ultrasound image I into the valid-region detector of the preset model to identify the bounding box of the valid region;

crop the image to the bounding box of the valid region to obtain the valid endoscopic ultrasound image I_valid.

The valid-region detector can be implemented with an object detection algorithm such as YOLO or Mask R-CNN.

In this embodiment, S2.1 removes redundant image content by identifying and cropping the valid-region bounding box, preserving the data accuracy of the endoscopic ultrasound image while reducing the complexity of data processing. Through feature extraction and upsampling, tumor-related features are extracted from the valid endoscopic ultrasound image, so that characteristics such as tumor size, shape and density are contained in the full-size features, which better describe the details of the original endoscopic ultrasound image.
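A minimal sketch of the cropping in S2.1 is given below; the detector call detect_valid_region is a hypothetical stand-in for a YOLO or Mask R-CNN model and is not an interface defined by this embodiment.

```python
# A minimal sketch of S2.1, assuming the valid-region detector returns a bounding box.
# detect_valid_region is a hypothetical stand-in for a YOLO or Mask R-CNN detector.
import numpy as np

def crop_valid_region(image: np.ndarray, detect_valid_region) -> np.ndarray:
    # The detector is assumed to return the bounding box as (x_min, y_min, x_max, y_max).
    x_min, y_min, x_max, y_max = detect_valid_region(image)
    i_valid = image[y_min:y_max, x_min:x_max]     # crop the valid ultrasound region
    return i_valid
```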

S2.2. First input the valid endoscopic ultrasound image I_valid into the feature encoder of the preset model for feature extraction; the input matrix is [B, C, H, W], where one may take C = 3 and H = W = 256.

Then use the feature decoder of the preset model to upsample the extracted features, obtaining full-size features of shape [B, h, H, W], where one may take h = 64.

The feature encoder and the feature decoder form a backbone network with a U-Net structure; the feature encoder may use a convolutional neural network (CNN) or a Transformer as the feature extractor, and the feature decoder uses the U-Net structure.

It should be noted that the preset model is a deep learning model composed of the valid-region detector, the feature encoder, the feature decoder and the structure detection module.
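The following PyTorch sketch shows a toy encoder-decoder backbone that reproduces the shapes described in S2.2, mapping a [B, 3, 256, 256] input to [B, 64, 256, 256] full-size features. It only illustrates the tensor shapes; the actual U-Net backbone of this embodiment is not reproduced here.

```python
# A minimal sketch of S2.2: a toy encoder-decoder producing full-size features.
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    def __init__(self, h=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, h, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(h, 2 * h, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(2 * h, h, 2, stride=2)       # upsample back to full size
        self.out = nn.Conv2d(2 * h, h, 3, padding=1)               # fuse the skip connection

    def forward(self, x):                        # x: [B, 3, 256, 256]
        e1 = self.enc1(x)                        # [B, 64, 256, 256]
        e2 = self.down(e1)                       # [B, 128, 128, 128]
        d1 = self.up(e2)                         # [B, 64, 256, 256]
        return self.out(torch.cat([e1, d1], dim=1))   # full-size features [B, 64, 256, 256]

features = TinyBackbone()(torch.randn(2, 3, 256, 256))   # torch.Size([2, 64, 256, 256])
```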

S3、使用预设模型对全尺寸特征进行结构拆分和节点重构,得到三层食管结构和肿瘤生长轮廓的控制点。S3. Use the preset model to perform structural decomposition and node reconstruction on the full-size features to obtain the control points of the three-layer esophageal structure and tumor growth contour.

在本发明实施例步骤S3中,S3包括S3.1~S3.4;其中,S3.1是构建全局特征令牌的过程,S3.2是定义食管与肿瘤控制点特征的过程,S3.3是构建三层食管结构的控制点的过程,S3.4是构建肿瘤向外和向内生长轮廓线的控制点的过程,具体为:In step S3 of the embodiment of the present invention, S3 includes S3.1 to S3.4; wherein S3.1 is a process of constructing a global feature token, S3.2 is a process of defining the features of the esophagus and the tumor control points, S3.3 is a process of constructing the control points of the three-layer esophagus structure, and S3.4 is a process of constructing the control points of the outward and inward growth contours of the tumor, specifically:

S3.1、使用预设模型的结构检测模块,把[B,h,H,W]的全尺寸特征转变成维度为[B,HW,h]的得到第一令牌/>并为第一令牌/>嵌入对应坐标位置的位置嵌入(positional embedding);其中,B代表批次大小,h表示特征维度,H和W表示源矩阵的高和宽;S3.1. Use the structure detection module of the preset model to transform the full-size features of [B, h, H, W] into features of dimensions [B, HW, h]. Get the first token /> And for the first token/> Embed the corresponding coordinate position into positional embedding, where B represents the batch size, h represents the feature dimension, and H and W represent the height and width of the source matrix.

Construct a learnable second token according to the number of esophageal structure control points to be predicted; the second token has dimension [B, ns, h], where ns is the number of esophageal structure control points to be predicted.

对第一令牌和第二令牌进行拼接,得到第三令牌token{0};其中,第三令牌token{0}的维度为[B,ns+HW,h];The first token and the second token are concatenated to obtain a third token token {0} ; wherein the dimension of the third token token {0} is [B, ns +HW, h];

The third token token{0} is fed into the L stacked Transformer layers of the preset model to obtain token{L}, and token{L} is split again into the global feature tokens: an image-feature part of dimension [B, HW, h] and a control-point part of dimension [B, ns, h], meaning that the model outputs ns esophageal/tumor control point features.
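A minimal sketch of this token pipeline is given below; the spatial size, number of Transformer layers and heads are placeholder values, and the module wiring is an assumption consistent with the dimensions stated above.

```python
# Sketch of S3.1: flatten the feature map into HW image tokens, add a positional
# embedding, prepend n_s learnable control-point queries, run L Transformer layers,
# and split the result back into the two token groups.
import torch
import torch.nn as nn

class StructureTokenizer(nn.Module):
    def __init__(self, h=64, H=32, W=32, n_s=16, n_layers=4, n_heads=8):
        # Small spatial size for the sketch; the text uses H = W = 256.
        super().__init__()
        self.pos_embed = nn.Parameter(torch.zeros(1, H * W, h))       # positional embedding
        self.query_tokens = nn.Parameter(torch.zeros(1, n_s, h))      # learnable second token
        layer = nn.TransformerEncoderLayer(d_model=h, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.n_s = n_s

    def forward(self, feats):                             # feats: [B, h, H, W]
        B = feats.shape[0]
        img_tokens = feats.flatten(2).transpose(1, 2) + self.pos_embed   # [B, HW, h]
        tokens0 = torch.cat([self.query_tokens.expand(B, -1, -1), img_tokens], dim=1)
        tokensL = self.encoder(tokens0)                   # [B, n_s + HW, h]
        tok_struct = tokensL[:, :self.n_s]                # control-point tokens  [B, n_s, h]
        tok_feat = tokensL[:, self.n_s:]                  # image-feature tokens  [B, HW, h]
        return tok_struct, tok_feat
```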

本实施例S3.1对全尺寸特征进行维度变换、令牌构造、令牌拼接和令牌拆分的步骤,就是对全尺寸特征中的特征信息进行整理和收集的过程,使所得到的全局特征令牌在拥有全局特征的同时,具有良好的不变性和表示直观的特点,有助于为后期的食管结构以及肿瘤预测提供坚实的数据基础。The steps of dimension transformation, token construction, token splicing and token splitting of the full-size features in S3.1 of this embodiment are the process of organizing and collecting the feature information in the full-size features, so that the obtained global feature tokens have good invariance and intuitive representation while having global features, which helps to provide a solid data foundation for the later esophageal structure and tumor prediction.

并且,本实施例一引入了基于Transformer的多层多边形生成技术,以显式方式表示符合解剖学结构的器官,通过该表示方法,食管壁和肿瘤区域的多边形曲线更加符合解剖结构,可以有效提高分割结果的结构准确性和解释性,为医护人员提供更可靠的图像数据解读基础。In addition, this embodiment 1 introduces a Transformer-based multi-layer polygon generation technology to explicitly represent organs that conform to the anatomical structure. Through this representation method, the polygonal curves of the esophageal wall and tumor area are more consistent with the anatomical structure, which can effectively improve the structural accuracy and interpretability of the segmentation results, and provide medical staff with a more reliable basis for interpreting image data.

S3.2. Let Qs be derived from the control-point tokens and Kf, Vf from the image-feature tokens; using the first preset formula, the esophageal and tumor control point features Fstructure are computed from the global feature tokens, where Fstructure has dimension [B, ns, h];

其中,食管与肿瘤控制点特征为:Among them, the characteristics of esophagus and tumor control points are:

Fstructure=softmax(Qs×Kf T)×Vf F structure = softmax(Q s ×K f T )×V f
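The following sketch implements this attention step; the linear projections producing Qs, Kf and Vf are assumptions, since the first preset formula is not reproduced in this text.

```python
# Cross-attention sketch for S3.2: F_structure = softmax(Q_s x K_f^T) x V_f.
import torch
import torch.nn as nn

class ControlPointAttention(nn.Module):
    def __init__(self, h=64):
        super().__init__()
        self.q = nn.Linear(h, h)   # query from the n_s control-point tokens (assumption)
        self.k = nn.Linear(h, h)   # key from the HW image-feature tokens (assumption)
        self.v = nn.Linear(h, h)   # value from the HW image-feature tokens (assumption)

    def forward(self, tok_struct, tok_feat):
        Qs = self.q(tok_struct)                                   # [B, n_s, h]
        Kf = self.k(tok_feat)                                     # [B, HW, h]
        Vf = self.v(tok_feat)                                     # [B, HW, h]
        attn = torch.softmax(Qs @ Kf.transpose(1, 2), dim=-1)     # [B, n_s, HW]
        return attn @ Vf                                          # F_structure: [B, n_s, h]
```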

S3.3、对于食管管腔的粘膜层、黏膜下层和外膜层,需要在各自所在层面预测若干节点(控制点),这些节点将决定三次B-样条曲线的形态,从而描述出食管结构的轮廓线,因此,对于每一个食管控制点,都需要预测其粘膜层的节点坐标(以笛卡尔坐标表示)、黏膜下层和外膜层的相对半径(膜层距离之间的半径差)。具体步骤如下:S3.3. For the mucosal layer, submucosal layer and adventitia layer of the esophageal lumen, it is necessary to predict several nodes (control points) at their respective levels. These nodes will determine the shape of the cubic B-spline curve, thereby describing the contour of the esophageal structure. Therefore, for each esophageal control point, it is necessary to predict the node coordinates of the mucosal layer (expressed in Cartesian coordinates), the relative radius of the submucosal layer and the adventitia layer (the radius difference between the membrane layer distances). The specific steps are as follows:

假设当前食管结构的特征为Fstructure,维度为[B,ns,h],其中B为批量大小,ns为需要预测的食管结构控制点数,h为特征维度;Assume that the feature of the current esophageal structure is F structure , and the dimension is [B, ns , h], where B is the batch size, ns is the number of esophageal structure control points to be predicted, and h is the feature dimension;

使用线性变换层将ns个控制节点的食管与肿瘤控制点特征Fstructure映射到二维(2D),即Pmucosa=Wp*Fstructure;并将映射结果转化为极坐标的形式,得到粘膜层轮廓线的节点坐标(rmucosa,θmucosa);其中,Wp为维度为[2,h]的权重矩阵,Pmucosa对应的维度为[B,ns,2],代表ns个粘膜层轮廓线的控制点的2D坐标;并且rmucosa和θmucosa分别表示节点相对于食管中心的半径和角度;A linear transformation layer is used to map the esophageal and tumor control point features F structure of the n s control nodes to two dimensions (2D), that is, P mucosa =W p *F structure ; and the mapping result is converted into polar coordinates to obtain the node coordinates of the mucosal layer contour line (r mucosamucosa ); wherein W p is a weight matrix with a dimension of [2, h], and the dimension corresponding to P mucosa is [B, n s , 2], representing the 2D coordinates of the control points of the n s mucosal layer contour line; and r mucosa and θ mucosa represent the radius and angle of the node relative to the center of the esophagus, respectively;

根据控制点特征,使用两个线性变换层分别计算出黏膜下层的相对半径dRsubmucosa和外膜层的相对半径dRout,即dRsubmucosa=relu(WdR1*Fstructure)和dRout=relu(WdR2*Fstructure),并使用relu激活函数用于确保预测半径为正数,其中,WdR1和WdR2为维度为[1,h]的权重矩阵,dRsubmucosa和dRout对应的维度为[B,ns,1],表示这ns个控制节点对应的黏膜下层和外膜层的相对半径;According to the control point features, two linear transformation layers are used to calculate the relative radius of the submucosal layer dR submucosa and the relative radius of the adventitia dR out , that is, dR submucosa = relu(W dR1 *F structure ) and dR out = relu(W dR2 *F structure ), and the relu activation function is used to ensure that the predicted radius is a positive number, where W dR1 and W dR2 are weight matrices with dimensions [1, h], and the dimensions corresponding to dR submucosa and dR out are [B, ns , 1], which represent the relative radii of the submucosal layer and the adventitia corresponding to the n s control nodes;

并根据节点坐标和相对半径计算得到黏膜下层的实际半径Rsubmucosa和外膜层的实际半径Rout,即Rsubmucosa=rmucosa+dRsubmucosa和Rout=Rsubmucosa+dRoutThe actual radius R submucosa of the submucosal layer and the actual radius R out of the adventitia layer are calculated according to the node coordinates and the relative radius, that is, R submucosa = r mucosa + dR submucosa and R out = R submucosa + dR out ;

根据实际半径和节点坐标相对于食管中心的角度,将控制点特征对应的控制点从极坐标转化为笛卡尔坐标表示的形式,得到三层食管结构的控制点(Pmucosa、Psubmucosa和Pout)。According to the actual radius and the angle of the node coordinates relative to the esophageal center, the control points corresponding to the control point features are converted from polar coordinates to Cartesian coordinates to obtain the control points of the three-layer esophageal structure (P mucosa , P submucosa and P out ).
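A minimal sketch of this control-point head is shown below, assuming the esophageal center is supplied externally (for example the image center); layer and variable names are illustrative.

```python
# Sketch of S3.3: map control-point features to 2D mucosa coordinates, convert to
# polar form about the esophageal center, predict non-negative radius offsets for the
# submucosa and adventitia, and convert all three layers back to Cartesian coordinates.
import torch
import torch.nn as nn

class LayerHead(nn.Module):
    def __init__(self, h=64):
        super().__init__()
        self.to_xy = nn.Linear(h, 2)      # W_p: control-point feature -> 2D mucosa point
        self.to_dr1 = nn.Linear(h, 1)     # W_dR1: relative radius of the submucosa
        self.to_dr2 = nn.Linear(h, 1)     # W_dR2: relative radius of the adventitia

    def forward(self, f_structure, center):            # f_structure: [B, n_s, h], center: [B, 1, 2]
        p_mucosa = self.to_xy(f_structure)              # [B, n_s, 2]
        rel = p_mucosa - center
        r_mucosa = rel.norm(dim=-1, keepdim=True)       # radius about the center
        theta = torch.atan2(rel[..., 1:2], rel[..., 0:1])
        d_sub = torch.relu(self.to_dr1(f_structure))    # non-negative radius offsets
        d_out = torch.relu(self.to_dr2(f_structure))
        r_sub = r_mucosa + d_sub                        # actual submucosa radius
        r_out = r_sub + d_out                           # actual adventitia radius

        def to_xy(r):                                   # polar -> Cartesian about the center
            return torch.cat([r * torch.cos(theta), r * torch.sin(theta)], dim=-1) + center

        return p_mucosa, to_xy(r_sub), to_xy(r_out)     # P_mucosa, P_submucosa, P_out
```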

本实施例所计算出的黏膜下层和外膜层的相对半径具有一定的限制性,无法准确地表示黏膜不同层次之间的距离,因此在相对半径的基础上,结合节点坐标计算得到实际半径,从而可以准确表示粘膜下层和外膜层相对于粘膜层的轮廓线位移程度。The relative radius of the submucosal layer and the adventitia layer calculated in this embodiment has certain limitations and cannot accurately represent the distance between different layers of the mucosa. Therefore, on the basis of the relative radius, the actual radius is calculated in combination with the node coordinates, so that the degree of displacement of the contour line of the submucosal layer and the adventitia layer relative to the mucosal layer can be accurately represented.

S3.4、为了模拟肿瘤的生长情况,需要预测肿瘤在食管粘膜层向外和向内的生长情况,即需要得到肿瘤在各方向上的轮廓线,包括S3.41~S3.44,具体为:S3.4. In order to simulate the growth of the tumor, it is necessary to predict the outward and inward growth of the tumor in the esophageal mucosal layer, that is, it is necessary to obtain the contour lines of the tumor in all directions, including S3.41 to S3.44, specifically:

S3.41、定义肿瘤特征Ftumor=FstructureS3.41. Define tumor characteristics F tumor = F structure ;

S3.42. Predict the radius difference of the tumor's outward growth contour relative to the mucosal layer, i.e. the relative radius of outward growth, specifically:

A linear transformation layer maps the features of Ftumor to the relative radius (the first relative radius), and a relu activation function is applied to guarantee that this relative radius is non-negative; the corresponding weight matrix has dimension [1, h];

Using the first relative radius and the radius rmucosa of the node coordinates relative to the esophageal center, the actual radius of the tumor's outward growth contour is obtained by adding the two; the angle corresponding to this actual radius is equal to the mucosal-layer angle θmucosa.

S3.43. To ensure that the tumor's inward growth contour always stays inside the mucosal layer, the relative radius of the tumor on the inside must be calculated, that is:

A linear transformation layer maps the features of Ftumor to the base radius (the second relative radius), and a relu activation function is applied to guarantee that this relative radius is non-negative; the corresponding weight matrix has dimension [1, h];

Using the second relative radius and the radius rmucosa of the node coordinates relative to the esophageal center, the actual radius of the tumor's inward growth contour is calculated, ensuring that this actual radius is greater than 0; the angle corresponding to it is equal to the mucosal-layer angle θmucosa.

S3.44. According to the actual radii of the tumor's outward and inward growth contours and the mucosal-layer angles, the node positions are converted back to Cartesian coordinates, yielding the control points of the tumor's outward and inward growth contours.
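The sketch below illustrates the outward/inward tumor-contour head under the assumption that the inward radius is obtained by subtracting the second relative radius from rmucosa and clamping it above zero; the exact formula is not reproduced in this text.

```python
# Sketch of S3.4: predict a non-negative outward offset (added to the mucosa radius)
# and a non-negative inward offset (subtracted from it, clamped above zero),
# reusing the mucosal angles theta.
import torch
import torch.nn as nn

class TumorContourHead(nn.Module):
    def __init__(self, h=64):
        super().__init__()
        self.to_dr_out = nn.Linear(h, 1)   # first relative radius (outward growth)
        self.to_dr_in = nn.Linear(h, 1)    # second relative radius (inward growth)

    def forward(self, f_tumor, r_mucosa, theta, center):
        d_out = torch.relu(self.to_dr_out(f_tumor))        # [B, n_s, 1]
        d_in = torch.relu(self.to_dr_in(f_tumor))
        r_out = r_mucosa + d_out                            # outward contour radius
        r_in = torch.clamp(r_mucosa - d_in, min=1e-3)       # inward contour radius kept > 0 (assumed form)

        def to_xy(r):
            return torch.cat([r * torch.cos(theta), r * torch.sin(theta)], dim=-1) + center

        return to_xy(r_out), to_xy(r_in)    # control points of the outward / inward contours
```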

本实施例S3.4是肿瘤生长轮廓线的控制点的构造过程,将笛卡尔坐标和极坐标之间的转换关系引用到控制点的处理上,能够利用笛卡尔坐标具有表达直观性、描述简洁性和描述精确性的优势,使肿瘤生长形状的计算方式变得相对简便,进而能够清晰地描述肿瘤生长轮廓线的控制点。This embodiment S3.4 is the construction process of the control points of the tumor growth contour line. The conversion relationship between Cartesian coordinates and polar coordinates is referenced in the processing of control points. The advantages of Cartesian coordinates in terms of intuitive expression, concise description and accurate description can be utilized to make the calculation method of the tumor growth shape relatively simple, thereby being able to clearly describe the control points of the tumor growth contour line.

本实施例S3从整体来看,通过在肿瘤向外生长和向内生长这两种维度上进行轮廓线的控制点的预测,得到肿瘤向外和向内生长轮廓线的控制点,能够充分获取肿瘤生长的特征,有利于对食管疾病状态的进行有效评估。From an overall perspective, in this embodiment S3, by predicting the control points of the contour lines in the two dimensions of tumor outward growth and inward growth, the control points of the tumor outward and inward growth contour lines are obtained, which can fully obtain the characteristics of tumor growth and is conducive to the effective evaluation of the esophageal disease status.

S4、通过样条插值的方式,根据控制点生成食管和肿瘤的三次B-样条曲线。S4. Generate cubic B-spline curves of the esophagus and the tumor according to the control points by means of spline interpolation.

在本发明实施例步骤S4中,S4包括S4.1~S4.3;其中,S4.1是构建食管结构轮廓线的过程,S4.2是构建肿瘤轮廓线的过程,S4.3是构建三次B-样条曲线的过程,具体为:In step S4 of the embodiment of the present invention, S4 includes S4.1 to S4.3; wherein S4.1 is a process of constructing the esophageal structure contour line, S4.2 is a process of constructing the tumor contour line, and S4.3 is a process of constructing a cubic B-spline curve, specifically:

S4.1、对于食管结构的每一层(包括粘膜层、黏膜下层和外膜层),都有对应的控制点坐标即三层食管结构的控制点和肿瘤向外和向内生长轮廓线的控制点;其中Pi=(xi,yi),(i=1,2,...,ns)表示第i个控制点的笛卡尔坐标;S4.1. For each layer of the esophageal structure (including the mucosal layer, submucosal layer, and adventitia layer), there are corresponding control point coordinates. That is, the control points of the three-layer esophageal structure and the control points of the outward and inward growth contours of the tumor; wherein Pi = ( xi , yi ), (i = 1, 2, ..., ns ) represents the Cartesian coordinates of the ith control point;

使用三次B-样条插值法对三层食管结构的控制点进行处理,得到参数化的食管结构轮廓线S(t)=[x(t),y(t)];函数S(t)表示由控制点生成的连续的轮廓线,假如t在[0,1]中均匀采样s个点,则每条三次B-样条曲线的节点数为nspline=ns*s个;由此得到三次B-样条曲线Smucosa、Ssubmucosa和Sout,即食管结构轮廓线;The control points of the three-layer esophageal structure are processed using the cubic B-spline interpolation method to obtain the parameterized esophageal structure contour line S(t) = [x(t), y(t)]; the function S(t) represents the continuous contour line generated by the control points. If t uniformly samples s points in [0,1], the number of nodes of each cubic B-spline curve is n spline = n s *s; thus, the cubic B-spline curves S mucosa , S submucosa and S out are obtained, namely the esophageal structure contour line;

其中,三次B-样条曲线是一种由控制点集定义的分段函数。设Ai(i=1,2,3,4)为三次B-样条的四个基函数,用于插值和拟合曲线,则曲线S(t)的数学定义为:The cubic B-spline curve is a piecewise function defined by a set of control points. Let Ai (i = 1, 2, 3, 4) be the four basis functions of the cubic B-spline, used for interpolation and fitting curves, then the mathematical definition of the curve S(t) is:

S(t) = A1P1 + A2P2 + A3P3 + A4P4

For the parameters A1, A2, A3 and A4, the standard uniform cubic B-spline basis functions of the parameter t are used: A1 = (1-t)³/6, A2 = (3t³-6t²+4)/6, A3 = (-3t³+3t²+3t+1)/6, A4 = t³/6.

其中,S(t)为参数t∈[0,1]在曲线上的点,P1、P2、P3和P4为三次B-样条的控制点。Wherein, S(t) is a point on the curve with parameter t∈[0,1], and P 1 , P 2 , P 3 and P 4 are control points of the cubic B-spline.
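A small numerical sketch of this interpolation, assuming the standard uniform cubic B-spline basis and a closed contour, is given below.

```python
# Sketch of S4.1: evaluate a closed uniform cubic B-spline from n_s control points,
# sampling s parameter values per segment, so the curve has n_spline = n_s * s nodes.
import numpy as np

def cubic_bspline_closed(control_points: np.ndarray, s: int = 8) -> np.ndarray:
    """control_points: [n_s, 2] Cartesian control points; returns [n_s * s, 2] curve points."""
    n = len(control_points)
    t = np.linspace(0.0, 1.0, s, endpoint=False)             # s samples per segment
    # Standard uniform cubic B-spline basis functions A1..A4 of the local parameter t.
    A = np.stack([(1 - t) ** 3,
                  3 * t ** 3 - 6 * t ** 2 + 4,
                  -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                  t ** 3], axis=1) / 6.0                       # [s, 4]
    curve = []
    for i in range(n):                                         # one segment per control point (closed curve)
        P = control_points[[(i - 1) % n, i, (i + 1) % n, (i + 2) % n]]   # [4, 2] local control points
        curve.append(A @ P)                                    # [s, 2]
    return np.concatenate(curve, axis=0)

# Example: four corner control points are smoothed into a closed rounded contour.
# cubic_bspline_closed(np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]]), s=16)
```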

S4.2. As in S4.1, the control points of the tumor growth contours (both outward and inward growth) are processed with cubic B-spline interpolation, yielding the cubic B-spline curves of the tumor's outward and inward growth, i.e. the tumor contour lines.

S4.3、在食管结构轮廓线和肿瘤轮廓线上,计算每个曲线点的法向量,得到法线的单位向量;S4.3. Calculate the normal vector of each curve point on the esophageal structure contour line and the tumor contour line to obtain the unit vector of the normal line;

由食管结构轮廓线和肿瘤轮廓线,以及法线的单位向量构成食管和肿瘤的三次B-样条曲线。The cubic B-spline curves of the esophagus and the tumor are constructed by the esophageal structure contour line, the tumor contour line, and the unit vector of the normal line.

其中,法线的单位向量获取过程具体为:The specific process of obtaining the unit vector of the normal is as follows:

切向量可以从曲线的一阶导数中推导得出,即V=dy/dx;The tangent vector can be derived from the first derivative of the curve, that is, V = dy/dx;

曲线的单位法向量可以通过逆时针旋转切线向量90度计算得出;即在二维空间中,通过翻转向量的x和y坐标,并更改其中一个分量的符号,即:The unit normal vector of a curve can be calculated by rotating the tangent vector 90 degrees counterclockwise; that is, in two dimensions, by flipping the x and y coordinates of the vector and changing the sign of one of the components, i.e.:

For a tangent vector with components (dx, dy), the resulting (unnormalized) normal vector is (-dy, dx);

因此,该法向量以L2范数为基准进行归一化,得到法线的单位向量。Therefore, the normal vector is normalized based on the L2 norm to obtain the unit vector of the normal line.
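A minimal sketch of the normal computation, approximating the tangent by a central difference over the sampled curve points, is given below.

```python
# Sketch of S4.3: rotate the tangent of each sampled curve point by 90 degrees
# counter-clockwise and normalize by the L2 norm to obtain unit normal vectors.
import numpy as np

def unit_normals(curve: np.ndarray) -> np.ndarray:
    """curve: [N, 2] points of a closed contour; returns [N, 2] unit normals."""
    tangent = np.roll(curve, -1, axis=0) - np.roll(curve, 1, axis=0)   # central difference (closed curve)
    normal = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)         # rotate tangent by +90 degrees
    return normal / (np.linalg.norm(normal, axis=1, keepdims=True) + 1e-12)
```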

为应用本发明实施例,请参阅图4,图4是本发明实施例提供的单位法向量示意图,该图展示了法线的单位向量与食管的解剖结构之间的关系。To apply the embodiment of the present invention, please refer to FIG. 4 , which is a schematic diagram of a unit normal vector provided by the embodiment of the present invention, and shows the relationship between the unit vector of the normal line and the anatomical structure of the esophagus.

本实施例S4通过以控制点为基本元素,将离散点数据连接成一个曲线,进而得到精度高、收敛性且拟合准确的曲线作为结构轮廓线,并通过结合法向量,得到完整的三次B-样条曲线,能够用来准确地表示食管和肿瘤的形状。In this embodiment S4, by using control points as basic elements, discrete point data are connected into a curve, thereby obtaining a curve with high precision, convergence and accurate fitting as a structural contour line, and by combining normal vectors, a complete cubic B-spline curve is obtained, which can be used to accurately represent the shape of the esophagus and tumors.

S5、通过计算三次B-样条曲线之间的相对距离得到肿瘤分期结果。S5. The tumor staging result is obtained by calculating the relative distance between the cubic B-spline curves.

在本发明实施例步骤S5中,S5包括S5.1~S5.2;S5.1是构建肿瘤分期结果的过程,S5.2是进行模型数据返回的过程,具体为:In step S5 of the embodiment of the present invention, S5 includes S5.1 to S5.2; S5.1 is a process of constructing a tumor staging result, and S5.2 is a process of returning model data, specifically:

S5.1. The T stage of esophageal cancer is inferred from the tissue structures invaded by the tumor; it can therefore be determined by computing the relative distances Dsubmucosa and Dout between the tumor's outward growth contour and the submucosal (Ssubmucosa) and adventitia (Sout) contours. This includes S5.11 to S5.18, specifically:

S5.11、计算三次B-样条曲线中曲线A(肿瘤向外生长的第一轮廓线)与曲线B(粘膜下层的第二轮廓线)上所有点之间的欧几里得距离,得到距离矩阵;S5.11. Calculate the Euclidean distances between all points on curve A (the first contour line of tumor outward growth) and curve B (the second contour line of the submucosal layer) in the cubic B-spline curve to obtain a distance matrix;

S5.12、找到曲线A上每个点到曲线B上的最近点的距离及其索引;S5.12. Find the distance and index of each point on curve A to the nearest point on curve B;

S5.13、根据最近点的索引,从曲线B中取出对应的点,并将其视为与曲线A中相应点对应的点;S5.13, according to the index of the nearest point, take the corresponding point from curve B and treat it as the point corresponding to the corresponding point in curve A;

S5.14、获取曲线B在最近点处的法向量;S5.14, obtaining the normal vector of curve B at the nearest point;

S5.15、计算曲线A上每个点到曲线B上最近点的方向向量,并判断其与曲线B上最近点处的法向量是否一致;S5.15. Calculate the direction vector from each point on curve A to the nearest point on curve B, and determine whether it is consistent with the normal vector at the nearest point on curve B;

S5.16、如果判断出曲线A的点在曲线B内部,则将该距离取为负数。S5.16. If it is determined that the point of curve A is inside curve B, the distance is taken as a negative number.

S5.17、取曲线A每个点到曲线B上最近点的距离的最大值,作为相对距离的度量值,得到相对距离数值集。S5.17. Take the maximum value of the distance from each point on curve A to the nearest point on curve B as the measure of relative distance, and obtain a relative distance value set.

S5.18、在相对距离数值集中,将肿瘤向外生长的轮廓线与粘膜下层的相对距离记为d1,将肿瘤向外生长的轮廓线与外膜层的相对距离记为d2;S5.18. In the relative distance value set, the relative distance between the tumor's outward growth contour and the submucosal layer is recorded as d1, and the relative distance between the tumor's outward growth contour and the adventitia layer is recorded as d2;

根据肿瘤向外生长的轮廓线与粘膜下层的相对距离的数值大小,以及肿瘤向外生长的轮廓线与外膜层的相对距离的数值大小,获取肿瘤分期结果,即食管癌的T分期,具体为:According to the numerical value of the relative distance between the outward growth contour of the tumor and the submucosal layer, and the numerical value of the relative distance between the outward growth contour of the tumor and the outer membrane layer, the tumor staging result, that is, the T stage of esophageal cancer, is obtained, which is specifically:

1)若d1<0,则T分期为T1a,表示当肿瘤向外生长轮廓线在粘膜层内部时,食管癌被判定为T1a。1) If d1 < 0, the T stage is T1a, which means that when the tumor's outward growth contour is inside the mucosal layer, esophageal cancer is diagnosed as T1a.

2)若d1≥0,且d2<0,且(|d1|)/(|d1|+|d2|)<threshold_ratio,则T分期为T1b,表示当肿瘤向外生长轮廓线在黏膜下层内部,但已经突破了粘膜层时,食管癌被判定为T1b。2) If d1 ≥ 0, and d2 < 0, and (|d1|)/(|d1|+|d2|) < threshold_ratio, the T stage is T1b, which means that when the tumor's outward growth contour is inside the submucosal layer but has broken through the mucosal layer, esophageal cancer is diagnosed as T1b.

3)若d1≥0,且d2<0,且(|d1|)/(|d1|+|d2|)≥threshold_ratio,则T分期为T2,表示肿瘤向外生长轮廓线已突破食管肌层,但尚未扩散到邻近的淋巴结或其他结构时,食管癌被判定为T2。3) If d1≥0, and d2<0, and (|d1|)/(|d1|+|d2|)≥threshold_ratio, the T stage is T2, indicating that the tumor's outward growth outline has broken through the esophageal muscle layer but has not yet spread to adjacent lymph nodes or other structures. Esophageal cancer is diagnosed as T2.

4)若d2≥0,则T分期为T3或T4,表示当肿瘤向外生长轮廓线已突破外膜层时,食管癌被判定为T3\4。4) If d2 ≥ 0, the T stage is T3 or T4, which means that when the tumor's outward growth contour has broken through the outer membrane layer, esophageal cancer is diagnosed as T3\4.

其中,threshold_ratio表示粘膜下层的厚度,经验取值范围为0.05~0.2。Among them, threshold_ratio represents the thickness of the submucosal layer, and the empirical value range is 0.05 to 0.2.
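The following sketch combines S5.11 to S5.18 into a signed-distance computation plus the rule table above; the sign convention (negative when curve A lies inside curve B) follows the text, while the helper names and the outward-normal assumption for curve B are illustrative.

```python
# Sketch of S5.1: signed relative distance between the tumor's outward contour
# (curve A) and a wall contour (curve B), followed by the rule-based T staging.
import numpy as np

def signed_relative_distance(curve_a, curve_b, normals_b):
    d = np.linalg.norm(curve_a[:, None, :] - curve_b[None, :, :], axis=-1)   # distance matrix
    idx = d.argmin(axis=1)                                   # nearest point on B for each point of A
    nearest = curve_b[idx]
    direction = nearest - curve_a                            # direction from A to its nearest point on B
    inside = (direction * normals_b[idx]).sum(axis=1) > 0    # aligned with B's outward normal => A inside B
    dist = d.min(axis=1)
    dist[inside] *= -1.0                                     # negative distance when A is inside B
    return dist.max()                                        # maximum per-point distance as the measure

def t_stage(d1, d2, threshold_ratio=0.1):
    """d1: tumor-vs-submucosa distance, d2: tumor-vs-adventitia distance."""
    if d1 < 0:
        return "T1a"                                          # contour still inside the mucosa
    if d2 < 0 and abs(d1) / (abs(d1) + abs(d2)) < threshold_ratio:
        return "T1b"                                          # broke through the mucosa, inside the submucosa
    if d2 < 0:
        return "T2"                                           # deeper invasion, adventitia not yet breached
    return "T3/4"                                             # broke through the adventitia
```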

本实施例S5.1通过构建距离矩阵,能够准确地表示曲线上所有点之间的距离,通过索引的方式,可以让不同曲线上的点产生对应关联,便于获取第一轮廓线上每个点到第二轮廓线上最近点的距离最大值;并且,相对距离数值集中的相对距离有正负数之分,所以通过判断数值大小的方式,能够知晓不同膜层之间的组织侵犯程度,进而得到肿瘤分期结果。This embodiment S5.1 can accurately represent the distances between all points on the curve by constructing a distance matrix. By means of indexing, points on different curves can be associated with each other, so as to obtain the maximum distance from each point on the first contour line to the nearest point on the second contour line. Moreover, the relative distances in the relative distance value set can be positive or negative, so by judging the size of the value, the degree of tissue invasion between different membrane layers can be known, thereby obtaining the tumor staging result.

S5.2. After the above calculations, the model returns the cubic B-spline curves of all structures (Smucosa, Ssubmucosa, Sout and the tumor's outward and inward growth contours), the differentiable mask Ydiff-mask of each structure, the relative distances Dsubmucosa and Dout between the tumor and the submucosal and adventitia layers, and the T-stage classification result.

为应用本发明实施例,请参阅图5,图5为预测流程示意图,该图展示了本实施例进行数据预测的大体过程,具体为:To apply the embodiment of the present invention, please refer to FIG. 5 , which is a schematic diagram of a prediction process, which shows the general process of data prediction in this embodiment, specifically:

1、超声内镜图像:获取原始超声内镜图片I。1. Ultrasound endoscopic image: obtain the original ultrasound endoscopic image I.

2、有效区域检测:使用有效区域检测器对原始超声内镜图片I进行有效区域检测,以获取有效超声内镜图片Ivalid2. Valid region detection: Use a valid region detector to perform valid region detection on the original ultrasound endoscopic image I to obtain a valid ultrasound endoscopic image I valid .

3、主干网络:使用由特征编码器和特征解码器组成的主干网络对超声内镜图片Ivalid进行处理,得到全尺寸特征。3. Backbone network: Use the backbone network composed of feature encoder and feature decoder to process the ultrasound endoscopy image I valid to obtain full-size features.

4、结构检测模块:使用结构检测模块对全尺寸特征进行处理,得到食管结构及肿瘤轮廓线,即食管和肿瘤的三次B-样条曲线。4. Structural detection module: The structural detection module is used to process the full-size features to obtain the esophageal structure and tumor contour, that is, the cubic B-spline curve of the esophagus and the tumor.

6. Rule-based T staging module: the rule-based T staging module processes the cubic B-spline curves to obtain the T stage prediction, i.e. the tumor staging result.

在本实施例S5中,传统的深度学习分割模型无法直接得到各组织结构的轮廓线,需要做大量的后处理,计算量大;或者选择使用深度学习分类模型,缺乏可解释性,而本实施例S5在得到各组织结构的轮廓线后仅需规则即可判断T分期,且有很强的临床可解释性。In this embodiment S5, the traditional deep learning segmentation model cannot directly obtain the contour lines of each tissue structure, and requires a lot of post-processing and large amount of calculation; or chooses to use a deep learning classification model, which lacks interpretability. After obtaining the contour lines of each tissue structure, this embodiment S5 only needs rules to determine the T stage, and has strong clinical interpretability.

S6、由三次B-样条曲线和肿瘤分期结果构成食管结构、肿瘤轮廓线与肿瘤分期的预测结果;S6, the prediction results of esophageal structure, tumor contour and tumor stage are constructed by cubic B-spline curve and tumor stage results;

其中,预设模型是由不同器件和模块构成的深度学习模型。Among them, the preset model is a deep learning model composed of different devices and modules.

在本发明实施例步骤S6中,S6包括S6.1~S6.5;其中,S6.1为构建预测结果的过程,S6.2为构建肿瘤与食管壁的掩膜的过程,S6.3为构建训练集的过程,S6.4为构建损失函数的过程,S6.5为模型训练的过程,具体为:In step S6 of the embodiment of the present invention, S6 includes S6.1 to S6.5; wherein S6.1 is a process of constructing a prediction result, S6.2 is a process of constructing a mask of a tumor and an esophageal wall, S6.3 is a process of constructing a training set, S6.4 is a process of constructing a loss function, and S6.5 is a process of model training, specifically:

S6.1、由三次B-样条曲线和肿瘤分期结果构成食管结构、肿瘤轮廓线与肿瘤分期的预测结果。S6.1. The prediction results of esophageal structure, tumor contour and tumor stage are constructed by cubic B-spline curve and tumor stage results.

本实施例S6.1能够实现一次训练内的多任务输出,包括对食管结构、肿瘤区域以及肿瘤T分期的预测;这一综合性的技术创新为医护人员提供了更全面、一体化的诊断信息,极大地提升了超声内镜图像处理和诊断的效能。This embodiment S6.1 can achieve multi-task output within one training, including the prediction of esophageal structure, tumor area and tumor T stage; this comprehensive technological innovation provides medical staff with more comprehensive and integrated diagnostic information, greatly improving the efficiency of ultrasound endoscopic image processing and diagnosis.

S6.2. First, a blank differentiable image is created in the diffvg library, with its width set to W, its height set to H, and all pixel values initialized to 0;

然后,定义函数y=M(S)以生成一个对应的掩膜图像;其中,S表示闭合的三次B-样条曲线,y表示绘制多边形后得到的二维平面位图的掩膜;Then, a function y=M(S) is defined to generate a corresponding mask image; wherein S represents a closed cubic B-spline curve, and y represents a mask of a two-dimensional plane bitmap obtained after drawing a polygon;

其次,绘制肿瘤和食管壁的掩膜,包括S6.21~S6.24,具体为:Secondly, draw the mask of the tumor and esophageal wall, including S6.21 to S6.24, specifically:

S6.21. Tumor mask: the tumor mask Ytumor is generated from the tumor's outward and inward cubic B-spline contours.

S6.22、食管粘膜层掩膜:使用食管粘膜层的三次B-样条轮廓线,生成食管粘膜层掩膜Ymucosa,即Ymucosa=M(Smucosa)。S6.22. Esophageal mucosa layer mask: The esophageal mucosa layer mask Y mucosa is generated using the cubic B-spline contour of the esophageal mucosa layer, ie, Y mucosa =M(S mucosa ).

S6.23. Esophageal submucosal layer mask: using the cubic B-spline contours of the esophageal mucosal and submucosal layers, the esophageal submucosal mask Ysubmucosa is generated, i.e. Ysubmucosa = M(Ssubmucosa)\M(Smucosa).

S6.24. Esophageal adventitia layer mask: according to the cubic B-spline contours of the esophageal adventitia and submucosal layers, the esophageal adventitia mask Yout is generated, i.e. Yout = M(Sout)\M(Ssubmucosa)\M(Smucosa).

Here "\" denotes the set difference, i.e. A\B contains the elements of A that are not in B.

最后,将S6.21~S6.24中的四个掩膜合并,生成一个4通道的位图掩膜:Ydiff-mask=[Ytumor,Ymucosa,Ysubmucosa,Yout],即得到肿瘤与食管壁的掩膜。Finally, the four masks in S6.21 to S6.24 are combined to generate a 4-channel bitmap mask: Y diff-mask = [Y tumor , Y mucosa , Y submucosa , Y out ], that is, the mask of the tumor and the esophageal wall is obtained.
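The sketch below shows only the channel composition of Ydiff-mask, assuming an abstract differentiable rasterizer render_fill(S) (played by diffvg in the text) that turns a closed contour into a soft filled mask; treating the tumor channel as the area between the outward and inward contours is an assumption.

```python
# Sketch of the 4-channel mask composition in S6.2. The differentiable rasterization
# of a closed B-spline into a filled bitmap (the function M(S) above) is delegated to
# an externally supplied render_fill callable; only the set-difference logic is shown.
import torch

def compose_diff_mask(render_fill, S_tumor_out, S_tumor_in, S_mucosa, S_submucosa, S_out):
    """render_fill(S) -> [H, W] soft mask in [0, 1]; returns a [4, H, W] mask tensor."""
    m_mucosa = render_fill(S_mucosa)
    m_submucosa = render_fill(S_submucosa)
    m_out = render_fill(S_out)
    # Tumor region: assumed to be the area between the outward and inward tumor contours.
    y_tumor = torch.clamp(render_fill(S_tumor_out) - render_fill(S_tumor_in), 0.0, 1.0)
    y_mucosa = m_mucosa
    # Set difference A \ B realized as clamp(A - B, 0, 1) on soft masks of nested regions.
    y_submucosa = torch.clamp(m_submucosa - m_mucosa, 0.0, 1.0)
    y_out = torch.clamp(m_out - m_submucosa, 0.0, 1.0)
    return torch.stack([y_tumor, y_mucosa, y_submucosa, y_out], dim=0)   # Y_diff-mask
```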

All the operations in S6.2 of this embodiment are differentiable operations, and therefore produce a differentiable image.

S6.3、从食管癌肿瘤病例中获取训练数据;S6.3, obtaining training data from esophageal cancer tumor cases;

使用python的shapely库,把粘膜层、粘膜下层和外膜层的多边形轮廓点转换为shapely.geometry.polygon.LinearRing对象,并把肿瘤区域转换为shapely.geometry.polygon.Polygon对象,得到第一过渡数据;并且,在转换的过程中,使用shapely库的函数检查轮廓节点的合法性,并使用shapely库的interpolate函数计算出所需数量的节点;Using the shapely library of python, the polygonal contour points of the mucosa layer, the submucosal layer and the adventitia layer are converted into shapely.geometry.polygon.LinearRing objects, and the tumor area is converted into a shapely.geometry.polygon.Polygon object to obtain the first transition data; and in the process of conversion, the legitimacy of the contour nodes is checked using the function of the shapely library, and the required number of nodes is calculated using the interpolate function of the shapely library;

根据第一过渡数据生成包含轮廓点数据Scontour、分割位图掩码Ymask和超声内镜图片的分期标签Ylabel在内的第二过渡数据;Generate second transition data including contour point data S contour , segmentation bitmap mask Y mask and staging label Y label of the ultrasound endoscopy image according to the first transition data;

Data augmentation operations such as brightness/contrast enhancement, rotation, horizontal flipping, vertical flipping and scaling are applied to the second transition data to obtain the training set; when geometric augmentations such as rotation are applied, Scontour and Ymask must be transformed synchronously to keep the data consistent;

其中,第二过渡数据具体包括:The second transition data specifically includes:

1)轮廓点数据:轮廓点数据Scontour为上述已通过合法性校验并重新用interpolate函数计算的多边形对象,即Scontour={Stumor,Sstructure}。1) Contour point data: The contour point data S contour is the polygon object that has passed the legality check and is recalculated using the interpolate function, that is, S contour = {S tumor , S structure }.

2)分割位图掩码:使用opencv库把四个轮廓转换为4通道的位图,转换逻辑与上述“使用diffvg库生成可微分的mask”保持一致,并作为Ymask保存。2) Segmentation bitmap mask: Use the opencv library to convert the four contours into a 4-channel bitmap. The conversion logic is consistent with the above "Use the diffvg library to generate a differentiable mask" and save it as the Y mask .

3)T分期标签:每张超声内镜图片肿瘤的T分期标签为Ylabel,包含T1a、T1b、T2和T3\4四种分期标签。3) T staging label: The T staging label of each tumor in the ultrasound endoscopy image is Y label , which includes four staging labels: T1a, T1b, T2, and T3\4.
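As an illustration of the contour preparation in S6.3, the sketch below validates an annotated contour with shapely and resamples it to a fixed number of nodes using interpolate; the node count and function names are illustrative.

```python
# Sketch of the S6.3 contour preparation: validity check plus equal-arclength resampling.
from shapely.geometry import LinearRing, Polygon
import numpy as np

def resample_ring(points, n_nodes=16):
    """points: list of (x, y) annotated contour vertices; returns [n_nodes, 2] resampled nodes."""
    ring = LinearRing(points)
    if not ring.is_valid:
        raise ValueError("contour points do not form a valid ring")
    distances = np.linspace(0.0, ring.length, n_nodes, endpoint=False)
    nodes = [ring.interpolate(d) for d in distances]          # equally spaced points along the ring
    return np.array([[p.x, p.y] for p in nodes])

def tumor_polygon(points):
    poly = Polygon(points)
    return poly if poly.is_valid else poly.buffer(0)           # buffer(0) is a common validity fix
```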

S6.4、计算三种损失函数,包括S6.41~S6.43,具体为:S6.4. Calculate three loss functions, including S6.41 to S6.43, specifically:

S6.41. Dice loss is used to measure the agreement between the segmentation mask predicted by the preset model and the ground-truth segmentation mask, and the dice loss is computed over all four mask channels to obtain the DICE loss function; the dice coefficient is a commonly used measure of the similarity of two samples. Let the predicted segmentation mask be Ŷ and the actual segmentation mask be Y; the dice coefficient is dice = 2|Ŷ∩Y| / (|Ŷ| + |Y|).

dice值越接近1表示预测与实际结果越一致。因此,目标是最小化1-dice,即diceloss定义为:The closer the dice value is to 1, the more consistent the prediction is with the actual result. Therefore, the goal is to minimize 1-dice, that is, diceloss is defined as:

diceloss=1-dicedice loss = 1-dice

在本实施例S6.41中,Dice Loss可以很好的反映预测轮廓线掩膜和真实轮廓线掩膜的匹配程度。In this embodiment S6.41, Dice Loss can well reflect the matching degree between the predicted contour mask and the true contour mask.

S6.42、使用多边形损失poly loss来度量预测节点(通过三次B-样条插值得到轮廓线)与实际节点之间的误差,得到多边形损失函数;其中多边形损失函数是基于均方误差(Mean Square Error,简称MSE)的损失函数。S6.42. Use polygon loss poly loss to measure the error between the predicted node (the contour line is obtained by cubic B-spline interpolation) and the actual node, and obtain the polygon loss function; the polygon loss function is a loss function based on the mean square error (MSE).

Let the predicted contour nodes be Ŝ and the actual contour nodes be S; the poly loss is then defined as the mean squared error over all nodes, polyloss = (1/nspline)·Σi‖Si − Ŝi‖².

where nspline is the number of cubic B-spline curve nodes of the esophageal structure to be predicted, and Si and Ŝi are the coordinates of the i-th actual node and predicted node, respectively.

S6.43、使用基于食管癌肿瘤的T分期标签的class loss损失函数来处理四分类问题,即食管癌的T分期标签可以为T1a、T1b、T2和T3\4中的一种;通过定义多分类版本的distance margin loss来量化模型预测的T分期标签与实际T分期标签之间的误差,并将目标设定为找出最小化Loss损失函数的模型参数,得到分类损失函数;S6.43. Use the class loss function based on the T stage label of esophageal cancer tumors to handle the four-classification problem, that is, the T stage label of esophageal cancer can be one of T1a, T1b, T2 and T3\4; define a multi-classification version of the distance margin loss to quantify the error between the T stage label predicted by the model and the actual T stage label, and set the goal to find the model parameters that minimize the Loss loss function to obtain the classification loss function;

where threshold is a given margin value.

Loss损失函数为:The loss function is:

Loss = α1*diceloss + α2*polyloss + α3*class-loss

其中,α1、α2和α3分别为dice loss、poly loss和class loss的权重系数。Among them, α 1 , α 2 and α 3 are the weight coefficients of dice loss, poly loss and class loss respectively.
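A compact sketch of the three losses and their weighted sum is given below; a plain cross-entropy is used as a stand-in for the multi-class distance-margin loss, whose exact form is not reproduced in this text.

```python
# Sketch of S6.4: dice loss, node-wise MSE (poly loss), and a weighted total loss.
import torch
import torch.nn.functional as F

def dice_loss(pred_mask, true_mask, eps=1e-6):
    # pred_mask, true_mask: [B, 4, H, W] soft masks
    inter = (pred_mask * true_mask).sum(dim=(2, 3))
    dice = (2 * inter + eps) / (pred_mask.sum(dim=(2, 3)) + true_mask.sum(dim=(2, 3)) + eps)
    return 1.0 - dice.mean()

def poly_loss(pred_nodes, true_nodes):
    # pred_nodes, true_nodes: [B, n_spline, 2] sampled contour nodes
    return ((pred_nodes - true_nodes) ** 2).sum(dim=-1).mean()

def total_loss(pred_mask, true_mask, pred_nodes, true_nodes, stage_logits, stage_label,
               a1=1.0, a2=1.0, a3=1.0):
    class_loss = F.cross_entropy(stage_logits, stage_label)   # stand-in for the margin-based class loss
    return a1 * dice_loss(pred_mask, true_mask) + a2 * poly_loss(pred_nodes, true_nodes) + a3 * class_loss
```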

本实施例S6.4中,DICE损失函数能够增强特定目标的识别和定位的准确性,多边形损失函数能够提高食管与肿瘤轮廓节点的预测准确性,分类损失函数能够提高肿瘤分期结果的预测准确性,因此从不同维度所建立的损失函数,能够对模型的优化起到显著提升的功效,增强预设模型的预测准确性。In this embodiment S6.4, the DICE loss function can enhance the accuracy of recognition and positioning of specific targets, the polygon loss function can improve the prediction accuracy of esophageal and tumor contour nodes, and the classification loss function can improve the prediction accuracy of tumor staging results. Therefore, the loss functions established from different dimensions can significantly improve the optimization of the model and enhance the prediction accuracy of the preset model.

S6.5、设置batch size为32,epoch为300,初始化学习率为10^-4,每100个epoch降低为之前的1/10;S6.5, set the batch size to 32, the epoch to 300, the initial learning rate to 10^-4, and reduce it to 1/10 every 100 epochs;

通过优化DICE损失函数、多边形损失函数和分类损失函数的方式,使用图形处理单元(GPU)根据训练集,以及肿瘤与食管壁的掩膜,对预设模型进行参数更新,得到优化后的预设模型;By optimizing the DICE loss function, polygon loss function, and classification loss function, a graphics processing unit (GPU) is used to update the parameters of the preset model according to the training set and the mask of the tumor and the esophageal wall to obtain an optimized preset model;

并且,最后得到的全部掩膜的平均dice为0.76,平均分类准确率为0.87。Moreover, the average dice of all masks obtained in the end is 0.76, and the average classification accuracy is 0.87.
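A sketch of the stated schedule (batch size 32, 300 epochs, learning rate 1e-4 divided by 10 every 100 epochs) is given below; the choice of Adam and the data-loading details are assumptions, only the schedule follows the text.

```python
# Sketch of the S6.5 training schedule.
import torch

def make_optimizer(model):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)
    return optimizer, scheduler

# for epoch in range(300):
#     for batch in loader:                      # DataLoader with batch_size=32
#         loss = total_loss(...); loss.backward(); optimizer.step(); optimizer.zero_grad()
#     scheduler.step()                          # learning rate drops to 1/10 at epochs 100 and 200
```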

本实施例S6.5通过可微分渲染技术,使用dice loss对轮廓线优化;相比传统方法仅用L2 loss或者L1 loss进行节点优化,联合L2 loss和dice loss可以得到更精确的轮廓线,从而提高预设模型的预测能力,以便后续得到更精准的食管结构和肿瘤轮廓与分期的预测结果;In this embodiment S6.5, dice loss is used to optimize the contour line through differentiable rendering technology; compared with the traditional method of using only L2 loss or L1 loss for node optimization, the combination of L2 loss and dice loss can obtain a more accurate contour line, thereby improving the prediction ability of the preset model, so as to obtain more accurate prediction results of esophageal structure, tumor contour and stage in the future;

此外,在传统的掩码转换过程中,把节点转成mask,会使用到开源计算机视觉库opencv,但是该转换方式不存在梯度,由于梯度作为参数评估和更新的重要指标,能够表示目标函数在某点上的方向和变化率,因此传统的掩码转换方法所得到的结果不能用于神经网络训练;而本实施例一通过diffvg库生成可微分的mask,可以用于神经网络训练,并且使用dice loss作为损失函数,能够让预测分割结果与真实分割结果之间的相似度越来越高,所以会对预设模型进行有效训练,让食管结构和肿瘤轮廓与分期的预测结果更加准确。In addition, in the traditional mask conversion process, the open source computer vision library OpenCV is used to convert the node into a mask, but this conversion method does not have a gradient. Since the gradient is an important indicator for parameter evaluation and update, it can represent the direction and rate of change of the target function at a certain point. Therefore, the results obtained by the traditional mask conversion method cannot be used for neural network training; while the first embodiment of this invention generates a differentiable mask through the diffvg library, which can be used for neural network training, and uses dice loss as the loss function, which can make the similarity between the predicted segmentation result and the actual segmentation result higher and higher, so the preset model will be effectively trained to make the prediction results of the esophageal structure, tumor contour and staging more accurate.

本实施例从整体来看,具有如下有益效果:Overall, this embodiment has the following beneficial effects:

本实施例通过对原始超声内镜图片进行特征提取,能够去除多余的图片,在有效保证超声内镜图片的数据精准度的同时,减少数据处理的复杂度;通过构造三层食管结构和肿瘤生长轮廓的控制点,能够充分获取肿瘤生长的特征,有利于对食管疾病状态的进行有效评估;通过样条插值的方式生成三次B-样条曲线,可以将离散点数据连接成一个曲线,用来准确表示食管和肿瘤的形状;由于曲线之间的相对距离能够表示不同膜层之间的组织侵犯程度,所以可以方便快捷地得到肿瘤分期结果。This embodiment can remove redundant images by extracting features from the original ultrasound endoscopy images, thereby effectively ensuring the data accuracy of the ultrasound endoscopy images and reducing the complexity of data processing; by constructing control points of the three-layer esophageal structure and the tumor growth contour, the characteristics of tumor growth can be fully obtained, which is conducive to the effective evaluation of the esophageal disease state; by generating a cubic B-spline curve by spline interpolation, discrete point data can be connected into a curve to accurately represent the shape of the esophagus and the tumor; because the relative distance between the curves can represent the degree of tissue invasion between different membrane layers, the tumor staging result can be obtained quickly and conveniently.

并且,本实施例通过引入定位模型进行有效区域检测,成功提高了模型对感兴趣区域的准确定位,克服了原始数据图像质量相对较差的问题。其次,基于Transformer的多层多边形生成技术,显式建模器官结构,注入解剖学先验知识,使得本实施例能够生成更符合解剖结构的食管壁结构轮廓预测和肿瘤轮廓预测结果,有效解决了现有技术在结构准确性和解释性方面的不足。此外,引入可微分渲染技术成功将多边形曲线转化为带梯度的位图掩码,可以在使用L2进行节点距离约束的同时,使用dice loss进一步提升轮廓线的精度。同时由于显式的三次B-样条曲线的使用,在能精确描述食管及肿瘤结构的同时,可以依据其相对距离加简单的规则即可得出T分期结论,这样的规则也符合临床医护人员对诊断过程的理解,能够全面提升了超声内镜图像的精准分割和分类性能,为肿瘤定位和分期提供了更为可靠和解释性强的依据。Moreover, this embodiment successfully improves the accurate positioning of the model for the region of interest by introducing a positioning model for effective region detection, and overcomes the problem of relatively poor quality of the original data image. Secondly, based on the multi-layer polygon generation technology of Transformer, the organ structure is explicitly modeled and the anatomical prior knowledge is injected, so that this embodiment can generate esophageal wall structure contour prediction and tumor contour prediction results that are more in line with the anatomical structure, effectively solving the shortcomings of the existing technology in terms of structural accuracy and interpretability. In addition, the introduction of differentiable rendering technology successfully converts polygonal curves into bitmap masks with gradients, and the accuracy of the contours can be further improved by using dice loss while using L2 for node distance constraints. At the same time, due to the use of explicit cubic B-spline curves, while being able to accurately describe the esophageal and tumor structures, the T staging conclusion can be drawn based on their relative distance and simple rules. Such rules are also in line with the understanding of the diagnostic process by clinical medical staff, and can comprehensively improve the accurate segmentation and classification performance of ultrasound endoscopic images, providing a more reliable and interpretable basis for tumor localization and staging.

实施例二:Embodiment 2:

请参阅图6,本发明的实施例提供了一种食管结构和肿瘤轮廓与分期的预测装置,包括数据模块10、特征模块20、节点模块30、曲线模块40、分期模块50和集成模块60;Please refer to FIG6 , an embodiment of the present invention provides a prediction device for esophageal structure, tumor contour and staging, including a data module 10 , a feature module 20 , a node module 30 , a curve module 40 , a staging module 50 and an integration module 60 ;

其中,数据模块10,用于获取原始超声内镜图片;Wherein, the data module 10 is used to obtain the original ultrasound endoscopy image;

特征模块20,用于使用预设模型对原始超声内镜图片进行特征提取,得到全尺寸特征;A feature module 20 is used to extract features from the original ultrasound endoscopy image using a preset model to obtain full-size features;

节点模块30,用于使用预设模型对全尺寸特征进行结构拆分和节点重构,得到三层食管结构和肿瘤生长轮廓的控制点;A node module 30 is used to perform structural decomposition and node reconstruction on the full-size features using a preset model to obtain control points of the three-layer esophageal structure and tumor growth contour;

曲线模块40,用于通过样条插值的方式,根据控制点生成食管和肿瘤的三次B-样条曲线;A curve module 40, for generating a cubic B-spline curve of the esophagus and the tumor according to the control points by means of spline interpolation;

分期模块50,用于通过计算三次B-样条曲线之间的相对距离得到肿瘤分期结果;The staging module 50 is used to obtain the tumor staging result by calculating the relative distance between the cubic B-spline curves;

集成模块60,用于由三次B-样条曲线和肿瘤分期结果构成食管结构、肿瘤轮廓线与肿瘤分期的预测结果;An integration module 60, for forming a prediction result of esophageal structure, tumor contour and tumor stage from a cubic B-spline curve and the tumor stage result;

其中,预设模型是由不同器件和模块构成的深度学习模型。Among them, the preset model is a deep learning model composed of different devices and modules.

在一个实施例中,数据模块10具体为:In one embodiment, the data module 10 is specifically:

获取原始超声内镜图片I;其中,原始超声内镜图片I是指将内镜和超声结合起来对消化道进行检查而得到的图片。An original endoscopic ultrasound image I is obtained; wherein the original endoscopic ultrasound image I refers to an image obtained by combining endoscopy and ultrasound to examine the digestive tract.

为应用本发明实施例,请参阅图2和图3,图2是本发明实施例提供的食管的解剖结构图,表示食管的结构在超声内镜下可观察的结构,可以简化为由粘膜层、黏膜下层和外膜层构成的三层管腔结构,呈逐层嵌套的关系;肿瘤的生成通常始于食管粘膜层,并且可以向腔内和腔外方向生长。To apply the embodiment of the present invention, please refer to Figures 2 and 3. Figure 2 is an anatomical structure diagram of the esophagus provided in the embodiment of the present invention, which shows the structure of the esophagus observable under an ultrasonic endoscope, which can be simplified to a three-layer tubular cavity structure composed of a mucosal layer, a submucosal layer, and an adventitia layer, which are nested layer by layer; the formation of tumors usually starts in the esophageal mucosal layer and can grow in the cavity and outside the cavity.

图3是本发明实施例提供的食管癌的T分期结构图,表示食管癌的T分期划分方式,肿瘤的T分期主要根据其向腔外生长的程度确定,例如T1a级别的肿瘤位于粘膜层和黏膜下层之间,T1b级别的肿瘤已突破黏膜下层向外生长,T2级别表示肿瘤进一步扩展至食管壁深层,T3级别表示肿瘤穿透外膜层,T4级别表示肿瘤已经穿透外膜层且侵犯外部器官。使用超声内镜,可以鉴别肿瘤的T1a、T1b、T2期和是否大于等于T3期。FIG3 is a T staging structure diagram of esophageal cancer provided by an embodiment of the present invention, which shows the T staging division method of esophageal cancer. The T staging of a tumor is mainly determined according to the degree of its growth outside the cavity. For example, a T1a-level tumor is located between the mucosal layer and the submucosal layer, a T1b-level tumor has broken through the submucosal layer and grown outward, a T2-level tumor further extends to the deep layer of the esophageal wall, a T3-level tumor penetrates the outer membrane layer, and a T4-level tumor has penetrated the outer membrane layer and invaded external organs. Using an ultrasonic endoscope, it is possible to identify the T1a, T1b, T2 stages of a tumor and whether it is greater than or equal to the T3 stage.

在一个实施例中,特征模块20包括识别单元和提取单元;其中,识别单元是获取有效超声内镜图片Ivalid的过程,提取单元是构建全尺寸特征的过程,具体为:In one embodiment, the feature module 20 includes an identification unit and an extraction unit; wherein the identification unit is a process of obtaining a valid ultrasound endoscopy image I valid , and the extraction unit is a process of constructing a full-size feature, specifically:

其中,识别单元,用于将原始超声内镜图片I输入预设模型的有效区域检测器进行有效区域边界框进行识别,得到有效区域的边界框(bounding box);The recognition unit is used to input the original ultrasound endoscopy image I into the effective area detector of the preset model to identify the effective area boundary box, so as to obtain the bounding box of the effective area;

识别单元,还用于对有效区域的边界框(bounding box)进行剪裁,得到有效超声内镜图片IvalidThe recognition unit is further used to clip a bounding box of a valid area to obtain a valid ultrasound endoscopy image I valid .

Here, the valid-region detector can be implemented with an object detection algorithm such as YOLO or Mask R-CNN.

本实施例识别单元通过对有效区域边界框进行识别和剪裁,能够去除多余的图片,在有效保证超声内镜图片的数据精准度的同时,减少数据处理的复杂度。通过特征提取和上采样,能够从有效超声内镜图片中提取出与肿瘤相关的特征,使肿瘤的大小、形状和密度等特征包含在全尺寸特征中,可以更好地描述原始超声内镜图片的细节特征。The recognition unit of this embodiment can remove redundant images by identifying and clipping the effective area boundary box, effectively ensuring the data accuracy of the ultrasound endoscopic image while reducing the complexity of data processing. Through feature extraction and upsampling, tumor-related features can be extracted from the effective ultrasound endoscopic image, so that features such as the size, shape and density of the tumor are included in the full-size features, which can better describe the detailed features of the original ultrasound endoscopic image.

提取单元,用于先将有效超声内镜图片Ivalid输入预设模型的特征编码器进行特征提取,并且输入的矩阵为[B,C,H,W];其中,[B,C,H,W]可以取C=3,H=W=256;The extraction unit is used to first input the valid ultrasound endoscopy image I valid into the feature encoder of the preset model for feature extraction, and the input matrix is [B, C, H, W]; wherein [B, C, H, W] can take C=3, H=W=256;

再使用预设模型的特征解码器,对提取所得特征进行上采样,得到[B,h,H,W]的全尺寸特征;其中,[B,h,H,W]可以取h=64。Then, the feature decoder of the preset model is used to upsample the extracted features to obtain the full-size features of [B, h, H, W]; wherein [B, h, H, W] can take h = 64.

并且,特征编码器和特征解码器,组成了采用U-net网络结构的主干网络;特征编码器可以选择卷积神经网络CNN或者Transformer作为特征提取器,特征解码器采用的是U-Net的结构。In addition, the feature encoder and feature decoder constitute the backbone network using the U-net network structure; the feature encoder can choose convolutional neural network CNN or Transformer as the feature extractor, and the feature decoder adopts the U-Net structure.

需要说明的是,预设模型是由有效区域检测器、特征编码器、特征解码器和结构检测模块组成的深度学习模型。It should be noted that the preset model is a deep learning model composed of an effective area detector, a feature encoder, a feature decoder and a structure detection module.

在一个实施例中,节点模块30包括维度子单元、构造子单元、拼接子单元、拆分子单元、特征子单元、坐标与半径子单元、控制子单元和综合单元;In one embodiment, the node module 30 includes a dimension subunit, a construction subunit, a splicing subunit, a splitting subunit, a feature subunit, a coordinate and radius subunit, a control subunit, and a synthesis unit;

其中,维度子单元、构造子单元、拼接子单元和拆分子单元是构建全局特征令牌的过程,特征子单元是定义食管与肿瘤控制点特征的过程,坐标与半径子单元和控制子单元是构建三层食管结构的控制点的过程,综合单元是构建肿瘤向外和向内生长轮廓线的控制点的过程,具体为:Among them, the dimension subunit, construction subunit, splicing subunit and splitting subunit are the processes of constructing global feature tokens, the feature subunit is the process of defining the features of the esophagus and tumor control points, the coordinate and radius subunit and the control subunit are the processes of constructing the control points of the three-layer esophageal structure, and the comprehensive unit is the process of constructing the control points of the tumor's outward and inward growth contours, specifically:

The dimension subunit is used to reshape, with the structure detection module of the preset model, the full-size features [B, h, H, W] into a matrix of dimension [B, HW, h] to obtain the first token, and to add a positional embedding encoding the corresponding coordinate position to the first token; where B is the batch size, h is the feature dimension, and H and W are the height and width of the source matrix.

The construction subunit is used to construct a learnable second token according to the number of esophageal structure control points to be predicted; the second token has dimension [B, ns, h], where ns is the number of esophageal structure control points to be predicted;

拼接子单元,用于对第一令牌和第二令牌进行拼接,得到第三令牌token{0};其中,第三令牌token{0}的维度为[B,ns+HW,h];A concatenation subunit, configured to concatenate the first token and the second token to obtain a third token token {0} ; wherein the dimension of the third token token {0} is [B, ns +HW, h];

The splitting subunit is used to feed the third token token{0} into the L stacked Transformer layers of the preset model to obtain token{L}, and to split token{L} again into the global feature tokens: an image-feature part of dimension [B, HW, h] and a control-point part of dimension [B, ns, h], meaning that the model outputs ns esophageal/tumor control point features.

本实施例维度子单元、构造子单元、拼接子单元和拆分子单元对全尺寸特征进行维度变换、令牌构造、令牌拼接和令牌拆分的步骤,就是对全尺寸特征中的特征信息进行整理和收集的过程,使所得到的全局特征令牌在拥有全局特征的同时,具有良好的不变性和表示直观的特点,有助于为后期的食管结构以及肿瘤预测提供坚实的数据基础。In this embodiment, the steps of dimension subunit, construction subunit, splicing subunit and splitting subunit performing dimension transformation, token construction, token splicing and token splitting on the full-size feature are the process of organizing and collecting the feature information in the full-size feature, so that the obtained global feature token has good invariance and intuitive representation while having global features, which helps to provide a solid data foundation for the later esophageal structure and tumor prediction.

并且,本实施例二引入了基于Transformer的多层多边形生成技术,以显式方式表示符合解剖学结构的器官,通过该表示方法,食管壁和肿瘤区域的多边形曲线更加符合解剖结构,可以有效提高分割结果的结构准确性和解释性,为医护人员提供更可靠的图像数据解读基础。In addition, this second embodiment introduces a Transformer-based multi-layer polygon generation technology to explicitly represent organs that conform to the anatomical structure. Through this representation method, the polygonal curves of the esophageal wall and tumor area are more consistent with the anatomical structure, which can effectively improve the structural accuracy and interpretability of the segmentation results, and provide medical staff with a more reliable basis for interpreting image data.

The feature subunit is used to let Qs be derived from the control-point tokens and Kf, Vf from the image-feature tokens and, using the first preset formula, to compute the esophageal and tumor control point features Fstructure from the global feature tokens; where Fstructure has dimension [B, ns, h];

其中,食管与肿瘤控制点特征为:Among them, the characteristics of esophagus and tumor control points are:

Fstructure=softmax(Qs×Kf T)×Vf F structure = softmax(Q s ×K f T )×V f

坐标与半径子单元,用于对于食管管腔的粘膜层、黏膜下层和外膜层,需要在各自所在层面预测若干节点(控制点),这些节点将决定三次B-样条曲线的形态,从而描述出食管结构的轮廓线,因此,对于每一个食管控制点,都需要预测其粘膜层的节点坐标(以笛卡尔坐标表示)、黏膜下层和外膜层的相对半径(膜层距离之间的半径差)。具体步骤如下:The coordinate and radius subunit is used for the mucosal layer, submucosal layer and adventitia layer of the esophageal lumen. It is necessary to predict several nodes (control points) at their respective levels. These nodes will determine the shape of the cubic B-spline curve, thereby describing the contour of the esophageal structure. Therefore, for each esophageal control point, it is necessary to predict the node coordinates of the mucosal layer (expressed in Cartesian coordinates), the relative radius of the submucosal layer and the adventitia layer (the radius difference between the membrane layer distances). The specific steps are as follows:

假设当前食管结构的特征为Fstructure,维度为[B,ns,h],其中B为批量大小,ns为需要预测的食管结构控制点数,h为特征维度;Assume that the feature of the current esophageal structure is F structure , and the dimension is [B, ns , h], where B is the batch size, ns is the number of esophageal structure control points to be predicted, and h is the feature dimension;

使用线性变换层将ns个控制节点的食管与肿瘤控制点特征Fstructure映射到二维(2D),即Pmucosa=Wp*Fstructure;并将映射结果转化为极坐标的形式,得到粘膜层轮廓线的节点坐标(rmucosa,θmucosa);其中,Wp为维度为[2,h]的权重矩阵,Pmucosa对应的维度为[B,ns,2],代表ns个粘膜层轮廓线的控制点的2D坐标;并且rmucosa和θmucosa分别表示节点相对于食管中心的半径和角度;A linear transformation layer is used to map the esophageal and tumor control point features F structure of the n s control nodes to two dimensions (2D), that is, P mucosa =W p *F structure ; and the mapping result is converted into polar coordinates to obtain the node coordinates of the mucosal layer contour line (r mucosamucosa ); wherein W p is a weight matrix with a dimension of [2, h], and the dimension corresponding to P mucosa is [B, n s , 2], representing the 2D coordinates of the control points of the n s mucosal layer contour line; and r mucosa and θ mucosa represent the radius and angle of the node relative to the center of the esophagus, respectively;

根据控制点特征,使用两个线性变换层分别计算出黏膜下层的相对半径dRsubmucosa和外膜层的相对半径dRout,即dRsubmucosa=relu(WdR1*Fstructure)和dRout=relu(WdR2*Fstructure),并使用relu激活函数用于确保预测半径为正数,其中,WdR1和WdR2为维度为[1,h]的权重矩阵,dRsubmucosa和dRout对应的维度为[B,ns,1],表示这ns个控制节点对应的黏膜下层和外膜层的相对半径;According to the control point features, two linear transformation layers are used to calculate the relative radius of the submucosal layer dR submucosa and the relative radius of the adventitia dR out , that is, dR submucosa = relu(W dR1 *F structure ) and dR out = relu(W dR2 *F structure ), and the relu activation function is used to ensure that the predicted radius is a positive number, where W dR1 and W dR2 are weight matrices with dimensions [1, h], and the dimensions corresponding to dR submucosa and dR out are [B, ns , 1], which represent the relative radii of the submucosal layer and the adventitia corresponding to the n s control nodes;

并根据节点坐标和相对半径计算得到黏膜下层的实际半径Rsubmucosa和外膜层的实际半径Rout,即Rsubmucosa=rmucosa+dRsubmucosa和Rout=Rsubmucosa+dRoutThe actual radius R submucosa of the submucosal layer and the actual radius R out of the adventitia layer are calculated according to the node coordinates and the relative radius, that is, R submucosa = r mucosa + dR submucosa and R out = R submucosa + dR out ;

控制子单元,用于根据实际半径和节点坐标相对于食管中心的角度,将控制点特征对应的控制点从极坐标转化为笛卡尔坐标表示的形式,得到三层食管结构的控制点(Pmucosa、Psubmucosa和Pout)。The control subunit is used to convert the control points corresponding to the control point features from polar coordinates to Cartesian coordinates according to the actual radius and the angle of the node coordinates relative to the esophageal center, and obtain the control points (P mucosa , P submucosa and P out ) of the three-layer esophageal structure.

本实施例所计算出的黏膜下层和外膜层的相对半径具有一定的限制性,无法准确地表示黏膜不同层次之间的距离,因此在相对半径的基础上,结合节点坐标计算得到实际半径,从而可以准确表示粘膜下层和外膜层相对于粘膜层的轮廓线位移程度。The relative radius of the submucosal layer and the adventitia layer calculated in this embodiment has certain limitations and cannot accurately represent the distance between different layers of the mucosa. Therefore, on the basis of the relative radius, the actual radius is calculated in combination with the node coordinates, so that the degree of displacement of the contour line of the submucosal layer and the adventitia layer relative to the mucosal layer can be accurately represented.

综合单元,用于为了模拟肿瘤的生长情况,需要预测肿瘤在食管粘膜层向外和向内的生长情况,即需要得到肿瘤在各方向上的轮廓线;该综合单元包括定义子单元、尺度子单元、生长子单元和转化子单元,具体为:The comprehensive unit is used to simulate the growth of the tumor. It is necessary to predict the outward and inward growth of the tumor in the esophageal mucosa layer, that is, it is necessary to obtain the contour lines of the tumor in all directions; the comprehensive unit includes a definition subunit, a scale subunit, a growth subunit and a transformation subunit, specifically:

定义子单元,还用于定义肿瘤特征Ftumor=FstructureDefine subunits, also used to define tumor features F tumor = F structure ;

The scale subunit is used to predict the radius difference of the tumor's outward growth contour relative to the mucosal layer, i.e. the relative radius of outward growth, specifically:

A linear transformation layer maps the features of Ftumor to the relative radius (the first relative radius), and a relu activation function is applied to guarantee that this relative radius is non-negative; the corresponding weight matrix has dimension [1, h];

Using the first relative radius and the radius rmucosa of the node coordinates relative to the esophageal center, the actual radius of the tumor's outward growth contour is obtained by adding the two; the angle corresponding to this actual radius is equal to the mucosal-layer angle θmucosa.

The growth subunit is used to calculate the relative radius of the tumor on the inside, so as to ensure that the tumor's inward growth contour always stays inside the mucosal layer, that is:

A linear transformation layer maps the features of Ftumor to the base radius (the second relative radius), and a relu activation function is applied to guarantee that this relative radius is non-negative; the corresponding weight matrix has dimension [1, h];

Using the second relative radius and the radius rmucosa of the node coordinates relative to the esophageal center, the actual radius of the tumor's inward growth contour is calculated, ensuring that this actual radius is greater than 0; the angle corresponding to it is equal to the mucosal-layer angle θmucosa.

The conversion subunit is used to convert the node positions into Cartesian coordinates according to the actual radii of the tumor's outward- and inward-growth contours and the mucosal-layer angle, obtaining the control points of the tumor's outward- and inward-growth contour lines.

本实施例综合单元是肿瘤生长轮廓线的控制点的构造过程,将笛卡尔坐标和极坐标之间的转换关系引用到控制点的处理上,能够利用笛卡尔坐标具有表达直观性、描述简洁性和描述精确性的优势,使肿瘤生长形状的计算方式变得相对简便,进而能够清晰地描述肿瘤生长轮廓线的控制点。The comprehensive unit of this embodiment is the construction process of the control points of the tumor growth contour. The conversion relationship between Cartesian coordinates and polar coordinates is referenced in the processing of control points. The advantages of Cartesian coordinates in terms of intuitive expression, concise description and accurate description can be utilized to make the calculation method of the tumor growth shape relatively simple, thereby being able to clearly describe the control points of the tumor growth contour.
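
The logic of the comprehensive unit can be sketched in PyTorch as follows; the module and weight names, the epsilon floor used to keep the inward radius positive, and the exact way the control points are expressed relative to the esophageal center are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class TumorContourHead(nn.Module):
    """Predicts per-node outward/inward tumor contour control points from tumor features."""
    def __init__(self, h):
        super().__init__()
        self.w_out = nn.Linear(h, 1)   # weight of dimension [1, h] for the first relative radius
        self.w_in = nn.Linear(h, 1)    # weight of dimension [1, h] for the second relative radius

    def forward(self, f_tumor, r_mucosa, theta_mucosa, eps=1e-3):
        # f_tumor: [n_s, h] per-node tumor features; r_mucosa, theta_mucosa: [n_s]
        delta_out = torch.relu(self.w_out(f_tumor)).squeeze(-1)   # non-negative via relu
        delta_in = torch.relu(self.w_in(f_tumor)).squeeze(-1)

        r_out = r_mucosa + delta_out                      # outward-growth actual radius
        r_in = torch.clamp(r_mucosa - delta_in, min=eps)  # inward-growth radius, kept > 0
        theta = theta_mucosa                              # same angles as the mucosal layer

        p_out = torch.stack([r_out * torch.cos(theta), r_out * torch.sin(theta)], dim=-1)
        p_in = torch.stack([r_in * torch.cos(theta), r_in * torch.sin(theta)], dim=-1)
        return p_out, p_in   # control points, expressed relative to the esophageal center

# Shape-only usage example.
head = TumorContourHead(h=256)
f_tumor = torch.randn(16, 256)
p_out, p_in = head(f_tumor, torch.full((16,), 40.0), torch.linspace(0.0, 6.28, 16))
```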

本实施例节点模块30从整体来看,通过在肿瘤向外生长和向内生长这两种维度上进行轮廓线的控制点的预测,得到肿瘤向外和向内生长轮廓线的控制点,能够充分获取肿瘤生长的特征,有利于对食管疾病状态的进行有效评估。From an overall perspective, the node module 30 of this embodiment predicts the control points of the contour lines in the two dimensions of tumor outward growth and inward growth, and obtains the control points of the tumor outward and inward growth contour lines, which can fully obtain the characteristics of tumor growth and is conducive to the effective evaluation of the esophageal disease status.

在一个实施例中,曲线模块40包括参数单元、参数变形单元、向量单元和集成单元;其中,参数单元是构建食管结构轮廓线的过程,参数变形单元是构建肿瘤轮廓线的过程,向量单元和集成单元是构建三次B-样条曲线的过程,具体为:In one embodiment, the curve module 40 includes a parameter unit, a parameter deformation unit, a vector unit and an integration unit; wherein the parameter unit is a process of constructing the esophageal structure contour line, the parameter deformation unit is a process of constructing the tumor contour line, and the vector unit and the integration unit are processes of constructing a cubic B-spline curve, specifically:

The parameter unit: for each layer of the esophageal structure (the mucosal layer, submucosal layer and adventitia layer) there is a corresponding set of control-point coordinates {P_i}, namely the control points of the three-layer esophageal structure and the control points of the tumor's outward- and inward-growth contours, where P_i = (x_i, y_i), (i = 1, 2, ..., n_s) denotes the Cartesian coordinates of the i-th control point;

The parameter unit is also used to process the control points of the three-layer esophageal structure with cubic B-spline interpolation, obtaining the parameterized esophageal structure contour S(t) = [x(t), y(t)]; the function S(t) denotes the continuous contour generated from the control points, and if t is uniformly sampled at s points in [0, 1], each cubic B-spline curve has n_spline = n_s * s nodes; this yields the cubic B-spline curves S_mucosa, S_submucosa and S_out, i.e. the esophageal structure contour lines;

其中,三次B-样条曲线是一种由控制点集定义的分段函数。设Ai(i=1,2,3,4)为三次B-样条的四个基函数,用于插值和拟合曲线,则曲线S(t)的数学定义为:The cubic B-spline curve is a piecewise function defined by a set of control points. Let Ai (i = 1, 2, 3, 4) be the four basis functions of the cubic B-spline, used for interpolation and fitting curves, then the mathematical definition of the curve S(t) is:

S(t) = A1·P1 + A2·P2 + A3·P3 + A4·P4

For the parameters A1, A2, A3 and A4 (the standard uniform cubic B-spline basis functions) we have:

A1 = (1 - t)^3 / 6
A2 = (3t^3 - 6t^2 + 4) / 6
A3 = (-3t^3 + 3t^2 + 3t + 1) / 6
A4 = t^3 / 6

where S(t) is the point on the curve at parameter t ∈ [0, 1], and P1, P2, P3 and P4 are the control points of the cubic B-spline segment.
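
In practice such closed contours are often fitted with SciPy's periodic spline routines; the sketch below interpolates a ring of control points with a cubic B-spline and samples n_spline = n_s * s contour points. The use of scipy.interpolate.splprep/splev and the duplicated closing point are tooling assumptions rather than the patent's own implementation.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def bspline_contour(control_points, samples_per_node=8):
    """Fit a closed cubic B-spline through the control points and sample it densely."""
    pts = np.asarray(control_points, dtype=float)
    n_s = len(pts)
    closed = np.vstack([pts, pts[:1]])                     # repeat first point to close the ring
    # per=1 treats the curve as periodic (closed); k=3 gives a cubic spline; s=0 interpolates.
    tck, _ = splprep([closed[:, 0], closed[:, 1]], k=3, s=0, per=1)
    t = np.linspace(0.0, 1.0, n_s * samples_per_node, endpoint=False)
    x, y = splev(t, tck)
    return np.stack([x, y], axis=-1)                       # [n_spline, 2] points on S(t)

# Hypothetical mucosal control points on a circle of radius 40 around (128, 128).
theta = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
P_mucosa = np.stack([128 + 40 * np.cos(theta), 128 + 40 * np.sin(theta)], axis=-1)
S_mucosa = bspline_contour(P_mucosa)
```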

The parameter deformation unit is used, in the same way as described for the parameter unit, to process the control points of the tumor growth contours (both outward and inward growth) with cubic B-spline interpolation, obtaining the tumor's outward- and inward-growth cubic B-spline curves, i.e. the tumor contour lines.

向量单元,用于在食管结构轮廓线和肿瘤轮廓线上,计算每个曲线点的法向量,得到法线的单位向量;A vector unit is used to calculate the normal vector of each curve point on the esophageal structure contour line and the tumor contour line to obtain the unit vector of the normal line;

集成单元,用于由食管结构轮廓线和肿瘤轮廓线,以及法线的单位向量构成食管和肿瘤的三次B-样条曲线。An integrated unit is used to construct cubic B-spline curves of the esophagus and the tumor from the esophageal structure contour and the tumor contour, as well as the unit vectors of the normal line.

其中,法线的单位向量获取过程具体为:The specific process of obtaining the unit vector of the normal is as follows:

The tangent vector can be derived from the first derivative of the parametric curve, i.e. V = (dx/dt, dy/dt);

The unit normal vector of the curve can be computed by rotating the tangent vector 90 degrees counterclockwise; that is, in two dimensions, by swapping the x and y components of the vector and changing the sign of one of them:

n_x = -v_y, n_y = v_x

The resulting normal vector (n_x, n_y) is then normalized with respect to the L2 norm to obtain the unit normal vector.
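
A corresponding NumPy sketch of the vector unit: tangents are estimated from the sampled contour, rotated 90 degrees counterclockwise and normalized with the L2 norm; the use of np.gradient as the derivative estimator and the small epsilon guard are assumptions.

```python
import numpy as np

def unit_normals(curve_points, eps=1e-8):
    """Unit normal at every sampled point of a 2D contour.

    curve_points: [n, 2] array of (x, y) samples along the contour.
    """
    # Tangent components from the first derivative of the parametric curve.
    vx = np.gradient(curve_points[:, 0])
    vy = np.gradient(curve_points[:, 1])
    # Rotate the tangent 90 degrees counterclockwise: (vx, vy) -> (-vy, vx).
    normals = np.stack([-vy, vx], axis=-1)
    # Normalize with the L2 norm to obtain unit normal vectors.
    length = np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals / np.maximum(length, eps)
```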

为应用本发明实施例,请参阅图4,图4是本发明实施例提供的单位法向量示意图,该图展示了法线的单位向量与食管的解剖结构之间的关系。To apply the embodiment of the present invention, please refer to FIG. 4 , which is a schematic diagram of a unit normal vector provided by the embodiment of the present invention, and shows the relationship between the unit vector of the normal line and the anatomical structure of the esophagus.

In this embodiment, the curve module 40 connects the discrete point data into curves by taking the control points as basic elements, thereby obtaining high-precision, convergent and accurately fitted curves as the structure contour lines; by combining them with the normal vectors, complete cubic B-spline curves are obtained that can accurately represent the shapes of the esophagus and the tumor.

在一个实施例中,分期模块50包括分期单元和返回单元;分期单元是构建肿瘤分期结果的过程,返回单元是进行模型数据返回的过程,具体为:In one embodiment, the staging module 50 includes a staging unit and a return unit; the staging unit is a process of constructing tumor staging results, and the return unit is a process of returning model data, specifically:

The staging unit: the T stage of esophageal cancer is inferred from the tissue structures invaded by the tumor, so the T stage can be determined by computing the relative distances D_submucosa and D_out between the tumor's outward-growth contour and the submucosal-layer (S_submucosa) and adventitia-layer (S_out) contours, specifically:

1、计算三次B-样条曲线中曲线A(肿瘤向外生长的第一轮廓线)与曲线B(粘膜下层的第二轮廓线)上所有点之间的欧几里得距离,得到距离矩阵;1. Calculate the Euclidean distance between all points on curve A (the first contour line of tumor outward growth) and curve B (the second contour line of the submucosal layer) in the cubic B-spline curve to obtain a distance matrix;

2、找到曲线A上每个点到曲线B上的最近点的距离及其索引;2. Find the distance and index of each point on curve A to the nearest point on curve B;

3、根据最近点的索引,从曲线B中取出对应的点,并将其视为与曲线A中相应点对应的点;3. According to the index of the nearest point, take the corresponding point from curve B and treat it as the point corresponding to the corresponding point in curve A;

4、获取曲线B在最近点处的法向量;4. Get the normal vector of curve B at the nearest point;

5. Compute the direction vector from each point on curve A to its nearest point on curve B, and judge whether it is consistent with the normal vector of curve B at that nearest point;

6、如果判断出曲线A的点在曲线B内部,则将该距离取为负数。6. If the point of curve A is judged to be inside curve B, then the distance is taken as a negative number.

7、取曲线A每个点到曲线B上最近点的距离的最大值,作为相对距离的度量值,得到相对距离数值集。7. Take the maximum value of the distance from each point on curve A to the nearest point on curve B as the measure of relative distance to obtain a relative distance value set.

8、在相对距离数值集中,将肿瘤向外生长的轮廓线与粘膜下层的相对距离记为d1,将肿瘤向外生长的轮廓线与外膜层的相对距离记为d2;8. In the relative distance value set, the relative distance between the tumor's outward growth contour and the submucosal layer is recorded as d1, and the relative distance between the tumor's outward growth contour and the adventitia layer is recorded as d2;

9、根据肿瘤向外生长的轮廓线与粘膜下层的相对距离的数值大小,以及肿瘤向外生长的轮廓线与外膜层的相对距离的数值大小,获取肿瘤分期结果,即食管癌的T分期,具体为:9. According to the numerical value of the relative distance between the tumor's outward growth contour and the submucosal layer, and the numerical value of the relative distance between the tumor's outward growth contour and the outer membrane layer, the tumor staging result, that is, the T stage of esophageal cancer, is obtained, which is specifically:

1)若d1<0,则T分期为T1a,表示当肿瘤向外生长轮廓线在粘膜层内部时,食管癌被判定为T1a。1) If d1 < 0, the T stage is T1a, which means that when the tumor's outward growth contour is inside the mucosal layer, esophageal cancer is diagnosed as T1a.

2)若d1≥0,且d2<0,且(|d1|)/(|d1|+|d2|)<threshold_ratio,则T分期为T1b,表示当肿瘤向外生长轮廓线在黏膜下层内部,但已经突破了粘膜层时,食管癌被判定为T1b。2) If d1 ≥ 0, and d2 < 0, and (|d1|)/(|d1|+|d2|) < threshold_ratio, the T stage is T1b, which means that when the tumor's outward growth contour is inside the submucosal layer but has broken through the mucosal layer, esophageal cancer is diagnosed as T1b.

3)若d1≥0,且d2<0,且(|d1|)/(|d1|+|d2|)≥threshold_ratio,则T分期为T2,表示肿瘤向外生长轮廓线已突破食管肌层,但尚未扩散到邻近的淋巴结或其他结构时,食管癌被判定为T2。3) If d1≥0, and d2<0, and (|d1|)/(|d1|+|d2|)≥threshold_ratio, the T stage is T2, indicating that the tumor's outward growth outline has broken through the esophageal muscle layer but has not yet spread to adjacent lymph nodes or other structures. Esophageal cancer is diagnosed as T2.

4)若d2≥0,则T分期为T3或T4,表示当肿瘤向外生长轮廓线已突破外膜层时,食管癌被判定为T3\4。4) If d2 ≥ 0, the T stage is T3 or T4, which means that when the tumor's outward growth contour has broken through the outer membrane layer, esophageal cancer is diagnosed as T3\4.

其中,threshold_ratio表示粘膜下层的厚度,经验取值范围为0.05~0.2。Among them, threshold_ratio represents the thickness of the submucosal layer, and the empirical value range is 0.05 to 0.2.

本实施例分期单元通过构建距离矩阵,能够准确地表示曲线上所有点之间的距离,通过索引的方式,可以让不同曲线上的点产生对应关联,便于获取第一轮廓线上每个点到第二轮廓线上最近点的距离最大值;并且,相对距离数值集中的相对距离有正负数之分,所以通过判断数值大小的方式,能够知晓不同膜层之间的组织侵犯程度,进而得到肿瘤分期结果。The staging unit of this embodiment can accurately represent the distances between all points on the curve by constructing a distance matrix. By means of indexing, points on different curves can be associated with each other, so as to obtain the maximum value of the distance from each point on the first contour line to the nearest point on the second contour line. Moreover, the relative distances in the relative distance value set can be positive or negative, so by judging the value, the degree of tissue invasion between different membrane layers can be known, and then the tumor staging result can be obtained.
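
The eight steps above can be condensed into the following NumPy sketch; the sign convention (negative when a point of curve A lies on the inner side of curve B, assuming the normals of B point outward) and the helper names are assumptions, while the rule table follows the four cases listed above.

```python
import numpy as np

def signed_relative_distance(curve_a, curve_b, normals_b):
    """Maximum signed distance from curve A to its nearest points on curve B.

    normals_b are assumed to be outward-pointing unit normals of curve B; the
    distance is taken negative when a point of A lies on the inner side of B.
    """
    diff = curve_a[:, None, :] - curve_b[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)                  # distance matrix [n_a, n_b]
    nearest = dist.argmin(axis=1)                         # index of nearest point on B
    d_min = dist[np.arange(len(curve_a)), nearest]
    direction = curve_b[nearest] - curve_a                # direction from A towards B
    inside = np.einsum('ij,ij->i', direction, normals_b[nearest]) > 0
    d_signed = np.where(inside, -d_min, d_min)
    return d_signed.max()                                 # relative distance measure

def t_stage(d1, d2, threshold_ratio=0.1):
    """Rule-based T stage from d1 (tumor vs. submucosa) and d2 (tumor vs. adventitia)."""
    if d1 < 0:
        return "T1a"
    if d2 < 0:
        ratio = abs(d1) / (abs(d1) + abs(d2))
        return "T1b" if ratio < threshold_ratio else "T2"
    return "T3/4"
```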

The return unit is used, after the above computations, to make the model return the cubic B-spline curves of all structures (S_mucosa, S_submucosa, S_out and the tumor's outward- and inward-growth curves), the differentiable mask Y_diff-mask of each structure, the relative distances D_submucosa and D_out between the tumor and the submucosal and adventitia layers, and the T-stage classification result.

为应用本发明实施例,请参阅图5,图5为预测流程示意图,该图展示了本实施例进行数据预测的大体过程,具体为:To apply the embodiment of the present invention, please refer to FIG. 5 , which is a schematic diagram of a prediction process, which shows the general process of data prediction in this embodiment, specifically:

1、超声内镜图像:获取原始超声内镜图片I。1. Ultrasound endoscopic image: obtain the original ultrasound endoscopic image I.

2、有效区域检测:使用有效区域检测器对原始超声内镜图片I进行有效区域检测,以获取有效超声内镜图片Ivalid2. Valid region detection: Use a valid region detector to perform valid region detection on the original ultrasound endoscopic image I to obtain a valid ultrasound endoscopic image I valid .

3、主干网络:使用由特征编码器和特征解码器组成的主干网络对超声内镜图片Ivalid进行处理,得到全尺寸特征。3. Backbone network: Use the backbone network composed of feature encoder and feature decoder to process the ultrasound endoscopy image I valid to obtain full-size features.

4、结构检测模块:使用结构检测模块对全尺寸特征进行处理,得到食管结构及肿瘤轮廓线,即食管和肿瘤的三次B-样条曲线。4. Structural detection module: The structural detection module is used to process the full-size features to obtain the esophageal structure and tumor contour, that is, the cubic B-spline curve of the esophagus and the tumor.

5. Rule-based T-staging discrimination module: the rule-based T-staging discrimination module processes the cubic B-spline curves to obtain the T-stage prediction result, i.e. the tumor staging result (a minimal orchestration sketch of this flow follows below).
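
Read end to end, the flow of FIG. 5 amounts to the orchestration below; every function name is a placeholder for the corresponding module described above, not an interface defined by the patent.

```python
def predict(image, valid_region_detector, backbone, structure_detector, t_stager):
    """End-to-end prediction flow corresponding to FIG. 5 (placeholder callables)."""
    image_valid = valid_region_detector(image)   # 1-2: valid-region detection
    features = backbone(image_valid)             # 3: encoder-decoder backbone, full-size features
    contours = structure_detector(features)      # 4: esophageal-layer and tumor B-spline contours
    stage = t_stager(contours)                   # 5: rule-based T staging from relative distances
    return contours, stage
```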

在本实施例分期模块50中,传统的深度学习分割模型无法直接得到各组织结构的轮廓线,需要做大量的后处理,计算量大;或者选择使用深度学习分类模型,缺乏可解释性,而本实施例分期模块50在得到各组织结构的轮廓线后仅需规则即可判断T分期,且有很强的临床可解释性。In the staging module 50 of the present embodiment, the traditional deep learning segmentation model cannot directly obtain the contours of each tissue structure, and requires a lot of post-processing and high computational complexity; or the deep learning classification model is used, which lacks interpretability. The staging module 50 of the present embodiment only needs rules to determine the T stage after obtaining the contours of each tissue structure, and has strong clinical interpretability.

In one embodiment, the integration module 60 includes a combination unit, a mask unit, a training set unit, a calculation unit and a model training unit; the combination unit is the process of constructing the prediction result, the mask unit is the process of constructing the masks of the tumor and the esophageal wall, the training set unit is the process of constructing the training set, the calculation unit is the process of constructing the loss functions, and the model training unit is the process of model training, specifically:

组合单元,用于由三次B-样条曲线和肿瘤分期结果构成食管结构、肿瘤轮廓线与肿瘤分期的预测结果。The combined unit is used to construct the prediction results of esophageal structure, tumor contour and tumor stage from the cubic B-spline curve and the tumor stage result.

本实施例组合单元能够实现一次训练内的多任务输出,包括对食管结构、肿瘤区域以及肿瘤T分期的预测;这一综合性的技术创新为医护人员提供了更全面、一体化的诊断信息,极大地提升了超声内镜图像处理和诊断的效能。The combined unit of this embodiment can achieve multi-task output within one training, including prediction of esophageal structure, tumor area and tumor T stage; this comprehensive technical innovation provides medical staff with more comprehensive and integrated diagnostic information, greatly improving the efficiency of ultrasound endoscopic image processing and diagnosis.

The mask unit is used to, first, create a blank differentiable image in the diffvg library, with the width set to W, the height set to H, and all pixel values initialized to 0;

然后,定义函数y=M(S)以生成一个对应的掩膜图像;其中,S表示闭合的三次B-样条曲线,y表示绘制多边形后得到的二维平面位图的掩膜;Then, a function y=M(S) is defined to generate a corresponding mask image; wherein S represents a closed cubic B-spline curve, and y represents a mask of a two-dimensional plane bitmap obtained after drawing a polygon;

其次,绘制肿瘤和食管壁的掩膜,包括第一单元、第二单元、第三单元和第四单元,具体为:Secondly, draw the mask of the tumor and the esophageal wall, including the first unit, the second unit, the third unit and the fourth unit, specifically:

The first unit, for the tumor mask: the tumor mask Y_tumor is generated from the tumor's outward- and inward-growth cubic B-spline contours.

第二单元,用于食管粘膜层掩膜:使用食管粘膜层的三次B-样条轮廓线,生成食管粘膜层掩膜Ymucosa,即Ymucosa=M(Smucosa)。The second unit is used for esophageal mucosa layer mask: using the cubic B-spline contour line of the esophageal mucosa layer, the esophageal mucosa layer mask Y mucosa is generated, that is, Y mucosa =M(S mucosa ).

The third unit, for the esophageal submucosal-layer mask: using the cubic B-spline contours of the esophageal mucosal layer and submucosal layer, the esophageal submucosal-layer mask Y_submucosa is generated, i.e. Y_submucosa = M(S_submucosa)\M(S_mucosa).

The fourth unit, for the esophageal adventitia-layer mask: according to the cubic B-spline contours of the esophageal adventitia layer, submucosal layer and mucosal layer, the esophageal adventitia-layer mask Y_out is generated, i.e. Y_out = M(S_out)\M(S_submucosa)\M(S_mucosa).

Here "\" denotes the set difference, i.e. A\B = {x : x ∈ A and x ∉ B}.

Finally, the four masks generated by the first to fourth units are combined into a 4-channel bitmap mask Y_diff-mask = [Y_tumor, Y_mucosa, Y_submucosa, Y_out], i.e. the mask of the tumor and the esophageal wall.

All operations of the mask unit in this embodiment are differentiable operations, so a differentiable image can be produced.
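
To illustrate why the composition stays differentiable, the sketch below works on soft [H, W] occupancy images in [0, 1]: once a rasterizer such as diffvg has rendered each closed B-spline, the set differences become elementwise products with complements. The render_closed_spline callable is a hypothetical stand-in for the diffvg rendering step, and the way the two tumor contours are combined (the tumor taken as the ring between the outward and inward contours) is likewise an assumption.

```python
import torch

def compose_diff_masks(render_closed_spline, s_tumor_out, s_tumor_in,
                       s_mucosa, s_submucosa, s_out, height, width):
    """Build the 4-channel differentiable mask Y_diff-mask from closed contours.

    render_closed_spline(curve, H, W) is assumed to return a differentiable
    [H, W] tensor with values in [0, 1] (e.g. rendered with the diffvg library).
    """
    M = lambda s: render_closed_spline(s, height, width)

    m_tumor_out, m_tumor_in = M(s_tumor_out), M(s_tumor_in)
    m_mucosa, m_submucosa, m_out = M(s_mucosa), M(s_submucosa), M(s_out)

    # Soft set difference A \ B written as A * (1 - B) so that gradients flow.
    y_tumor = m_tumor_out * (1 - m_tumor_in)
    y_mucosa = m_mucosa
    y_submucosa = m_submucosa * (1 - m_mucosa)
    y_out = m_out * (1 - m_submucosa) * (1 - m_mucosa)

    return torch.stack([y_tumor, y_mucosa, y_submucosa, y_out], dim=0)  # [4, H, W]
```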

训练集单元,用于从食管癌肿瘤病例中获取训练数据;A training set unit, used for obtaining training data from esophageal cancer tumor cases;

The training set unit is also used to convert, using the python shapely library, the polygon contour points of the mucosal layer, submucosal layer and adventitia layer into shapely.geometry.polygon.LinearRing objects, and the tumor region into a shapely.geometry.polygon.Polygon object, obtaining the first transition data; during the conversion, shapely functions are used to check the validity of the contour nodes, and the shapely interpolate function is used to compute the required number of nodes;
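
A plausible shapely-based sketch of this preprocessing step follows; the fixed node count and the use of LinearRing.interpolate at evenly spaced arc-length positions are assumptions about how the required number of nodes is produced.

```python
import numpy as np
from shapely.geometry import LinearRing, Polygon

def resample_ring(points, n_nodes=16):
    """Validate a layer contour and resample it to a fixed number of nodes."""
    ring = LinearRing(points)
    if not ring.is_valid:
        raise ValueError("contour nodes do not form a valid ring")
    # Evenly spaced points along the ring's perimeter via interpolate().
    distances = np.linspace(0.0, ring.length, n_nodes, endpoint=False)
    resampled = [ring.interpolate(d) for d in distances]
    return np.array([[p.x, p.y] for p in resampled])

# Hypothetical usage for one esophageal layer and the tumor region.
mucosa_nodes = resample_ring([(0, 0), (10, 0), (10, 10), (0, 10)], n_nodes=16)
tumor_region = Polygon([(2, 2), (8, 2), (8, 8), (2, 8)])
assert tumor_region.is_valid
```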

训练集单元,还用于根据第一过渡数据生成包含轮廓点数据Scontour、分割位图掩码Ymask和超声内镜图片的分期标签Ylabel在内的第二过渡数据;The training set unit is further used to generate second transition data including contour point data S contour , segmentation bitmap mask Y mask and staging label Y label of the ultrasonic endoscopic image according to the first transition data;

The training set unit is also used to apply data augmentation operations such as brightness/contrast enhancement, rotation, horizontal flipping, vertical flipping and scaling to the second transition data to obtain the training set; when geometric augmentations such as rotation are applied, S_contour and Y_mask must be transformed synchronously to keep the data consistent;

其中,第二过渡数据具体包括:The second transition data specifically includes:

1)轮廓点数据:轮廓点数据Scontour为上述已通过合法性校验并重新用interpolate函数计算的多边形对象,即Scontour={Stumor,Sstructure}。1) Contour point data: The contour point data S contour is the polygon object that has passed the legality check and is recalculated using the interpolate function, that is, S contour = {S tumor , S structure }.

2)分割位图掩码:使用opencv库把四个轮廓转换为4通道的位图,转换逻辑与上述“使用diffvg库生成可微分的mask”保持一致,并作为Ymask保存。2) Segmentation bitmap mask: Use the opencv library to convert the four contours into a 4-channel bitmap. The conversion logic is consistent with the above "Use the diffvg library to generate a differentiable mask" and save it as the Y mask .

3)T分期标签:每张超声内镜图片肿瘤的T分期标签为Ylabel,包含T1a、T1b、T2和T3\4四种分期标签。3) T staging label: The T staging label of each tumor in the ultrasound endoscopy image is Y label , which includes four staging labels: T1a, T1b, T2, and T3\4.

计算单元,用于计算三种损失函数,包括DICE损失单元、多边形损失单元和分类损失单元,具体为:The calculation unit is used to calculate three loss functions, including DICE loss unit, polygon loss unit and classification loss unit, specifically:

The DICE loss unit uses dice loss to measure the consistency between the segmentation mask predicted by the preset model and the ground-truth segmentation mask, and the dice loss is computed for all 4 channels of the mask to obtain the DICE loss function; the dice coefficient is a commonly used measure of the similarity between two samples. Let the predicted segmentation mask be Ŷ and the ground-truth segmentation mask be Y; the dice coefficient is then:

dice = 2|Ŷ ∩ Y| / (|Ŷ| + |Y|)

dice值越接近1表示预测与实际结果越一致。因此,目标是最小化1-dice,即diceloss定义为:The closer the dice value is to 1, the more consistent the prediction is with the actual result. Therefore, the goal is to minimize 1-dice, that is, diceloss is defined as:

diceloss=1-dicedice loss = 1-dice

在本实施例DICE损失单元中,Dice Loss可以很好的反映预测轮廓线掩膜和真实轮廓线掩膜的匹配程度。In the DICE loss unit of this embodiment, Dice Loss can well reflect the matching degree between the predicted contour mask and the real contour mask.
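
A standard soft-dice implementation matching the definition above, averaged over the 4 mask channels, might look as follows; the smoothing constant is an added assumption for numerical stability.

```python
import torch

def dice_loss(pred, target, smooth=1e-6):
    """Soft dice loss between a predicted [C, H, W] mask and its ground truth.

    dice = 2|pred ∩ target| / (|pred| + |target|); loss = 1 - dice, averaged
    over the C channels (tumor, mucosa, submucosa, adventitia).
    """
    pred = pred.reshape(pred.shape[0], -1)
    target = target.reshape(target.shape[0], -1)
    intersection = (pred * target).sum(dim=1)
    dice = (2.0 * intersection + smooth) / (pred.sum(dim=1) + target.sum(dim=1) + smooth)
    return (1.0 - dice).mean()
```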

多边形损失单元,用于使用多边形损失poly loss来度量预测节点(通过三次B-样条插值得到轮廓线)与实际节点之间的误差,得到多边形损失函数;其中多边形损失函数是基于均方误差(Mean Square Error,简称MSE)的损失函数。The polygon loss unit is used to measure the error between the predicted node (the contour line is obtained by cubic B-spline interpolation) and the actual node using polygon loss poly loss to obtain a polygon loss function; wherein the polygon loss function is a loss function based on the mean square error (MSE).

Let the predicted nodes be Ŝ and the ground-truth contour nodes be S; the poly loss is then defined as:

poly_loss = (1 / n_spline) * Σ_i ||S_i - Ŝ_i||², i = 1, ..., n_spline

where n_spline denotes the number of nodes of the esophageal-structure cubic B-spline curves to be predicted, and S_i and Ŝ_i are the coordinates of the i-th ground-truth node and predicted node, respectively.

分类损失单元,用于使用基于食管癌肿瘤的T分期标签的class loss损失函数来处理四分类问题,即食管癌的T分期标签可以为T1a、T1b、T2和T3\4中的一种;通过定义多分类版本的distance margin loss来量化模型预测的T分期标签与实际T分期标签之间的误差,并将目标设定为找出最小化Loss损失函数的模型参数,得到分类损失函数;The classification loss unit is used to process the four-classification problem using a class loss function based on the T stage label of esophageal cancer tumors, that is, the T stage label of esophageal cancer can be one of T1a, T1b, T2 and T3\4; the error between the T stage label predicted by the model and the actual T stage label is quantified by defining a multi-classification version of the distance margin loss, and the goal is set to find the model parameters that minimize the Loss loss function to obtain the classification loss function;

Here threshold is the given margin value.

Loss损失函数为:The loss function is:

Loss=α1*diceloss2*polyloss3*class_lossLoss = α 1 *dice loss + α 2 *poly loss + α 3 *class_loss

其中,α1、α2和α3分别为dice loss、poly loss和class loss的权重系数。Among them, α 1 , α 2 and α 3 are the weight coefficients of dice loss, poly loss and class loss respectively.
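
The three terms can then be combined as in the Loss formula above. In the sketch below, poly_loss is the node-wise mean-squared error; since the exact multi-class distance-margin formulation is not reproduced in this text, class_loss is written as a standard multi-class margin (hinge) loss with margin threshold, which is an assumption, and the weights α1, α2 and α3 are placeholders.

```python
import torch

def poly_loss(pred_nodes, true_nodes):
    """Mean-squared error between predicted and ground-truth contour nodes ([n_spline, 2])."""
    return ((pred_nodes - true_nodes) ** 2).sum(dim=-1).mean()

def class_loss(scores, label, threshold=1.0):
    """Multi-class margin loss over the four T-stage classes (assumed formulation).

    scores: [4] class scores; label: integer index of the true stage.
    """
    margins = torch.clamp(scores - scores[label] + threshold, min=0.0)
    mask = torch.ones_like(scores)
    mask[label] = 0.0                      # do not penalize the true class against itself
    return (margins * mask).sum()

def total_loss(dice_term, poly_term, class_term, alpha=(1.0, 1.0, 1.0)):
    """Loss = α1 * dice_loss + α2 * poly_loss + α3 * class_loss."""
    a1, a2, a3 = alpha
    return a1 * dice_term + a2 * poly_term + a3 * class_term
```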

本实施例计算单元中,DICE损失函数能够增强特定目标的识别和定位的准确性,多边形损失函数能够提高食管与肿瘤轮廓节点的预测准确性,分类损失函数能够提高肿瘤分期结果的预测准确性,因此从不同维度所建立的损失函数,能够对模型的优化起到显著提升的功效,增强预设模型的预测准确性。In the calculation unit of this embodiment, the DICE loss function can enhance the accuracy of recognition and positioning of specific targets, the polygonal loss function can improve the prediction accuracy of esophageal and tumor contour nodes, and the classification loss function can improve the prediction accuracy of tumor staging results. Therefore, the loss functions established from different dimensions can significantly improve the optimization of the model and enhance the prediction accuracy of the preset model.

模型训练单元,用于设置batch size为32,epoch为300,初始化学习率为10^-4,每100个epoch降低为之前的1/10;Model training unit, used to set batch size to 32, epoch to 300, initial learning rate to 10^-4, and reduce it to 1/10 every 100 epochs;

模型训练单元,还用于通过优化DICE损失函数、多边形损失函数和分类损失函数的方式,使用图形处理单元(GPU)根据训练集,以及肿瘤与食管壁的掩膜,对预设模型进行参数更新,得到优化后的预设模型;The model training unit is further used to update the parameters of the preset model according to the training set and the mask of the tumor and the esophageal wall by optimizing the DICE loss function, the polygon loss function and the classification loss function, so as to obtain an optimized preset model;
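
A PyTorch training-setup sketch consistent with the hyper-parameters stated above (batch size 32, 300 epochs, initial learning rate 1e-4, divided by 10 every 100 epochs); the choice of Adam, the dataset fields and the loss wiring are placeholders, not details given by the patent.

```python
import torch
from torch.utils.data import DataLoader

def train(model, train_dataset, compute_loss, device="cuda"):
    """Training-loop skeleton with the hyper-parameters given in the text."""
    loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Divide the learning rate by 10 every 100 epochs.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)

    model.to(device)
    for epoch in range(300):
        for batch in loader:
            optimizer.zero_grad()
            outputs = model(batch["image"].to(device))
            loss = compute_loss(outputs, batch)       # dice + poly + class terms
            loss.backward()
            optimizer.step()
        scheduler.step()
```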

并且,最后得到的全部掩膜的平均dice为0.76,平均分类准确率为0.87。Moreover, the average dice of all masks obtained in the end is 0.76, and the average classification accuracy is 0.87.

本实施例模型训练单元通过可微分渲染技术,使用dice loss对轮廓线优化;相比传统方法仅用L2 loss或者L1 loss进行节点优化,联合L2 loss和dice loss可以得到更精确的轮廓线,从而提高预设模型的预测能力,以便后续得到更精准的食管结构和肿瘤轮廓与分期的预测结果;The model training unit of this embodiment uses dice loss to optimize the contour line through differentiable rendering technology; compared with the traditional method of using only L2 loss or L1 loss for node optimization, the combination of L2 loss and dice loss can obtain more accurate contour lines, thereby improving the prediction ability of the preset model, so as to obtain more accurate prediction results of esophageal structure, tumor contour and stage in the future;

此外,在传统的掩码转换过程中,把节点转成mask,会使用到开源计算机视觉库opencv,但是该转换方式不存在梯度,由于梯度作为参数评估和更新的重要指标,能够表示目标函数在某点上的方向和变化率,因此传统的掩码转换方法所得到的结果不能用于神经网络训练;而本实施例二通过diffvg库生成可微分的mask,可以用于神经网络训练,并且使用dice loss作为损失函数,能够让预测分割结果与真实分割结果之间的相似度越来越高,所以会对预设模型进行有效训练,让食管结构和肿瘤轮廓与分期的预测结果更加准确。In addition, in the traditional mask conversion process, the open source computer vision library OpenCV is used to convert the node into a mask, but this conversion method does not have a gradient. Since the gradient is an important indicator for parameter evaluation and update, it can represent the direction and rate of change of the target function at a certain point. Therefore, the results obtained by the traditional mask conversion method cannot be used for neural network training; while the second embodiment of this invention generates a differentiable mask through the diffvg library, which can be used for neural network training, and uses dice loss as the loss function, which can make the similarity between the predicted segmentation result and the actual segmentation result higher and higher, so the preset model will be effectively trained to make the prediction results of the esophageal structure, tumor contour and staging more accurate.

本实施例从整体来看,具有如下有益效果:Overall, this embodiment has the following beneficial effects:

本实施例通过对原始超声内镜图片进行特征提取,能够去除多余的图片,在有效保证超声内镜图片的数据精准度的同时,减少数据处理的复杂度;通过构造三层食管结构和肿瘤生长轮廓的控制点,能够充分获取肿瘤生长的特征,有利于对食管疾病状态的进行有效评估;通过样条插值的方式生成三次B-样条曲线,可以将离散点数据连接成一个曲线,用来准确表示食管和肿瘤的形状;由于曲线之间的相对距离能够表示不同膜层之间的组织侵犯程度,所以可以方便快捷地得到肿瘤分期结果。This embodiment can remove redundant images by extracting features from the original ultrasound endoscopy images, thereby effectively ensuring the data accuracy of the ultrasound endoscopy images and reducing the complexity of data processing; by constructing control points of the three-layer esophageal structure and the tumor growth contour, the characteristics of tumor growth can be fully obtained, which is conducive to the effective evaluation of the esophageal disease state; by generating a cubic B-spline curve by spline interpolation, discrete point data can be connected into a curve to accurately represent the shape of the esophagus and the tumor; because the relative distance between the curves can represent the degree of tissue invasion between different membrane layers, the tumor staging result can be obtained quickly and conveniently.

并且,本实施例通过引入定位模型进行有效区域检测,成功提高了模型对感兴趣区域的准确定位,克服了原始数据图像质量相对较差的问题。其次,基于Transformer的多层多边形生成技术,显式建模器官结构,注入解剖学先验知识,使得本实施例能够生成更符合解剖结构的食管壁结构轮廓预测和肿瘤轮廓预测结果,有效解决了现有技术在结构准确性和解释性方面的不足。此外,引入可微分渲染技术成功将多边形曲线转化为带梯度的位图掩码,可以在使用L2进行节点距离约束的同时,使用dice loss进一步提升轮廓线的精度。同时由于显式的三次B-样条曲线的使用,在能精确描述食管及肿瘤结构的同时,可以依据其相对距离加简单的规则即可得出T分期结论,这样的规则也符合临床医护人员对诊断过程的理解,能够全面提升了超声内镜图像的精准分割和分类性能,为肿瘤定位和分期提供了更为可靠和解释性强的依据。Moreover, this embodiment successfully improves the accurate positioning of the model for the region of interest by introducing a positioning model for effective region detection, and overcomes the problem of relatively poor quality of the original data image. Secondly, based on the multi-layer polygon generation technology of Transformer, the organ structure is explicitly modeled and the anatomical prior knowledge is injected, so that this embodiment can generate esophageal wall structure contour prediction and tumor contour prediction results that are more in line with the anatomical structure, effectively solving the shortcomings of the existing technology in terms of structural accuracy and interpretability. In addition, the introduction of differentiable rendering technology successfully converts polygonal curves into bitmap masks with gradients, and the accuracy of the contours can be further improved by using dice loss while using L2 for node distance constraints. At the same time, due to the use of explicit cubic B-spline curves, while being able to accurately describe the esophageal and tumor structures, the T staging conclusion can be drawn based on their relative distance and simple rules. Such rules are also in line with the understanding of the diagnostic process by clinical medical staff, and can comprehensively improve the accurate segmentation and classification performance of ultrasound endoscopic images, providing a more reliable and interpretable basis for tumor localization and staging.

实施例三:Embodiment three:

本发明实施例提供了一种计算机可读存储介质,所述计算机可读存储介质包括存储的计算机程序,其中,在所述计算机程序运行时控制所述计算机可读存储介质所在设备执行所述的一种食管结构和肿瘤轮廓与分期的预测方法;An embodiment of the present invention provides a computer-readable storage medium, wherein the computer-readable storage medium includes a stored computer program, wherein when the computer program is executed, the device where the computer-readable storage medium is located is controlled to execute the method for predicting the esophageal structure, tumor contour and stage;

其中,所述一种食管结构和肿瘤轮廓与分期的预测方法,如果以软件功能单元的形式实现并作为独立的产品使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实现上述实施例方法中的全部或部分流程,也可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机程序包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质可以包括:能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-OnlyMemory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质等。Wherein, the method for predicting esophageal structure and tumor contour and staging, if implemented in the form of a software functional unit and used as an independent product, can be stored in a computer-readable storage medium. Based on such an understanding, the present invention implements all or part of the processes in the above-mentioned embodiment method, and can also be completed by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and the computer program can implement the steps of the above-mentioned various method embodiments when executed by the processor. Wherein, the computer program includes computer program code, and the computer program code can be in source code form, object code form, executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, U disk, mobile hard disk, disk, optical disk, computer memory, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), electrical carrier signal, telecommunication signal and software distribution medium, etc.

以上是本发明的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本发明原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也视为本发明的保护范围。The above are preferred embodiments of the present invention. It should be pointed out that, for ordinary technicians in this technical field, several improvements and modifications can be made without departing from the principles of the present invention. These improvements and modifications are also considered to be within the scope of protection of the present invention.

Claims (10)

1. A method for predicting esophageal structure and tumor contour and stage, comprising:
acquiring an original ultrasonic endoscope picture;
extracting features of the original ultrasonic endoscope picture by using a preset model to obtain full-size features;
Performing structural splitting and node reconstruction on the full-size features by using the preset model to obtain a three-layer esophageal structure and control points of tumor growth contours;
Generating a cubic B-spline curve of the esophagus and the tumor according to the control point in a spline interpolation mode;
obtaining a tumor staging result by calculating the relative distance between the cubic B-spline curves;
Forming a predicted result of an esophagus structure, a tumor contour line and tumor stage by the cubic B-spline curve and the tumor stage result;
The preset model is a deep learning model formed by different devices and modules.
2. The method for predicting esophageal structure and tumor contour and stage according to claim 1, wherein the method for performing structural splitting and node reconstruction on the full-size features by using the preset model is characterized in that control points of three-layer esophageal structure and tumor growth contour are obtained, and specifically:
Using a structure detection module of the preset model to perform dimension transformation and splitting on the full-size features to obtain a global feature token;
Combining the global feature token with preset esophagus and tumor control point features, and obtaining a control point of a three-layer esophagus structure through a coordinate transformation form;
And obtaining control points of the tumor outward and inward growth contour lines by performing feature mapping on preset tumor features on the basis of different radiuses.
3. The method for predicting esophageal structure and tumor contour and stage according to claim 2, wherein the global feature token is combined with preset esophageal and tumor control point features to obtain the control points of the three-layer esophageal structure through a coordinate transformation form, specifically comprising the following steps:
Calculating to obtain the features of the esophagus and the tumor control points according to the global feature token by using a first preset formula;
mapping the features of the esophagus and the tumor control points by using a linear transformation layer, and converting a mapping result into a polar coordinate form to obtain node coordinates of a mucosal layer contour line;
According to the control point characteristics, calculating the relative radius of the submucosa and the adventitia respectively by using the linear transformation layer, and calculating the actual radius of the submucosa and the adventitia according to the node coordinates and the relative radius;
And converting the control points corresponding to the control point characteristics into a form of Cartesian coordinate representation according to the actual radius and the angle of the node coordinates relative to the center of the esophagus to obtain the control points of the three-layer esophagus structure.
4. A method for predicting esophageal structure and tumor contour and stage as defined in claim 3, wherein the control points of tumor outward and inward growth contours are obtained by feature mapping of preset tumor features on the basis of different radii, specifically:
Defining tumor characteristics;
Mapping the tumor features on a first radius scale and a second radius scale respectively to obtain a first relative radius and a second relative radius;
sequentially using a first relative radius and a second relative radius, and calculating the radius of the node coordinates relative to the center of the esophagus to obtain the actual radius of the tumor outward and inward growth contour lines respectively;
And carrying out coordinate transformation of the node positions according to the actual radius of the tumor outward and inward growth contour lines and the angles of the mucous membrane layers to obtain the control points of the tumor outward and inward growth contour lines.
5. The method for predicting esophageal structure and tumor contour and stage according to claim 1, wherein a cubic B-spline curve of esophagus and tumor is generated according to the control point by means of spline interpolation, specifically:
processing control points of the three-layer esophagus structure and the tumor growth contour by using a cubic B-spline interpolation method respectively to obtain a parameterized esophagus structure contour line and a parameterized tumor contour line;
Calculating the normal vector of each curve point on the esophageal structure contour line and the tumor contour line to obtain a unit vector of a normal;
and forming a cubic B-spline curve of the esophagus and the tumor by the esophageal structure contour line, the tumor contour line and the unit vector of the normal.
6. The method for predicting esophageal structure and tumor contour and stage according to claim 1, wherein the tumor stage result is obtained by calculating the relative distance between the cubic B-spline curves, specifically:
constructing Euclidean distances between all points on the curve according to a first contour line of tumor growth outwards in the cubic B-spline curve and a second contour line of submucosal layer to obtain a distance matrix;
Performing distance calculation and normal vector judgment on the first contour line to the second contour line according to the distance matrix in an index mode to obtain a maximum value of distances from each point on the first contour line to the nearest point on the second contour line, and taking the maximum value of distances as a relative distance value set;
and in the numerical value set of the relative distance, acquiring a tumor stage result according to the numerical value of the relative distance between the outline of the tumor outward growth and the submucosa and the numerical value of the relative distance between the outline of the tumor outward growth and the adventitia layer.
7. The method for predicting esophageal structure and tumor contour and stage according to claim 1, wherein the feature extraction is performed on the original ultrasonic endoscopic picture by using a preset model to obtain full-size features, specifically:
identifying and cutting an effective area boundary box of the original ultrasonic endoscope picture by using an effective area detector of the preset model to obtain an effective ultrasonic endoscope picture;
And performing feature extraction and up-sampling on the effective ultrasonic endoscope picture by using a feature encoder and a feature decoder of the preset model to obtain the full-size feature.
8. The method for predicting esophageal structure and tumor contour and stage of claim 1, wherein said pre-set model is a deep learning model composed of different devices and modules, further comprising:
obtaining a DICE loss function by measuring the consistency between the segmentation mask predicted by the preset model and the real segmentation mask;
Obtaining a polygon loss function by measuring the error between a predicted node and an actual node of the preset model;
obtaining a classification loss function by quantifying the error between the T stage label predicted by the preset model and the actual T stage label;
and training the preset model according to the DICE loss function, the polygonal loss function and the classification loss function by using a preset training set and a mask of the tumor and the esophageal wall to obtain an optimized preset model.
9. The device for predicting the esophagus structure, the tumor outline and the stage is characterized by comprising a data module, a characteristic module, a node module, a curve module, a stage module and an integration module;
the data module is used for acquiring an original ultrasonic endoscope picture;
the feature module is used for carrying out feature extraction on the original ultrasonic endoscope picture by using a preset model to obtain full-size features;
The node module is used for carrying out structural splitting and node reconstruction on the full-size characteristics by using the preset model to obtain a three-layer esophagus structure and a control point of a tumor growth contour;
the curve module is used for generating a cubic B-spline curve of the esophagus and the tumor according to the control point in a spline interpolation mode;
The staging module is used for obtaining a tumor staging result by calculating the relative distance between the cubic B-spline curves;
the integration module is used for forming a predicted result of an esophagus structure, a tumor contour line and tumor stage by the cubic B-spline curve and the tumor stage result;
The preset model is a deep learning model formed by different devices and modules.
10. A storage medium, characterized in that the storage medium has stored thereon a computer program, which is called and executed by a computer to implement a method for predicting esophageal structure and tumor contour and stage according to any one of the preceding claims 1-8.
CN202410349867.1A 2024-03-26 2024-03-26 Esophageal structure, tumor contour and stage prediction method, device and medium Active CN118212459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410349867.1A CN118212459B (en) 2024-03-26 2024-03-26 Esophageal structure, tumor contour and stage prediction method, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410349867.1A CN118212459B (en) 2024-03-26 2024-03-26 Esophageal structure, tumor contour and stage prediction method, device and medium

Publications (2)

Publication Number Publication Date
CN118212459A true CN118212459A (en) 2024-06-18
CN118212459B CN118212459B (en) 2024-10-18

Family

ID=91446417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410349867.1A Active CN118212459B (en) 2024-03-26 2024-03-26 Esophageal structure, tumor contour and stage prediction method, device and medium

Country Status (1)

Country Link
CN (1) CN118212459B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10650520B1 (en) * 2017-06-06 2020-05-12 PathAI, Inc. Systems and methods for training a statistical model to predict tissue characteristics for a pathology image
US20220020496A1 (en) * 2018-11-21 2022-01-20 Ai Medical Service Inc. Diagnostic assistance method, diagnostic assistance system, diagnostic assistance program, and computer-readable recording medium storing therein diagnostic assistance program for disease based on endoscopic image of digestive organ
CN115311268A (en) * 2022-10-10 2022-11-08 武汉楚精灵医疗科技有限公司 Esophagus endoscope image identification method and device
CN117427286A (en) * 2023-10-30 2024-01-23 南京鼓楼医院 Tumor radiotherapy target area identification method, system and equipment based on energy spectrum CT

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王士旭: "人工智能技术在上消化道内镜检查部位识别及早期食管癌性质识别中的相关研究", 《中国博士学位论文全文数据库》(电子期刊), 15 January 2023 (2023-01-15) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220319002A1 (en) * 2021-04-05 2022-10-06 Nec Laboratories America, Inc. Tumor cell isolines

Also Published As

Publication number Publication date
CN118212459B (en) 2024-10-18

Similar Documents

Publication Publication Date Title
Almajalid et al. Development of a deep-learning-based method for breast ultrasound image segmentation
CN113554669B (en) Unet network brain tumor MRI image segmentation method with improved attention module
CN110599528A (en) Unsupervised three-dimensional medical image registration method and system based on neural network
CN112258514B (en) Segmentation method of pulmonary blood vessels of CT (computed tomography) image
CN113112609A (en) Navigation method and system for lung biopsy bronchoscope
WO2024082441A1 (en) Deep learning-based multi-modal image registration method and system, and medium
CN111179237A (en) A kind of liver and liver tumor image segmentation method and device
CN118212459B (en) Esophageal structure, tumor contour and stage prediction method, device and medium
WO2024169341A1 (en) Registration method for multimodality image-guided radiotherapy
CN112381846A (en) Ultrasonic thyroid nodule segmentation method based on asymmetric network
CN117911418B (en) Lesion detection method, system and storage medium based on improved YOLO algorithm
CN116228792A (en) A medical image segmentation method, system and electronic device
CN116258933A (en) Medical image segmentation device based on global information perception
CN117975002A (en) Weak supervision image segmentation method based on multi-scale pseudo tag fusion
CN118229981A (en) CT image tumor segmentation method, device and medium combining convolutional network and transducer
CN112634265A (en) Method and system for constructing and segmenting fully-automatic pancreas segmentation model based on DNN (deep neural network)
CN115619706A (en) Pulmonary nodule detection method based on deep learning
CN118918117B (en) Multi-sequence image segmentation method based on geometric iteration optimization fusion
CN118552728A (en) Deep learning skin lesion segmentation method based on multi-pooling fusion and boundary perception
CN111667488B (en) A medical image segmentation method based on multi-angle U-Net
WO2024169353A1 (en) Intelligent delineation method for cone-beam ct image target volume based on regional narrow-band propagation
CN118037791A (en) Construction method and application of multi-mode three-dimensional medical image segmentation registration model
CN115409837B (en) Endometrial cancer CTV automatic delineation method based on multi-modal CT image
CN116630531A (en) A rib 3D reconstruction method and system based on point cloud upsampling
CN114240844A (en) An unsupervised method for keypoint localization and object detection in medical images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20250228

Address after: Room 103, No. 87-1 Dongpu Avenue, Tianhe District, Guangzhou City, Guangdong Province 510000

Patentee after: Guangdong Hede Supply Chain Co.,Ltd.

Country or region after: China

Address before: No. 651, Dongfeng East Road, Yuexiu District, Guangzhou, Guangdong 510060

Patentee before: SUN YAT SEN University CANCER CENTER (SUN YAT SEN University AFFILIATED TO CANCER CENTER SUN YAT SEN UNIVERSITY CANCER INSTITUTE)

Country or region before: China

TR01 Transfer of patent right