
CN115222746B - Space-time fusion-based multi-task heart substructure segmentation method - Google Patents


Info

Publication number
CN115222746B
CN115222746B
Authority
CN
China
Prior art keywords
segmentation
substructure
layer
slice
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210981835.4A
Other languages
Chinese (zh)
Other versions
CN115222746A (en)
Inventor
卢山富
袁博
颜子夜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Perception Vision Medical Technology Co ltd
Zhejiang Boshi Medical Technology Co ltd
Original Assignee
Perception Vision Medical Technology Co ltd
Zhejiang Boshi Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Perception Vision Medical Technology Co ltd and Zhejiang Boshi Medical Technology Co ltd
Priority to CN202210981835.4A
Publication of CN115222746A
Application granted
Publication of CN115222746B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30048 Heart; Cardiac

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-task cardiac substructure segmentation method based on spatio-temporal fusion. Features are extracted from a target segmentation slice and its adjacent slices and fused, so as to capture temporal associations and strengthen the features of the target segmentation slice. After processing by the temporal feature fusion module, a Unet segmentation network coarsely segments the temporally fused target slice to obtain a segmentation result; a substructure association module then explores the interrelationships among the structures and outputs the substructure segmentation result. In addition, an image reconstruction auxiliary task module builds a multi-task system on top of the coarsely segmented structures, enhancing the feature extraction and discrimination capabilities of the model and thereby improving the segmentation performance for the cardiac substructures. The invention combines the temporal and spatial information of CT images with multi-task constraints to automatically segment a patient's cardiac substructures, laying a foundation for the formulation of subsequent treatment plans.

Description

Space-time fusion-based multi-task heart substructure segmentation method
Technical Field
The invention relates to the technical field of image segmentation, and in particular to a multi-task cardiac substructure segmentation method based on spatio-temporal fusion.
Background
Radiotherapy is a common modality for cancer treatment, but its side effects can manifest years later and reduce the overall benefit of treatment. Cardiac toxicity has been observed, for example, in patients treated for lung cancer, lymphoma and breast cancer; in lung cancer patients, both radiation and chemotherapy can negatively affect the heart and worsen prognosis. The damage is not limited to the heart muscle: radiation may also cause peripheral arterial, coronary and carotid artery disease. Most current radiotherapy reports use the entire pericardium as the region of interest, typically via the dose-volume histogram (DVH) of the pericardium. However, the relationship between the dose delivered to individual cardiac substructures and subsequent toxicity is not well understood, and this question has attracted great interest among researchers. Studying the correlation between substructure dose and subsequent toxicity requires, as a precondition, accurate segmentation of the cardiac substructures.
In conventional practice, a physician typically formulates a radiotherapy plan based on the delineation of the cardiac substructures on non-enhanced CT. Segmenting the cardiac substructures requires manual delineation on the CT images by a clinician with the relevant clinical knowledge, but because non-enhanced CT does not visualize the cardiac substructures clearly, this costs the physician considerable time and effort. Although contrast-enhanced CT and MR (magnetic resonance) images offer higher definition than non-enhanced CT, they are generally not used for treatment planning because of modality issues. In addition, cardiac and respiratory motion makes manually delineating these poorly contrasted substructures a significant challenge.
With the development of machine learning and deep learning, researchers have attempted to apply these new techniques to cardiac substructure segmentation, for example using common deep-learning image segmentation networks such as Unet, VNet, DUnet and Attention-Unet. These common methods have drawbacks. For a 2D network, the inter-layer relationships between CT slices are easily lost. For a 3D network, current hardware cannot hold the full 3D CT volume of a patient in memory at once, so the common practice is to randomly crop the 3D volume and feed the cropped voxel blocks to the network for feature learning. Although this addresses the hardware constraint, random cropping destroys structural integrity and cannot express the structural features of each cardiac substructure well. Furthermore, despite the close anatomical relationships between the cardiac substructures, existing methods do not take the interdependence between individual substructures into account.
Disclosure of Invention
To solve these problems, the invention provides a multi-task cardiac substructure segmentation method based on spatio-temporal fusion, which combines the temporal and spatial information of CT images while applying multi-task constraints to automatically segment a patient's cardiac substructures, laying a foundation for the formulation of subsequent treatment plans.
To this end, the technical scheme of the invention is as follows: a multi-task cardiac substructure segmentation method based on spatio-temporal fusion, comprising the following steps:
1) Acquiring a cardiac CT image of a patient, and randomly extracting a target segmentation slice and adjacent auxiliary slices;
2) Extracting features from the extracted target segmentation slice layer and auxiliary slice layers using a ResNet network, and applying convolution operations to the extracted multi-layer slice features;
3) Based on an attention mechanism, convolving the multi-layer slice features of the auxiliary slice layers and of the target segmentation slice layer to obtain, respectively, a key dictionary and a query dictionary as multi-dimensional vector representations of the image features;
4) Performing a matrix product between the computed key dictionary and query dictionary to obtain a relevance result;
5) Connecting the relevance result from step 4) with the target segmentation slice features extracted in step 2), and outputting the temporally fused image features after convolutional fusion;
6) Coarsely segmenting the image features output in step 5) with a Unet network structure, and outputting the coarse segmentation result;
7) Adjusting the segmentation result into the form T1, T2, ..., Tn, representing the segmentation feature maps of the different substructures;
8) Inputting the coarsely segmented feature maps of the different substructures into an LSTM network for feature learning;
9) Outputting the anatomically associated substructure segmentation feature maps Y1, Y2, ..., Yn;
10) Realizing the image reconstruction auxiliary task: extracting anatomical factors of the target segmentation slice from the coarse segmentation result obtained in step 6); extracting a modal decomposition vector from the anatomical factors and the temporally fused image features via a modal encoder; and reconstructing the target segmentation slice image from the modal decomposition vector and the anatomical factors.
Preferably, said step 10) comprises the following steps:
i) Obtaining the coarse segmentation result of the image features obtained in step 6);
ii) Performing anatomical decomposition on the input coarse segmentation result with an anatomical factor encoder to generate anatomical decomposition factors;
iii) Inputting the temporally fused image obtained in step 5) and the anatomical decomposition factors obtained in step ii) into a substructure modal decomposition encoder for modal decomposition, generating modal decomposition vectors that measure the component coefficient of each substructure with respect to the original image;
iv) Weighting and summing the modal decomposition vectors and the anatomical decomposition factors to obtain a fused image feature map;
v) Applying multi-layer convolution operations to the fused image feature map to obtain the final reconstructed image.
Preferably, the anatomical factor encoder and the substructure modal decomposition encoder are each composed of a plurality of CNN modules, where each CNN module comprises a 3x3 convolution layer, a Batch Norm operation and a ReLU activation function.
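A minimal PyTorch sketch of one such CNN module, assuming 2D feature maps; the class name and channel arguments are illustrative:

```python
import torch.nn as nn

class ConvBNReLU(nn.Module):
    """One CNN module as described: 3x3 convolution -> Batch Norm -> ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # 3x3 convolution layer
            nn.BatchNorm2d(out_ch),                              # Batch Norm
            nn.ReLU(inplace=True),                               # ReLU activation
        )

    def forward(self, x):
        return self.block(x)
```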
Preferably, said step 1) comprises the following steps:
a1) Acquiring cardiac CT data of a patient;
a2) According to the data labels, selecting only slice layers containing cardiac substructures as training data;
a3) Randomly extracting 7 consecutive slices from each CT volume as the model input, where the 4th layer is the target segmentation slice and layers 1-3 and 5-7 are adjacent auxiliary slice layers, used to extract temporal features and improve the feature expression of the 4th slice.
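A minimal sketch of this sampling step in Python, assuming the CT volume is a NumPy array of shape (depth, height, width) with at least 7 slices; the function name is illustrative:

```python
import numpy as np

def sample_slices(volume, rng=None):
    """Randomly pick 7 consecutive slices; the 4th is the target slice,
    layers 1-3 and 5-7 are the adjacent auxiliary slices."""
    rng = rng or np.random.default_rng()
    depth = volume.shape[0]
    start = rng.integers(0, depth - 7 + 1)                # valid window start
    stack = volume[start:start + 7]                       # shape (7, H, W)
    target = stack[3]                                     # 4th layer
    aux = np.concatenate([stack[:3], stack[4:]], axis=0)  # layers 1-3 and 5-7
    return target, aux
```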
Preferably, the left side of the Unet network structure is composed alternately of first convolution layers and pooling layers, where the 3x3 convolution layers perform feature extraction and the pooling layers reduce dimensionality; the right side of the Unet network structure is composed alternately of second convolution layers and upsampling layers, where the upsampling layers restore dimensionality, and the result is finally output through a 1x1 convolution layer.
Compared with the prior art, the invention has the following beneficial effects:
1. Using only a 2D network, the interrelationships among three-dimensional slices are exploited to improve the learning capacity of the model while reducing the dependence on hardware computation;
2. The added substructure anatomical association module explores the interdependence of the substructures in the spatial dimension and increases the constraint capacity of the model;
3. A multi-task system is built on the coarsely segmented structures through the image reconstruction auxiliary task module, i.e. the segmentation task and the reconstruction task are combined and mutually promoting, which enhances the feature extraction and discrimination capacity of the model, thereby improving the segmentation performance for the cardiac substructures and advancing cardiac substructure segmentation work.
Drawings
The following is a further detailed description of embodiments of the invention with reference to the drawings.
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a block diagram of a multi-slice sequential fusion module;
FIG. 3 is a block diagram of a coarse segmentation module;
FIG. 4 is a block diagram of a sub-structural anatomic correlation module;
FIG. 5 is a block diagram of the image reconstruction auxiliary task module.
Detailed Description
As shown in FIG. 1, the multi-task cardiac substructure segmentation method based on spatio-temporal fusion of this embodiment uses a multi-layer slice temporal fusion module to extract and fuse the features of a target segmentation slice and its adjacent slices, capturing temporal associations and enhancing the features of the target segmentation slice. After processing by the temporal feature fusion module, a Unet segmentation network coarsely segments the temporally fused target slice to obtain a segmentation result; a substructure anatomical association module then explores the interrelationships among the structures and outputs the substructure segmentation result. In addition, an image reconstruction auxiliary task module builds a multi-task system on the coarsely segmented structures, i.e. the segmentation task and the reconstruction task are combined and mutually promoting, which enhances the feature extraction and discrimination capacity of the model and thereby improves the segmentation performance for the cardiac substructures.
The structure of the multi-layer slice temporal fusion module is shown in FIG. 2; it operates as follows:
a1) Acquiring cardiac CT data of a patient;
a2) According to the data labels, selecting only slice layers containing cardiac substructures as training data;
a3) Randomly extracting 7 consecutive slices from each CT volume as the model input, where the 4th layer is the target segmentation slice and layers 1-3 and 5-7 are adjacent auxiliary slice layers, used to extract temporal features and improve the feature expression of the 4th slice;
a4) Using two ResNet networks with identical structure to extract features from the target segmentation slice layer and the auxiliary slice layers respectively, and processing the extracted multi-layer slice features with several Conv (convolution) operations;
a5) After the multi-level convolution of the extracted multi-layer slice features, and based on an attention mechanism, applying a convolution to the adjacent auxiliary slice layers to compute a key dictionary while applying another convolution to the target segmentation slice layer to compute a query dictionary; the key dictionary and the query dictionary are multi-dimensional vector representations of the image features of the auxiliary slice layers and of the target segmentation slice layer, respectively;
a6) Performing a matrix product between the computed key dictionary and query dictionary to express the relevance result;
a7) Connecting the relevance result with the extracted features of the target segmentation slice layer, and outputting the temporally fused image features after convolutional fusion.
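A minimal PyTorch sketch of steps a5)-a7), assuming the six auxiliary slice features have already been merged by the preceding convolutions into a single feature map of the same shape as the target features, and that the key and query convolutions are 1x1; the softmax normalization of the relevance matrix is an added assumption, since the text specifies only a matrix product:

```python
import torch
import torch.nn as nn

class TemporalFusion(nn.Module):
    """Key/query attention between auxiliary and target slice features,
    followed by concatenation and a fusing convolution."""
    def __init__(self, ch=256):
        super().__init__()
        self.key_conv = nn.Conv2d(ch, ch, 1)    # key dictionary from auxiliary features
        self.query_conv = nn.Conv2d(ch, ch, 1)  # query dictionary from target features
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, target_feat, aux_feat):
        b, c, h, w = target_feat.shape
        k = self.key_conv(aux_feat).flatten(2)              # (B, C, HW)
        q = self.query_conv(target_feat).flatten(2)         # (B, C, HW)
        rel = torch.softmax(q.transpose(1, 2) @ k, dim=-1)  # (B, HW, HW) relevance
        ctx = (k @ rel.transpose(1, 2)).view(b, c, h, w)    # relevance-weighted context
        return self.fuse(torch.cat([ctx, target_feat], dim=1))  # temporally fused features
```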
The structure of the coarse segmentation module is shown in FIG. 3; it operates as follows:
b1) Acquiring the image features output by the multi-layer slice temporal fusion module;
b2) Coarsely segmenting the image through a Unet network structure whose output has n+1 classes, where n is the number of cardiac substructures; the left side of the Unet network structure alternates a first convolution layer and a pooling layer 3 times, where the 3x3 convolution layers extract features and the pooling layers reduce dimensionality; the left and right sides of the Unet network structure together perform n+1 convolution operations, the right side alternating an upsampling layer and a second convolution layer, where the upsampling layers restore dimensionality; the result is finally output through a 1x1 convolution layer;
b3) Outputting the coarse segmentation result; the segmentation results are the individual substructure maps extracted from the original image.
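A minimal PyTorch sketch of such a coarse segmentation Unet, with three conv+pool stages on the left, three upsample+conv stages on the right joined by skip connections, and a final 1x1 convolution producing n+1 classes; the channel widths and the default substructure count are assumptions:

```python
import torch
import torch.nn as nn

class CoarseUNet(nn.Module):
    """Unet-style coarse segmentation head over the fused slice features."""
    def __init__(self, in_ch=256, n_sub=7, base=64):
        super().__init__()
        def conv(i, o):
            return nn.Sequential(nn.Conv2d(i, o, 3, padding=1), nn.ReLU(inplace=True))
        self.enc = nn.ModuleList([conv(in_ch, base), conv(base, 2 * base), conv(2 * base, 4 * base)])
        self.pool = nn.MaxPool2d(2)    # dimensionality reduction on the left side
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.dec = nn.ModuleList([conv(8 * base, 2 * base), conv(4 * base, base), conv(2 * base, base)])
        self.head = nn.Conv2d(base, n_sub + 1, kernel_size=1)  # n substructures + background

    def forward(self, x):              # spatial size must be divisible by 8
        skips = []
        for enc in self.enc:
            x = enc(x)
            skips.append(x)
            x = self.pool(x)
        for dec, skip in zip(self.dec, reversed(skips)):
            x = dec(torch.cat([self.up(x), skip], dim=1))
        return self.head(x)            # (B, n+1, H, W) coarse segmentation logits
```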
The structure of the substructure anatomical association module is shown in FIG. 4; it operates as follows:
c1) Obtaining the coarse segmentation result of the cardiac substructures;
c2) Through network learning, adjusting the segmentation result into the form T1, T2, ..., Tn, representing the segmentation feature maps of the different substructures;
c3) Inputting the coarsely segmented feature maps of the different substructures into a defined LSTM (long short-term memory) network for feature learning; the LSTM network consists of a number of RNN (recurrent neural network) units;
c4) Outputting the anatomically associated substructure segmentation feature maps Y1, Y2, ..., Yn; this is the final output of this embodiment.
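A minimal PyTorch sketch of steps c3)-c4), treating the n substructure maps T1, T2, ..., Tn as a sequence over which an LSTM learns the anatomical associations; flattening each map into one sequence step and using a bidirectional LSTM are assumptions, practical only for modestly sized maps:

```python
import torch
import torch.nn as nn

class SubstructureAssociation(nn.Module):
    """LSTM over the per-substructure feature maps, emitting Y1..Yn."""
    def __init__(self, h, w, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(input_size=h * w, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, h * w)

    def forward(self, t_maps):                  # t_maps: (B, n, H, W)
        b, n, h, w = t_maps.shape
        seq, _ = self.lstm(t_maps.flatten(2))   # one sequence step per substructure
        return self.proj(seq).view(b, n, h, w)  # associated maps Y1..Yn
```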
To improve the segmentation performance for the cardiac substructures, an image reconstruction auxiliary task module is added; its structure is shown in FIG. 5 and it operates as follows:
d1) Obtaining the coarse segmentation result of the cardiac substructures;
d2) Performing anatomical decomposition on the input through an anatomical factor encoder to generate anatomical decomposition factors, where each anatomical factor takes the form of per-substructure image features of the same size as the input original image; the anatomical factor encoder comprises 7 CNN modules, each containing a 3x3 convolution layer, a Batch Norm operation and a ReLU activation function.
The coarse segmentation result is input into the first CNN module, where the input passes through the 3x3 convolution layer and the Batch Norm operation in sequence and is then linearly rectified by the ReLU activation function. The output of the first CNN module is fed to the second and third CNN modules; the output of the third CNN module is fed to the fourth and fifth CNN modules; the output of the fifth CNN module is fed to the sixth CNN module; the outputs of the sixth and third CNN modules are fed together into the fourth CNN module; the outputs of the fourth and first CNN modules are fed together into the second CNN module; and the output of the second CNN module is fed to the seventh CNN module, which outputs the anatomical decomposition factors. In clinical anatomy each organ is independent, so in a medical image each organ can be decomposed from the original image, and the factors can be recombined to form the original image;
d3) Inputting the temporally fused image and the anatomical decomposition factors into a substructure modal decomposition encoder for modal decomposition, generating modal decomposition vectors that measure the component coefficient of each substructure with respect to the original image; the substructure modal decomposition encoder is composed of 3 CNN modules, each comprising a 3x3 convolution layer, a Batch Norm operation and a ReLU activation function; each module takes the output of the previous module as input and, after the 3 repetitions, the modal decomposition vector is output, in a form such as [0.2, 0.2, 0.1, ...];
d4) Weighting and summing the modal decomposition vectors and the anatomical decomposition factors to obtain a fused image feature map;
d5) Processing the fused feature map with a slice image reconstruction decoder to obtain the final reconstructed image; the slice image reconstruction decoder comprises 3 convolution layer (Conv) operations.
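A minimal PyTorch sketch of steps d2)-d5), simplifying the anatomical factor encoder to a plain stack of CNN modules (the embodiment wires its seven modules with the cross connections described above); the channel counts are assumptions:

```python
import torch
import torch.nn as nn

def cnn_block(i, o):
    # One CNN module: 3x3 convolution -> Batch Norm -> ReLU
    return nn.Sequential(nn.Conv2d(i, o, 3, padding=1),
                         nn.BatchNorm2d(o), nn.ReLU(inplace=True))

class ReconstructionTask(nn.Module):
    """Anatomical factors from the coarse segmentation, per-substructure
    component coefficients from the modal encoder (e.g. [0.2, 0.2, 0.1, ...]),
    a weighted sum, then a 3-convolution reconstruction decoder."""
    def __init__(self, seg_ch, fused_ch, n_sub):
        super().__init__()
        self.factor_enc = nn.Sequential(
            *[cnn_block(seg_ch if i == 0 else 64, 64) for i in range(6)],
            nn.Conv2d(64, n_sub, 3, padding=1))  # one factor map per substructure
        self.modal_enc = nn.Sequential(
            cnn_block(fused_ch + n_sub, 64), cnn_block(64, 64), cnn_block(64, n_sub),
            nn.AdaptiveAvgPool2d(1))             # (B, n, 1, 1) component coefficients
        self.decoder = nn.Sequential(cnn_block(1, 32), cnn_block(32, 32),
                                     nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, coarse_seg, fused_img):
        factors = self.factor_enc(coarse_seg)                        # (B, n, H, W)
        coeffs = self.modal_enc(torch.cat([fused_img, factors], 1))  # (B, n, 1, 1)
        mixed = (coeffs * factors).sum(dim=1, keepdim=True)          # weighted sum
        return self.decoder(mixed)                                   # reconstructed slice
```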
Both the image reconstruction auxiliary task and the substructure anatomical association module build on the coarse segmentation module, so both require its coarse segmentation result; the two are thus connected through the coarse segmentation module. The reconstruction task helps the coarse segmentation module obtain a better coarse segmentation result, and a better coarse segmentation result helps the substructure anatomical association module generate better associations, thereby promoting the final segmentation.
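A minimal sketch of how the two tasks could be combined into one training objective; the specific losses and the weighting are assumptions, since the embodiment states only that the tasks are combined and mutually promoting:

```python
import torch.nn.functional as F

def multitask_loss(seg_logits, seg_labels, recon, target_slice, w_recon=0.5):
    """Segmentation cross-entropy plus a weighted L1 reconstruction term."""
    loss_seg = F.cross_entropy(seg_logits, seg_labels)  # (B, n+1, H, W) vs (B, H, W)
    loss_recon = F.l1_loss(recon, target_slice)         # reconstructed vs original slice
    return loss_seg + w_recon * loss_recon
```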
The above description is only a preferred embodiment of the present invention, and the protection scope of the invention is not limited to the above examples; all technical solutions falling under the concept of the invention belong to its protection scope. It should be noted that modifications and adaptations that do not depart from the principles of the invention may occur to those skilled in the art, and these are also intended to fall within the protection scope of the invention.

Claims (4)

1. A multi-task cardiac substructure segmentation method based on spatio-temporal fusion, characterized by comprising the following steps:
1) Acquiring a cardiac CT image of a patient, and randomly extracting a target segmentation slice and adjacent auxiliary slices;
2) Extracting features from the extracted target segmentation slice layer and auxiliary slice layers using a ResNet network, and applying convolution operations to the extracted multi-layer slice features;
3) Based on an attention mechanism, convolving the multi-layer slice features of the auxiliary slice layers and of the target segmentation slice layer to obtain, respectively, a key dictionary and a query dictionary as multi-dimensional vector representations of the image features;
4) Performing a matrix product between the computed key dictionary and query dictionary to obtain a relevance result;
5) Connecting the relevance result obtained in step 4) with the target segmentation slice features extracted in step 2), and outputting the temporally fused image features after convolutional fusion;
6) Coarsely segmenting the image features output in step 5) with a Unet network structure, and outputting the coarse segmentation result;
7) Adjusting the segmentation result into the form T1, T2, ..., Tn, representing the segmentation feature maps of the different substructures;
8) Inputting the coarsely segmented feature maps of the different substructures into an LSTM network for feature learning;
9) Outputting the anatomically associated substructure segmentation feature maps Y1, Y2, ..., Yn;
10) Realizing the image reconstruction auxiliary task: extracting anatomical factors of the target segmentation slice from the coarse segmentation result obtained in step 6); extracting a modal decomposition vector from the anatomical factors and the temporally fused image features via a modal encoder; and reconstructing the target segmentation slice image from the modal decomposition vector and the anatomical factors; said step 10) comprises the following steps:
i) Obtaining the coarse segmentation result of the image features obtained in step 6);
ii) Performing anatomical decomposition on the input coarse segmentation result with an anatomical factor encoder to generate anatomical decomposition factors;
iii) Inputting the temporally fused image obtained in step 5) and the anatomical decomposition factors obtained in step ii) into a substructure modal decomposition encoder for modal decomposition, generating modal decomposition vectors that measure the component coefficient of each substructure with respect to the original image;
iv) Weighting and summing the modal decomposition vectors and the anatomical decomposition factors to obtain a fused image feature map;
v) Applying multi-layer convolution operations to the fused image feature map to obtain the final reconstructed image.
2. The multi-task cardiac substructure segmentation method based on spatio-temporal fusion according to claim 1, wherein: the anatomical factor encoder and the substructure modal decomposition encoder are each composed of a plurality of CNN modules, where each CNN module comprises a 3x3 convolution layer, a Batch Norm operation and a ReLU activation function.
3. The multi-task cardiac substructure segmentation method based on spatio-temporal fusion according to claim 1, wherein: said step 1) comprises the following steps:
a1) Acquiring cardiac CT data of a patient;
a2) According to the data labels, selecting only slice layers containing cardiac substructures as training data;
a3) Randomly extracting 7 consecutive slices from each CT volume as the model input, where the 4th layer is the target segmentation slice and layers 1-3 and 5-7 are adjacent auxiliary slice layers, used to extract temporal features and improve the feature expression of the 4th slice.
4. The multi-task cardiac substructure segmentation method based on spatio-temporal fusion according to claim 1, wherein: the left side of the Unet network structure is composed alternately of first convolution layers and pooling layers, where the 3x3 convolution layers perform feature extraction and the pooling layers reduce dimensionality; the right side of the Unet network structure is composed alternately of second convolution layers and upsampling layers, where the upsampling layers restore dimensionality, and the result is finally output through a 1x1 convolution layer.
CN202210981835.4A 2022-08-16 2022-08-16 Space-time fusion-based multi-task heart substructure segmentation method Active CN115222746B (en)

Priority Applications (1)

Application Number: CN202210981835.4A; Priority Date: 2022-08-16; Filing Date: 2022-08-16; Title: Space-time fusion-based multi-task heart substructure segmentation method

Publications (2)

Publication Number: CN115222746A (en); Publication Date: 2022-10-21
Publication Number: CN115222746B (en); Publication Date: 2024-08-06

Family

ID=83616147

Family Applications (1)

Application Number: CN202210981835.4A; Title: Space-time fusion-based multi-task heart substructure segmentation method; Status: Active; Priority Date: 2022-08-16; Filing Date: 2022-08-16

Country Status (1)

Country: CN; Publication: CN115222746B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115713535B (en) * 2022-11-07 2024-05-14 阿里巴巴(中国)有限公司 Image segmentation model determination method and image segmentation method


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020026223A1 (en) * 2018-07-29 2020-02-06 Zebra Medical Vision Ltd. Systems and methods for automated detection of visual objects in medical images
EP3977360A1 (en) * 2019-05-29 2022-04-06 F. Hoffmann-La Roche AG Integrated neural networks for determining protocol configurations
WO2021030629A1 (en) * 2019-08-14 2021-02-18 Genentech, Inc. Three dimensional object segmentation of medical images localized with object detection
US20220076133A1 (en) * 2020-09-04 2022-03-10 Nvidia Corporation Global federated training for neural networks

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947609A (en) * 2021-10-12 2022-01-18 中南林业科技大学 Deep learning network structure and multi-label aortic dissection CT image segmentation method



Legal Events

Code: PB01; Title: Publication
Code: SE01; Title: Entry into force of request for substantive examination
Code: GR01; Title: Patent grant