
CN116563224B - Image histology placenta implantation prediction method and device based on depth semantic features - Google Patents


Info

Publication number
CN116563224B
Authority
CN
China
Prior art keywords: features, feature extraction, placenta, deep semantic
Prior art date
Legal status: Active (the status is an assumption and is not a legal conclusion)
Application number
CN202310393731.6A
Other languages
Chinese (zh)
Other versions
CN116563224A (en)
Inventor
郑昌业
钟健
黄炳升
曹康养
张畅
岳沛言
吕捷耿
邹玉坚
刘碧华
许晓阳
Current Assignee
Shenzhen University
Dongguan Peoples Hospital
Original Assignee
Shenzhen University
Dongguan Peoples Hospital
Priority date
Filing date
Publication date
Application filed by Shenzhen University and Dongguan Peoples Hospital
Priority to CN202310393731.6A
Publication of CN116563224A
Application granted
Publication of CN116563224B
Status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20092: Interactive image processing based on input by user
    • G06T 2207/20104: Interactive definition of region of interest [ROI]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract


The present application discloses an image histology (radiomics) placenta implantation prediction method and device based on deep semantic features. The method comprises: acquiring placenta image data; extracting deep semantic features and image histology features of the placenta image data; screening the deep semantic features and the image histology features to obtain target features; and predicting the implantation category corresponding to the placenta image data based on the target features. By combining image histology features with deep semantic features, the application obtains feature information with rich dimensional levels, and then predicts placenta implantation based on this feature information, which can improve the accuracy of placenta implantation prediction.

Description

Image histology placenta implantation prediction method and device based on depth semantic features
Technical Field
The application relates to the technical field of biomedical engineering, in particular to an image histology placenta implantation prediction method and device based on deep semantic features.
Background
Placenta accreta spectrum (PAS) refers to a group of disorders caused by hypoplasia of the decidua between the placenta and uterus, with abnormal invasion of trophoblasts into the myometrium. Pregnant women with PAS face serious postpartum risks, such as retained placenta, life-threatening bleeding, and even death. The disease is also associated with premature birth, low-birth-weight infants, and other complications.
Ultrasonic imaging is widely used for PAS identification because of its flexibility, low cost, and safety for mother and infant. However, when ultrasound images are used for PAS identification, the detection rate is reduced because ultrasonic imaging depends heavily on the operator's skill, has poor reproducibility, and is easily disturbed by factors such as maternal obesity, intestinal gas artifacts, and the fetal skull.
There is thus a need for improvement in the art.
Disclosure of Invention
The application aims to solve the technical problem of providing an image histology placenta implantation prediction method and device based on depth semantic features, in view of the defects of the prior art.
In order to solve the above technical problems, a first aspect of the embodiments of the present application provides an image histology placenta implantation prediction method based on depth semantic features, the method comprising:
acquiring placenta image data;
extracting depth semantic features and image histology features of the placenta image data;
Screening the depth semantic features and the image histology features to obtain target features;
And predicting an implantation category corresponding to the placenta image data based on the target feature.
In the depth semantic feature based image histology placenta implantation prediction method, the placenta image data is placenta MRI data.
In the depth semantic feature based image histology placenta implantation prediction method, the step of screening the depth semantic features and the image histology features to obtain target features specifically comprises:
performing a variance homogeneity test on the depth semantic features and the image histology features respectively, to obtain a verification value for each depth semantic feature and a verification value for each image histology feature;
screening the depth semantic features based on the verification values of the depth semantic features to obtain target depth semantic features, and screening the image histology features based on the verification values of the image histology features to obtain target image histology features;
and splicing the target depth semantic features and the target image histology features to obtain the target features.
In the depth semantic feature based image histology placenta implantation prediction method, extracting the deep semantic features of the placenta image data specifically comprises the following steps:
inputting the placenta image data into a preset semantic feature extraction module, and determining the deep semantic features of the placenta image data through the semantic feature extraction module;
The semantic feature extraction module comprises an encoder and an adaptive average pooling layer, wherein the encoder comprises a first feature extraction unit and a plurality of cascaded second feature extraction units, the first feature extraction unit is connected with the second feature extraction unit positioned at the forefront, and the second feature extraction unit positioned at the last is connected with the adaptive average pooling layer; the second feature extraction unit comprises a maximum pooling layer and a first feature extraction unit; the first feature extraction unit comprises two cascaded convolution blocks, wherein each convolution block comprises a convolution layer, a batch normalization layer and an activation function layer which are cascaded in sequence.
In the depth semantic feature based image histology placenta implantation prediction method, the determination process of the semantic feature extraction module specifically comprises the following steps:
training a first preset network model based on a preset segmentation training set to obtain a segmentation network model, wherein the segmentation network model comprises the encoder;
And extracting an encoder of the segmentation network model, and connecting the encoder with an adaptive average pooling layer to form the semantic feature extraction module.
In the depth semantic feature based image histology placenta implantation prediction method, the segmentation network model further comprises a decoder; the decoder comprises a first up-sampling unit, a plurality of cascaded second up-sampling units, and a convolution unit; the first up-sampling unit is connected with the last second feature extraction unit; the first up-sampling unit is connected with the foremost second up-sampling unit, and the last second up-sampling unit is connected with the convolution unit; each second feature extraction unit except the last one corresponds one-to-one with a second up-sampling unit via a skip connection; the first feature extraction unit is skip-connected with the convolution unit; the second up-sampling unit comprises a first feature extraction unit and an up-sampling layer; the convolution unit includes a convolution layer and a first feature extraction unit.
In the depth semantic feature based image histology placenta implantation prediction method, predicting the implantation category corresponding to the placenta image data based on the target features specifically comprises:
Inputting the target characteristics into a trained classifier, and predicting implantation categories corresponding to the placenta image data through the classifier;
the training process of the classifier specifically comprises the following steps:
For each classification training sample in a preset classification training set, extracting the depth semantic features corresponding to the classification training sample based on the semantic feature extraction module, and extracting the image histology features of the classification training sample;
screening the depth semantic features and the image histology features to obtain target features;
and inputting the target features into a second preset network model, determining a predicted implantation category through the second preset network model, and training the second preset network model based on the predicted implantation category and the annotated implantation category corresponding to the classification training sample, to obtain the classifier.
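The classifier training step in the claim above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the architecture of the "second preset network model" is not specified, so the two-layer MLP, the hidden width, and the cross-entropy objective are assumptions.

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    """Hypothetical stand-in for the 'second preset network model':
    a small fully connected network over the target features."""
    def __init__(self, in_dim, hidden=64, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, feats, labels):
    """One training iteration: predict the implantation category from the
    target features and update on cross-entropy against the annotated
    implantation category."""
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(feats), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating `train_step` over the classification training set until convergence yields the trained classifier.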
A second aspect of the embodiments of the present application provides an image histology placenta implantation prediction apparatus based on depth semantic features, the apparatus comprising:
the feature extraction module is used for extracting depth semantic features and image histology features of the placenta image data;
the screening module is used for screening the depth semantic features and the image histology features to obtain target features;
and the classification module is used for predicting the implantation category corresponding to the placenta image data based on the target characteristics.
A third aspect of the embodiments of the present application provides a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps in the depth semantic feature based image histology placenta implantation prediction method as described in any one of the above.
A fourth aspect of an embodiment of the present application provides a terminal device, including: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
The communication bus realizes connection communication between the processor and the memory;
The processor, when executing the computer readable program, implements the steps in the depth semantic feature based image histology placenta implantation prediction method as described in any one of the above.
The beneficial effects are that: compared with the prior art, the application provides an image histology placenta implantation prediction method and device based on depth semantic features, wherein the method comprises the steps of obtaining placenta image data; extracting depth semantic features and image histology features of the placenta image data; screening the depth semantic features and the image histology features to obtain target features; and predicting an implantation category corresponding to the placenta image data based on the target feature. According to the application, the image histology characteristics and the depth semantic characteristics are combined to obtain the characteristic information with rich dimension layers, and then the placenta implantation prediction is performed based on the characteristic information, so that the accuracy of the placenta implantation prediction can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without creative effort for a person of ordinary skill in the art.
Fig. 1 is a flowchart of an image histology placenta implantation prediction method based on depth semantic features.
Fig. 2 is a schematic structural diagram of the deep semantic feature extraction module.
Fig. 3 is a schematic structural view of the first feature extraction unit.
Fig. 4 is a schematic diagram of a training process for classifying a network model.
Fig. 5 is a schematic structural diagram of a split network model.
Fig. 6 is a schematic structural diagram of an image histology placenta implantation prediction device based on depth semantic features.
Fig. 7 is a schematic structural diagram of a terminal device provided by the present application.
Detailed Description
The application provides an image histology placenta implantation prediction method and device based on deep semantic features, which are used for making the purposes, technical schemes and effects of the application clearer and more definite. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any element and all combination of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be understood that the sequence number and the size of each step in this embodiment do not mean the sequence of execution, and the execution sequence of each process is determined by the function and the internal logic of each process, and should not be construed as limiting the implementation process of the embodiment of the present application.
It has been found that placenta accreta spectrum (PAS) refers to a group of diseases in which the decidua between the placenta and uterus is underdeveloped and trophoblasts abnormally invade the myometrium. Pregnant women with PAS are at serious risk after delivery, such as retained placenta, life-threatening bleeding, and even death. According to recent reports, the incidence of placenta implantation is as high as 0.78%, and the incidence increases markedly with the number of operations, such as caesarean section, uterine curettage, myomectomy, and hysteroscopy, that a pregnant woman has undergone.
Placenta implantation can be classified into adhesion type, implantation type, and penetration type according to the depth of placenta implantation. Penetrating placenta implantation may cause major bleeding in the parturient and related complications such as multiple organ failure or functional failure, and in severe cases the death of the mother and neonate. Therefore, if patients suffering from PAS can be predicted timely and accurately, a suitable operation scheme can be formulated before surgery, the operation can be performed more safely, the delivery risk is reduced, more effective nursing is provided for the parturient, and the clinical outcome is improved.
Imaging examinations such as ultrasonic imaging and magnetic resonance imaging (MRI) are currently common methods for identifying PAS. Among them, ultrasonic imaging has the advantages of flexibility, low cost, and safety for mother and infant, so it is the preferred imaging method for diagnosing PAS. However, ultrasonic imaging depends heavily on the operator's skill, has poor reproducibility, and is easily disturbed by factors such as maternal obesity, intestinal gas artifacts, and the fetal skull, which reduces the PAS detection rate. MRI has the advantage of providing panoramic images across different acquisition planes, is not affected by the mother's size, intestinal gas, or the position of the placenta, and can well evaluate the location of the region involved by placental invasion and the involvement of adjacent organs, giving it unique advantages in identifying PAS.
However, when MRI is used for PAS prediction, the prediction is generally made subjectively by a doctor, so the result is influenced by various subjective factors (e.g., the clinician's experience and state), which leads to poor objectivity and low accuracy of PAS prediction.
In order to solve the above problems, in an embodiment of the present application, placenta image data is acquired; extracting depth semantic features and image histology features of the placenta image data; screening the depth semantic features and the image histology features to obtain target features; and predicting an implantation category corresponding to the placenta image data based on the target feature. According to the application, the image histology characteristics and the depth semantic characteristics are combined to obtain the characteristic information with rich dimension layers, and then the placenta implantation prediction is performed based on the characteristic information, so that the accuracy of the placenta implantation prediction can be improved.
The application will be further described by the description of embodiments with reference to the accompanying drawings.
The embodiment provides an image histology placenta implantation prediction method based on depth semantic features, as shown in fig. 1, which specifically includes:
S10, extracting depth semantic features and image histology features of the placenta image data.
Specifically, the placenta image data may be placenta image data with or without placenta implantation. The placenta image data may be a real-time image acquired by an image acquisition device, a non-real-time image read from a storage device, a non-real-time image transmitted by an external device, or the like. In one implementation, the placenta image data is placenta MRI (Magnetic Resonance Imaging) image data, i.e. the placenta image data is a magnetic resonance image acquired by a magnetic resonance imaging apparatus.
The deep semantic features are high-level features extracted through deep learning, and the image histology features are image features extracted from medical images to convert image data into clinically usable information. It can be understood that after the placenta image data is obtained, feature extraction is performed on it twice: the deep semantic features are extracted via deep learning, and the image histology features are extracted from the placenta region of the placenta image data.
The image histology features include one or more of the size, volume, shape, texture features, histogram distribution, and information entropy of the placenta region. The image histology features can be extracted by an image histology feature extraction module, which can be configured through parameters. In addition, when extracting the image histology features, the region of interest of the placenta image data can be determined in advance, where the region of interest can be delineated by a doctor or segmented by a trained segmentation network model.
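As a minimal illustration of the first-order image histology features listed above (volume, intensity statistics, histogram distribution, information entropy), the sketch below computes a few of them over a delineated placenta region. The function name, the fixed bin count, and the NumPy-only implementation are illustrative assumptions; the patent does not specify an extraction library, and real radiomics pipelines extract far richer shape and texture descriptors.

```python
import numpy as np

def simple_radiomics(image, mask, voxel_volume=1.0, bins=32):
    """Hypothetical sketch of first-order image histology (radiomics)
    features over a delineated ROI: volume, intensity statistics, and
    histogram entropy. Not the patent's actual feature set."""
    roi = image[mask > 0]                       # voxels inside the ROI
    hist, _ = np.histogram(roi, bins=bins)      # histogram distribution
    p = hist / hist.sum()
    p = p[p > 0]                                # drop empty bins for entropy
    return {
        "volume": float(mask.sum() * voxel_volume),
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "entropy": float(-(p * np.log2(p)).sum()),  # information entropy
    }
```

For example, on a fully masked 4×4×4 volume with uniformly spread intensities, the volume is 64 voxels and the histogram entropy equals log2(bins).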
The depth semantic features are used for reflecting detailed information carried by placenta image data, wherein the depth semantic features can be obtained through a trained feature extraction module. Correspondingly, the extracting the deep semantic features of the placenta image data specifically comprises:
inputting the placenta image data into a preset semantic feature extraction module, and determining the deep semantic features of the placenta image data through the feature extraction module.
Specifically, the feature extraction module is pre-trained and is used for extracting deep semantic features of placenta image data; that is, the input of the semantic feature extraction module is placenta image data and its output is the deep semantic features. The feature extraction module may comprise an encoder, or an encoder and an adaptive average pooling layer, where the encoder extracts the semantic features and the adaptive average pooling layer reduces the dimensionality of the semantic features extracted by the encoder, so as to reduce useless and redundant features in the deep semantic features, reduce the parameter calculation amount of the semantic feature extraction module, integrate global information, and improve the robustness of the classification network model.
In one implementation, the semantic feature extraction module comprises an encoder and an adaptive average pooling layer, wherein the encoder comprises a first feature extraction unit and a plurality of cascaded second feature extraction units; the first feature extraction unit is connected with the foremost second feature extraction unit, and the last second feature extraction unit is connected with the adaptive average pooling layer. The second feature extraction unit comprises a maximum pooling layer and a first feature extraction unit. The first feature extraction unit comprises two cascaded convolution blocks, where each convolution block comprises a convolution layer, a batch normalization layer, and an activation function layer cascaded in sequence; the batch normalization layer can adopt Batch Normalization, and the activation function layer can adopt ReLU. The feature extraction module in this implementation uses multiple convolution layers with small convolution kernels (3×3) instead of convolution layers with large kernels, which reduces network parameters while adding more nonlinear mappings, improving the fitting and expressive capacity of the network.
As illustrated in fig. 2 and 3, the encoder includes a first feature extraction unit and four cascaded second feature extraction units. With an input image scale of 128×128×96, the output of the first feature extraction unit has a scale of 128×128×96, the output of the foremost second feature extraction unit has a scale of 64×64×48, the output of the second feature extraction unit in the second position has a scale of 32×32×24, the output of the second feature extraction unit in the third position has a scale of 16×16×12, and the output of the second feature extraction unit in the fourth position has a scale of 8×8×6, giving an 8×8×6×512-dimensional feature vector. The adaptive average pooling layer then reduces the 8×8×6×512-dimensional feature vector to 1×512 dimensions: it takes the average of each channel's feature map as output, which reduces the parameter calculation amount, integrates global information, and improves robustness.
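The encoder-plus-pooling structure described above can be sketched in PyTorch as follows. This is a hedged reconstruction, not the patent's code: the per-stage channel widths (here 32 doubling to 512) and the use of 3D convolutions are assumptions consistent with the stated 8×8×6×512 encoder output and the 3×3 kernels; all class names are illustrative.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """First feature extraction unit: two cascaded convolution blocks,
    each = 3x3x3 conv -> batch norm -> ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class Down(nn.Module):
    """Second feature extraction unit: max pooling (halves the spatial
    scale) followed by a first feature extraction unit."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(nn.MaxPool3d(2), DoubleConv(in_ch, out_ch))

    def forward(self, x):
        return self.block(x)

class SemanticFeatureExtractor(nn.Module):
    """Encoder (one first unit + four cascaded second units) followed by
    an adaptive average pooling layer that reduces the final feature map
    to a per-channel mean, i.e. a 1x(16*base) deep-semantic vector
    (512 when base=32). Channel widths are assumptions."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8, base * 16]
        self.enc = nn.Sequential(
            DoubleConv(in_ch, chs[0]),
            Down(chs[0], chs[1]),
            Down(chs[1], chs[2]),
            Down(chs[2], chs[3]),
            Down(chs[3], chs[4]),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)  # mean of each channel's map

    def forward(self, x):
        f = self.enc(x)                 # e.g. 128x128x96 -> 8x8x6, 512 ch
        return self.pool(f).flatten(1)  # -> (batch, 512)
```

With `base=32` and a 128×128×96 input, the forward pass reproduces the scales listed above and returns a 512-dimensional deep semantic feature vector per image.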
In one implementation manner, the determining process of the semantic feature extraction module specifically includes:
training a first preset network model based on a preset segmentation training set to obtain a segmentation network model, wherein the segmentation network model comprises the encoder;
And extracting an encoder of the segmentation network model, and connecting the encoder with an adaptive average pooling layer to form the semantic feature extraction module.
Specifically, the segmentation training set comprises a plurality of segmentation training samples, each of which is placenta image data carrying a placenta label. A first preset network model is trained on the segmentation training set to obtain the segmentation network model; the first preset network model and the segmentation network model have the same model structure and differ only in model parameters, the first preset network model using initial parameters and the segmentation network model using trained parameters. Furthermore, in one implementation, the segmentation network model includes an encoder and a decoder, where the encoder has the same network structure as the encoder of the semantic feature extraction module described above. The decoder comprises a first up-sampling unit, a plurality of cascaded second up-sampling units, and a convolution unit; the first up-sampling unit is connected with the last second feature extraction unit; the first up-sampling unit is connected with the foremost second up-sampling unit, and the last second up-sampling unit is connected with the convolution unit; each second feature extraction unit except the last one corresponds one-to-one with a second up-sampling unit via a skip connection; the first feature extraction unit is skip-connected with the convolution unit; the second up-sampling unit comprises a first feature extraction unit and an up-sampling layer; the convolution unit includes a convolution layer and a first feature extraction unit.
The split network model in this embodiment adopts a U-Net split network based on an encoder-decoder structure, and the split network model uses a plurality of convolution layers with small convolution kernels (3×3) to replace the convolution layers with large convolution kernels, so that network parameters are reduced, and meanwhile, more nonlinear mapping is performed, and fitting and expression capabilities of the network are improved. The encoder consists of 10 convolution layers, 4 pooling layers, a plurality of BatchNorm layers and Relu activation layers, and the feature map size is gradually reduced through the encoder; the decoder restores the feature map output by the encoder to a size close to the original map through upsampling and skip connection (skip connection).
Further, each network layer in the encoder and the decoder is given a randomly set initial weight value drawn from a normal distribution, which helps speed up the convergence of the network; the convolution layers are directly randomly initialized, while the BN normalization layers set the weight to 1 and the bias to 0. In the error back-propagation training process, the DICE loss is adopted as the loss function, ensuring that the deep learning model extracts more image features.
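The initialization scheme and DICE loss just described can be sketched as follows. The standard deviation of the normal initialization (0.02) is an assumption, since the text only states that initial weights follow a normal distribution, and the soft-Dice formulation below is one common variant, not necessarily the patent's exact one.

```python
import torch
import torch.nn as nn

def init_weights(module):
    """Random normal initialization for conv layers; BN weight=1, bias=0,
    as described above (std value is an assumption)."""
    if isinstance(module, nn.Conv3d):
        nn.init.normal_(module.weight, mean=0.0, std=0.02)
        if module.bias is not None:
            nn.init.zeros_(module.bias)
    elif isinstance(module, nn.BatchNorm3d):
        nn.init.ones_(module.weight)
        nn.init.zeros_(module.bias)

def dice_loss(pred, target, eps=1e-6):
    """Soft DICE loss for segmentation training:
    1 - 2|P∩G| / (|P| + |G|), with logits squashed through a sigmoid."""
    pred = torch.sigmoid(pred)
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

In practice the scheme is applied with `model.apply(init_weights)` before training the segmentation network.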
Illustrating: as shown in fig. 3 and 5, the decoder includes three cascaded second up-sampling units, wherein the output of the first up-sampling unit has an image scale of 16×16×12; the output of the second up-sampling unit in the first position has an image scale of 32×32×24; the output of the second up-sampling unit in the second position has an image scale of 64×64×48; the output of the second up-sampling unit in the third position has an image scale of 128×128×96; and the output of the convolution unit has an image scale of 128×128×96. The second up-sampling unit in the first position is skip-connected to the second feature extraction unit in the third position; the second up-sampling unit in the second position is skip-connected to the second feature extraction unit in the second position; the second up-sampling unit in the third position is skip-connected to the second feature extraction unit in the first position; and the convolution unit is skip-connected to the first feature extraction unit.
S20, screening the depth semantic features and the image histology features to obtain target features.
Specifically, after the depth semantic features and the image histology features are obtained, since both carry redundant information, the depth semantic features and the image histology features are respectively screened and then spliced to generate the target features, wherein the target features comprise part of the depth semantic features and part of the image histology features.
Based on this, the screening of the depth semantic features and the image histology features to obtain the target features specifically includes:
Performing a homogeneity-of-variance check on the depth semantic features and the image histology features respectively, to obtain a check value for each of the depth semantic features and a check value for each of the image histology features;
Screening the depth semantic features based on the check values of the depth semantic features to obtain target depth semantic features, and screening the image histology features based on the check values of the image histology features to obtain target image histology features;
and splicing the target depth semantic features and the target image histology features to obtain target features.
Specifically, the target depth semantic features are contained in the depth semantic features, and the target image histology features are contained in the image histology features. It can be understood that the depth semantic features and the image histology features each comprise a plurality of features; the target depth semantic features comprise part of the depth semantic features, and the target image histology features comprise part of the image histology features. For example, the target depth semantic features may include 2 of the depth semantic features and the target image histology features may include 5 of the image histology features. In this way, the most significant features among the depth semantic features and the image histology features are retained while the number of features is reduced, thereby reducing the number of model parameters.
The homogeneity-of-variance check used here is an F-check: the ratio of the inter-group variance to the intra-group variance is obtained for each feature, the features are sorted by this ratio, and the target image histology features and the target depth semantic features are selected according to the sorting order. The screening process for the target depth semantic features is the same as that for the target image histology features, so the target depth semantic features are taken as an example for illustration.
The extracted depth semantic features and the implantation gold standard are respectively substituted into the formula for the standard deviation, so as to obtain the squared standard deviation S_x of the depth semantic features and S_label of the implantation gold standard, wherein the squared standard deviation is calculated as:

S² = (1/(n−1)) · Σᵢ₌₁ⁿ (xᵢ − x̄)²

where xᵢ is a sample, x̄ is the average of the samples, and n is the number of samples.
Substituting the squared standard deviation S_x and the gold-standard value S_label into the F-value formula yields the F value corresponding to the depth semantic feature, wherein the F value is calculated as:

F = S_x / S_label
The F value corresponding to each depth semantic feature is obtained through the above process; the smaller the F value, the more significant the difference between the depth semantic feature and the implantation gold standard. Based on this, the F values are sorted from small to large and a certain number (e.g., 150) of candidate depth semantic features are selected from front to back. Finally, the candidate depth semantic features are put into the Lasso algorithm for further screening to obtain the target depth semantic features (for example, 2 depth semantic features are obtained after screening 150 candidates).
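The variance-ratio ranking described above can be sketched as follows (a simplified illustration with synthetic data; the subsequent Lasso step, e.g. scikit-learn's `Lasso`, is omitted here):

```python
import numpy as np

def f_value(feature, gold_standard):
    """Ratio of the feature's variance to the gold standard's variance,
    following the F-value definition used in the text (S_x / S_label)."""
    return np.var(feature, ddof=1) / np.var(gold_standard, ddof=1)

def select_candidates(features, gold_standard, k):
    """Rank features by F value (ascending, smaller first per the text)
    and keep the indices of the first k as candidate features."""
    scores = [f_value(col, gold_standard) for col in features.T]
    order = np.argsort(scores)
    return order[:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))       # 20 samples, 6 hypothetical deep features
y = rng.integers(0, 2, size=20)    # implantation gold standard (0/1 labels)
print(select_candidates(X, y, 3))  # indices of the 3 lowest-F features
```

In the described pipeline the selected candidates (e.g. 150 of them) would then be passed to Lasso regression, which drives uninformative coefficients to zero and leaves the final target features.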
Further, after the target depth semantic features and the target image histology features are obtained, they may be spliced with the target depth semantic features first and the target image histology features second, or with the target image histology features first and the target depth semantic features second. In this embodiment, splicing is performed with the target depth semantic features first.
S30, predicting the implantation category corresponding to the placenta image data based on the target feature.
Specifically, the implantation categories are divided, according to the depth of placenta implantation, into the placenta-adhesion type, the placenta-implantation type, and the placenta-penetration type; therefore, the implantation category corresponding to the placenta image data may be one of these three types. In addition, placenta implantation may not have occurred at all, so the implantation category corresponding to the placenta image data may also be that no placenta implantation has occurred.
In one implementation manner, the implantation category may be determined by a classifier, and correspondingly, the predicting, based on the target feature, the implantation category corresponding to the placenta image data specifically includes:
Inputting the target features into a trained classifier, and predicting implantation categories corresponding to the placenta image data through the classifier.
Specifically, the classifier is trained, as shown in fig. 4, and the training process of the classifier specifically includes:
For each classification training sample in a preset classification training set, extracting depth semantic features corresponding to the classification training samples based on the semantic feature extraction module, and extracting image histology features of the classification training samples;
Screening the depth semantic features and the image histology features to obtain target features;
and inputting the target features into a second preset network model, determining a predicted implantation category through the second preset network model, and training the second preset network model based on the predicted implantation category and the labeled implantation category corresponding to the classification training sample, to obtain the classifier.
Specifically, the model structure of the second preset network model is the same as that of the classifier, and they differ only in model parameters: the second preset network model adopts initial model parameters, while the classifier adopts the trained model parameters. In this embodiment, the classifier may employ a support vector classifier.
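A minimal sketch of the classifier stage with a support vector classifier (using scikit-learn as an assumed implementation; the feature values and cluster geometry here are entirely synthetic stand-ins for the target features):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Hypothetical target features: two well-separated clusters standing in for
# "implantation" vs "no implantation" cases (7 features per case).
pos = rng.normal(loc=2.0, scale=0.2, size=(20, 7))
neg = rng.normal(loc=-2.0, scale=0.2, size=(20, 7))
X = np.vstack([pos, neg])
y = np.array([1] * 20 + [0] * 20)

clf = SVC(kernel="rbf")  # support vector classifier, as in this embodiment
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy on clearly separable data
```

In practice the model would be fitted on the classification training set and evaluated on the held-out classification test set rather than on its own training data.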
The preset classification training set comprises a plurality of classification training sample images, each of which is placenta image data; part of the training sample images are placenta image data in which placenta implantation has occurred, and the rest are placenta image data in which placenta implantation has not occurred. In this embodiment, the preset segmentation training set and the preset classification training set may be collected synchronously, and the collection process may be as follows: placenta MRI image data are first collected, the collected placenta MRI image data are then divided into a segmentation data set and a classification data set, and finally the segmentation data set is divided into a segmentation training set and a segmentation test set while the classification data set is divided into a classification training set and a classification test set.
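The collection pipeline just described (segmentation/classification split, then train/test split within each) can be sketched with plain Python; all identifiers and the 20% test fraction are illustrative assumptions, not values from the patent:

```python
import random

def split_dataset(case_ids, n_segmentation, test_fraction=0.2, seed=0):
    """Split case IDs into a segmentation set and a classification set,
    then split each into train/test subsets, mirroring the process above."""
    rng = random.Random(seed)
    ids = list(case_ids)
    rng.shuffle(ids)
    seg, cls = ids[:n_segmentation], ids[n_segmentation:]

    def train_test(items):
        n_test = max(1, int(len(items) * test_fraction))
        return items[n_test:], items[:n_test]

    seg_train, seg_test = train_test(seg)
    cls_train, cls_test = train_test(cls)
    return seg_train, seg_test, cls_train, cls_test

# 241 internal cases, 40 of which go to the segmentation data set.
parts = split_dataset(range(241), n_segmentation=40)
print([len(p) for p in parts])  # subset sizes sum to 241
```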
In one implementation, the determination of the segmentation data set and the classification data set may be:
First step, data collection
Collecting a first preset number (for example, 241 cases) of placenta MRI image data with intraoperative pathological examination as an internal modeling data set, wherein each placenta MRI image data item in the internal modeling data set carries an implantation category; the positive-to-negative sample ratio in the internal modeling data set is 169:72, a positive sample being placenta MRI image data in which placenta implantation occurred and a negative sample being placenta MRI image data in which it did not. At the same time, a second preset number (e.g., 122 cases) of placenta MRI image data is collected as an external independent test set, wherein each item likewise carries an implantation category and the positive-to-negative sample ratio in the external independent test set is 106:16.
The patients corresponding to the placenta MRI image data in the internal modeling data set and the external independent test set underwent clinical examination, prenatal MRI examination and delivery, had a history of cesarean section or uterine cavity surgery, and had complete clinical and imaging data; all had a gestational age of more than 18 weeks and a singleton pregnancy without gestational diabetes mellitus, gestational hypertension, blood-system abnormalities or similar conditions. Meanwhile, pregnant women with underlying diseases (such as diabetes or thalassemia), cases of fetal growth restriction or fetal disease with placenta-related imaging findings, pregnant women with MRI contraindications, and cases with imaging blurred by motion artifacts or in which the pregnant woman could not cooperate with the related examinations were excluded from both the internal modeling data set and the external independent test set.
Second step, sketch of interest
After the training sample set is acquired, the placenta image data are delineated and divided: the region of interest (Region of Interest, ROI) of the placenta MRI data is located and delineated on the patient's MRI images by a radiologist with MRI image diagnostic experience using the open-source software ITK-SNAP (version 3.4.0; https://www.itksnap.org). Then, a specified number of cases (e.g., 40) is randomly selected from the internal modeling data set as the preset segmentation data set, and the remaining placenta MRI image data in the internal modeling data set together with the placenta MRI image data in the external independent test set are used as the preset classification data set.
Third step, data preprocessing
1) Unification of image size
Because the size of each patient's MRI data along the z-axis is not uniform, the training effect of the segmentation model would otherwise be unsatisfactory; therefore, interpolation resampling is performed on all placenta image data and the image size is unified to 128×128×96.
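A minimal nearest-neighbour version of this size-unification step is sketched below (real pipelines would typically use proper interpolation, e.g. `scipy.ndimage.zoom`; the input shape here is a hypothetical example):

```python
import numpy as np

def resize_nearest(volume, out_shape):
    """Resample a 3-D volume to out_shape by nearest-neighbour index mapping."""
    idx = [
        np.arange(n_out) * n_in // n_out
        for n_in, n_out in zip(volume.shape, out_shape)
    ]
    return volume[np.ix_(idx[0], idx[1], idx[2])]

vol = np.random.rand(144, 144, 80)   # a volume with a non-uniform z extent
unified = resize_nearest(vol, (128, 128, 96))
print(unified.shape)                 # (128, 128, 96)
```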
2) Resampling
In order to extract the image histology features in the ROI region of the classification data set, a resampling operation is performed on the MRI data. Statistics over the classification training samples contained in the internal modeling data set show that their average resolution is 0.87×0.87×5.8 mm³. To avoid resampling the data excessively, the data in the classification training set are uniformly resampled to 0.87×0.87×5.8 mm³. The same resampling operation is also applied to the ROI data delineated by the physician.
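The relation between voxel spacing and grid size in this resampling step can be sketched as follows (a generic calculation; the input scan dimensions are hypothetical, only the target spacing comes from the text):

```python
def resampled_shape(shape, spacing, target_spacing):
    """Grid size after resampling a volume from `spacing` to `target_spacing`
    (mm per voxel along each axis), keeping the physical extent constant."""
    return tuple(
        round(n * s / t) for n, s, t in zip(shape, spacing, target_spacing)
    )

# Hypothetical scan: 256x256x30 voxels at 0.7x0.7x6.0 mm, resampled to the
# dataset-average spacing of 0.87x0.87x5.8 mm.
print(resampled_shape((256, 256, 30), (0.7, 0.7, 6.0), (0.87, 0.87, 5.8)))
```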
In summary, the present embodiment provides an image histology placenta implantation prediction method based on depth semantic features, which includes obtaining placenta image data; extracting depth semantic features and image histology features of the placenta image data; screening the depth semantic features and the image histology features to obtain target features; and predicting an implantation category corresponding to the placenta image data based on the target features. According to the application, the image histology features and the depth semantic features are combined to obtain feature information with rich dimensional layers, and placenta implantation prediction is then performed based on this feature information, so that the accuracy of placenta implantation prediction can be improved.
Based on the method for predicting the implantation of placenta in image group based on depth semantic features, the embodiment provides a prediction device for implantation of placenta in image group based on depth semantic features, as shown in fig. 6, the device comprises:
The feature extraction module 100 is used for extracting depth semantic features and image histology features of the placenta image data;
The screening module 200 is configured to screen the depth semantic feature and the image histology feature to obtain a target feature;
The classification module 300 is configured to predict an implantation category corresponding to the placenta image data based on the target feature.
Based on the depth semantic feature-based image histology placenta implantation prediction method described above, the present embodiment provides a computer readable storage medium storing one or more programs executable by one or more processors to implement the steps in the depth semantic feature-based image histology placenta implantation prediction method described in the above embodiment.
Based on the depth semantic feature-based image histology placenta implantation prediction method, the application also provides a terminal device, as shown in fig. 7, which comprises at least one processor (processor) 20; a display screen 21; and a memory (memory) 22, which may also include a communication interface (Communications Interface) 23 and a bus 24. Wherein the processor 20, the display 21, the memory 22 and the communication interface 23 may communicate with each other via a bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may invoke logic instructions in the memory 22 to perform the methods of the embodiments described above.
Further, the logic instructions in the memory 22 described above may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand alone product.
The memory 22, as a computer readable storage medium, may be configured to store a software program, a computer executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 performs functional applications and data processing, i.e. implements the methods of the embodiments described above, by running software programs, instructions or modules stored in the memory 22.
The memory 22 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the terminal device, etc. In addition, the memory 22 may include high-speed random access memory, and may also include nonvolatile memory. For example, a plurality of media capable of storing program codes such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or a transitory storage medium may be used.
In addition, the specific processes by which the storage medium and the plurality of instruction processors in the terminal device load and execute instructions are described in detail in the method above and are not repeated here.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (6)

1. An image histology placenta implantation prediction method based on depth semantic features, characterized in that the method comprises: extracting depth semantic features and image histology features of placenta image data; screening the depth semantic features and the image histology features to obtain target features; and predicting an implantation category corresponding to the placenta image data based on the target features; wherein the screening the depth semantic features and the image histology features to obtain target features specifically comprises: performing a homogeneity-of-variance check on the depth semantic features and the image histology features respectively, to obtain a check value of each depth semantic feature and a check value of each image histology feature; screening the depth semantic features based on their check values to obtain target depth semantic features, and screening the image histology features based on their check values to obtain target image histology features; and splicing the target depth semantic features and the target image histology features to obtain the target features; wherein the extracting the depth semantic features of the placenta image data specifically comprises: inputting the placenta image data into a preset semantic feature extraction module, and determining the depth semantic features of the placenta image data through the feature extraction module; wherein the semantic feature extraction module comprises an encoder and an adaptive average pooling layer; the encoder comprises a first feature extraction unit and several cascaded second feature extraction units; the first feature extraction unit is connected with the foremost second feature extraction unit, and the last second feature extraction unit is connected with the adaptive average pooling layer; each second feature extraction unit comprises a maximum pooling layer and a first feature extraction unit; the first feature extraction unit comprises two cascaded convolution blocks, each convolution block comprising a convolution layer, a batch normalization layer and an activation function layer cascaded in sequence; the determination process of the semantic feature extraction module specifically comprises: training a first preset network model based on a preset segmentation training set to obtain a segmentation network model, wherein the segmentation network model comprises the encoder; and extracting the encoder of the segmentation network model and connecting the encoder with the adaptive average pooling layer to form the semantic feature extraction module; the segmentation network model further comprises a decoder; the decoder comprises a first up-sampling unit, several cascaded second up-sampling units and a convolution unit; the first up-sampling unit is connected with the last second feature extraction unit; the first up-sampling unit is connected with the foremost second up-sampling unit, and the last second up-sampling unit is connected with the convolution unit; each second feature extraction unit except the last one corresponds one-to-one with, and is skip-connected to, a second up-sampling unit; the first feature extraction unit is skip-connected with the convolution unit; wherein each second up-sampling unit comprises a first feature extraction unit and an up-sampling layer, and the convolution unit comprises a convolution layer and a first feature extraction unit.
2. The image histology placenta implantation prediction method based on depth semantic features according to claim 1, characterized in that the placenta image data are placenta MRI data.
3. The image histology placenta implantation prediction method based on depth semantic features according to claim 1, characterized in that the predicting an implantation category corresponding to the placenta image data based on the target features specifically comprises: inputting the target features into a trained classifier, and predicting the implantation category corresponding to the placenta image data through the classifier; wherein the training process of the classifier specifically comprises: for each classification training sample in a preset classification training set, extracting the depth semantic features corresponding to the classification training sample based on the semantic feature extraction module, and extracting the image histology features of the classification training sample; screening the depth semantic features and the image histology features to obtain target features; and inputting the target features into a second preset network model, determining a predicted implantation category through the second preset network model, and training the second preset network model based on the predicted implantation category and the labeled implantation category corresponding to the classification training sample, to obtain the classifier.
4. An image histology placenta implantation prediction device based on depth semantic features, characterized in that the device comprises: a feature extraction module for extracting depth semantic features and image histology features of placenta image data; a screening module for screening the depth semantic features and the image histology features to obtain target features; and a classification module for predicting an implantation category corresponding to the placenta image data based on the target features; wherein the screening the depth semantic features and the image histology features to obtain target features specifically comprises: performing a homogeneity-of-variance check on the depth semantic features and the image histology features respectively, to obtain a check value of each depth semantic feature and a check value of each image histology feature; screening the depth semantic features based on their check values to obtain target depth semantic features, and screening the image histology features based on their check values to obtain target image histology features; and splicing the target depth semantic features and the target image histology features to obtain the target features; wherein the extracting the depth semantic features of the placenta image data specifically comprises: inputting the placenta image data into a preset semantic feature extraction module, and determining the depth semantic features of the placenta image data through the feature extraction module; wherein the semantic feature extraction module comprises an encoder and an adaptive average pooling layer; the encoder comprises a first feature extraction unit and several cascaded second feature extraction units; the first feature extraction unit is connected with the foremost second feature extraction unit, and the last second feature extraction unit is connected with the adaptive average pooling layer; each second feature extraction unit comprises a maximum pooling layer and a first feature extraction unit; the first feature extraction unit comprises two cascaded convolution blocks, each convolution block comprising a convolution layer, a batch normalization layer and an activation function layer cascaded in sequence; the determination process of the semantic feature extraction module specifically comprises: training a first preset network model based on a preset segmentation training set to obtain a segmentation network model, wherein the segmentation network model comprises the encoder; and extracting the encoder of the segmentation network model and connecting the encoder with the adaptive average pooling layer to form the semantic feature extraction module; the segmentation network model further comprises a decoder; the decoder comprises a first up-sampling unit, several cascaded second up-sampling units and a convolution unit; the first up-sampling unit is connected with the last second feature extraction unit; the first up-sampling unit is connected with the foremost second up-sampling unit, and the last second up-sampling unit is connected with the convolution unit; each second feature extraction unit except the last one corresponds one-to-one with, and is skip-connected to, a second up-sampling unit; the first feature extraction unit is skip-connected with the convolution unit; wherein each second up-sampling unit comprises a first feature extraction unit and an up-sampling layer, and the convolution unit comprises a convolution layer and a first feature extraction unit.
5. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs which are executable by one or more processors to implement the steps in the image histology placenta implantation prediction method based on depth semantic features according to any one of claims 1-3.
6. A terminal device, characterized by comprising: a processor, a memory and a communication bus; the memory stores a computer-readable program executable by the processor; the communication bus realizes connection and communication between the processor and the memory; and the processor, when executing the computer-readable program, implements the steps in the image histology placenta implantation prediction method based on depth semantic features according to any one of claims 1-3.
CN202310393731.6A 2023-04-12 2023-04-12 Image histology placenta implantation prediction method and device based on depth semantic features Active CN116563224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310393731.6A CN116563224B (en) 2023-04-12 2023-04-12 Image histology placenta implantation prediction method and device based on depth semantic features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310393731.6A CN116563224B (en) 2023-04-12 2023-04-12 Image histology placenta implantation prediction method and device based on depth semantic features

Publications (2)

Publication Number Publication Date
CN116563224A CN116563224A (en) 2023-08-08
CN116563224B true CN116563224B (en) 2024-08-06

Family

ID=87490723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310393731.6A Active CN116563224B (en) 2023-04-12 2023-04-12 Image histology placenta implantation prediction method and device based on depth semantic features

Country Status (1)

Country Link
CN (1) CN116563224B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117153343B (en) * 2023-08-16 2024-04-05 丽水瑞联医疗科技有限公司 Placenta multiscale analysis system

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104392145A (en) * 2014-12-10 2015-03-04 福州大学 Placental implantation prediction method based on hidden Markov model
CN109903271A (en) * 2019-01-29 2019-06-18 福州大学 Placenta implantation B ultrasonic image feature extraction and verification method

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN108765368A (en) * 2018-04-20 2018-11-06 平安科技(深圳)有限公司 MRI lesion locations detection method, device, computer equipment and storage medium
EP3838162A1 (en) * 2019-12-16 2021-06-23 Koninklijke Philips N.V. Systems and methods for assessing a placenta
CN113160256B (en) * 2021-03-09 2023-08-18 宁波大学 A multi-task generative adversarial model based method for MR image placenta segmentation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392145A (en) * 2014-12-10 2015-03-04 福州大学 Placental implantation prediction method based on hidden Markov model
CN109903271A (en) * 2019-01-29 2019-06-18 福州大学 Placenta implantation B ultrasonic image feature extraction and verification method

Also Published As

Publication number Publication date
CN116563224A (en) 2023-08-08

Similar Documents

Publication Publication Date Title
Sengupta et al. An Empirical Analysis on Detection and Recognition of Intra-Cranial Hemorrhage (ICH) using 3D Computed Tomography (CT) images
CN111598867B (en) Method, apparatus, and computer-readable storage medium for detecting specific facial syndrome
CN111415361B (en) Fetal brain age estimation and abnormal detection method and device based on deep learning
US20240395408A1 (en) Pancreatic postoperative diabetes prediction system based on supervised deep subspace learning
WO2023020366A1 (en) Medical image information computing method and apparatus, edge computing device, and storage medium
CN113077900B (en) Diabetes early risk assessment method, device, computer equipment and medium
CN112582076B (en) A method, device, system, and storage medium for placental pathology inspection evaluation
CN110969616B (en) Method and device for evaluating oocyte quality
CN113705595A (en) Method, device and storage medium for predicting degree of abnormal cell metastasis
CN110555846A (en) full-automatic bone age assessment method based on convolutional neural network
CN116563224B (en) Image histology placenta implantation prediction method and device based on depth semantic features
CN116524248A (en) Medical data processing device, method and classification model training device
Lin et al. A deep learning model for screening computed tomography imaging for thyroid eye disease and compressive optic neuropathy
EP4294267A1 (en) System and method for evaluating or predicting a condition of a fetus
CN113889229A (en) Construction method of medical image diagnosis standard based on human-computer combination
CN114764855A (en) Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning
Hu et al. Automatic placenta abnormality detection using convolutional neural networks on ultrasound texture
CN117132573A (en) Method for predicting pulmonary nodule multiplication time based on single chest CT examination
Keerthi et al. Intelligent diagnosis of fetal organs abnormal growth in ultrasound images using an ensemble CNN-TLFEM model
CN114067154B (en) Crohn's disease fibrosis classification method based on multi-sequence MRI and related equipment
Parvathavarthini et al. Performance analysis of squeezenet and densenet on fetal brain mri dataset
Devisri et al. Fetal growth analysis from ultrasound videos based on different biometrics using optimal segmentation and hybrid classifier
Ahmed et al. Intracranial Hemorrhage Detection using CNN-LSTM Fusion Model
CN113962983A (en) Cerebrovascular template generation method based on 2D-CNN and electronic equipment
CN116649923A (en) Method and related device for predicting blood loss in cesarean section based on radiomics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant