Disclosure of Invention
In view of the defects of the prior art, the technical problem to be solved by the present application is to provide an image histology placenta implantation prediction method and device based on deep semantic features.
In order to solve the above technical problems, a first aspect of the embodiments of the present application provides an image histology placenta implantation prediction method based on depth semantic features, the method comprising:
extracting deep semantic features and image histology features of the placenta image data;
screening the deep semantic features and the image histology features to obtain target features;
and predicting an implantation category corresponding to the placenta image data based on the target features.
The image histology placenta implantation prediction method based on deep semantic features, wherein the placenta image data is placenta MRI data.
The image histology placenta implantation prediction method based on deep semantic features, wherein the screening the deep semantic features and the image histology features to obtain target features specifically comprises the following steps:
performing a variance homogeneity test on the deep semantic features and the image histology features respectively, to obtain a test value for each of the deep semantic features and a test value for each of the image histology features;
screening the deep semantic features based on the test values of the deep semantic features to obtain target deep semantic features, and screening the image histology features based on the test values of the image histology features to obtain target image histology features;
and splicing the target depth semantic features and the target image histology features to obtain target features.
The image histology placenta implantation prediction method based on deep semantic features, wherein the extracting the deep semantic features of the placenta image data specifically comprises the following steps:
inputting the placenta image data into a preset semantic feature extraction module, and determining the deep semantic features of the placenta image data through the semantic feature extraction module;
The semantic feature extraction module comprises an encoder and an adaptive average pooling layer, wherein the encoder comprises a first feature extraction unit and a plurality of cascaded second feature extraction units, the first feature extraction unit is connected with the second feature extraction unit positioned at the forefront, and the second feature extraction unit positioned at the last is connected with the adaptive average pooling layer; the second feature extraction unit comprises a maximum pooling layer and a first feature extraction unit; the first feature extraction unit comprises two cascaded convolution blocks, wherein each convolution block comprises a convolution layer, a batch normalization layer and an activation function layer which are cascaded in sequence.
The image histology placenta implantation prediction method based on deep semantic features, wherein the determination process of the semantic feature extraction module specifically comprises the following steps:
training a first preset network model based on a preset segmentation training set to obtain a segmentation network model, wherein the segmentation network model comprises the encoder;
And extracting an encoder of the segmentation network model, and connecting the encoder with an adaptive average pooling layer to form the semantic feature extraction module.
The depth semantic feature-based image histology placenta implantation prediction method comprises the steps that the segmentation network model further comprises a decoder; the decoder comprises a first up-sampling unit, a plurality of cascaded second up-sampling units and a convolution unit; the first up-sampling unit is connected with the second feature extraction unit positioned at the last; the first up-sampling unit is connected with a second up-sampling unit positioned at the forefront, and a second up-sampling unit positioned at the last is connected with the convolution unit; each second feature extraction unit except the last second feature extraction unit in the plurality of second feature extraction units corresponds to each second up-sampling unit one by one and is connected in a jumping manner; the first feature extraction unit is connected with the convolution unit in a jumping manner; the second upsampling unit comprises a first feature extraction unit and an upsampling layer; the convolution unit includes a convolution layer and a first feature extraction unit.
The image histology placenta implantation prediction method based on deep semantic features, wherein the predicting the implantation category corresponding to the placenta image data based on the target features specifically comprises:
Inputting the target characteristics into a trained classifier, and predicting implantation categories corresponding to the placenta image data through the classifier;
the training process of the classifier specifically comprises the following steps:
For each classification training sample in a preset classification training set, extracting depth semantic features corresponding to the classification training samples based on the semantic feature extraction module, and extracting image histology features of the classification training samples;
Screening the depth semantic features and the image histology features to obtain target features;
and inputting the target features into a second preset network model, determining a predicted implantation category through the second preset network model, and training the second preset network model based on the predicted implantation category and a labeled implantation category corresponding to the classification training sample, to obtain the classifier.
A second aspect of the embodiments of the present application provides an image histology placenta implantation prediction apparatus based on depth semantic features, the apparatus comprising:
the feature extraction module is used for extracting depth semantic features and image histology features of the placenta image data;
the screening module is used for screening the depth semantic features and the image histology features to obtain target features;
and the classification module is used for predicting the implantation category corresponding to the placenta image data based on the target characteristics.
A third aspect of the embodiments of the present application provides a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps in the depth semantic feature based image histology placenta implantation prediction method as described in any one of the above.
A fourth aspect of an embodiment of the present application provides a terminal device, including: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
The communication bus realizes connection and communication between the processor and the memory;
The processor, when executing the computer readable program, implements the steps in the depth semantic feature based image histology placenta implantation prediction method as described in any one of the above.
The beneficial effects are that: compared with the prior art, the application provides an image histology placenta implantation prediction method and device based on depth semantic features, wherein the method comprises the steps of obtaining placenta image data; extracting depth semantic features and image histology features of the placenta image data; screening the depth semantic features and the image histology features to obtain target features; and predicting an implantation category corresponding to the placenta image data based on the target feature. According to the application, the image histology characteristics and the depth semantic characteristics are combined to obtain the characteristic information with rich dimension layers, and then the placenta implantation prediction is performed based on the characteristic information, so that the accuracy of the placenta implantation prediction can be improved.
Detailed Description
The application provides an image histology placenta implantation prediction method and device based on deep semantic features. In order to make the purposes, technical schemes, and effects of the application clearer and more definite, the application is described in further detail below. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be understood that the sequence number and the size of each step in this embodiment do not mean the sequence of execution, and the execution sequence of each process is determined by the function and the internal logic of each process, and should not be construed as limiting the implementation process of the embodiment of the present application.
It has been found that placenta accreta spectrum (Placenta Accreta Spectrum, PAS) disorders refer to a group of diseases in which the decidua between the placenta and the uterus is underdeveloped and trophoblasts abnormally invade the myometrium. Pregnant women with PAS face serious postpartum risks, such as retained placenta, life-threatening bleeding, and even death. According to recent statistics, the incidence of placenta implantation is as high as 0.78%, and it increases significantly with the number of operations such as caesarean section, uterine curettage, hysteromyoma removal, and hysteroscopy that a pregnant woman has undergone.
Placenta implantation can be classified into placenta adhesion type, placenta implantation type, and placenta penetration type according to the depth of placenta implantation. The penetration type may cause major bleeding in the puerpera and related complications such as multiple organ failure or functional failure, and in severe cases the death of the puerpera and the neonate. Therefore, if patients suffering from PAS can be predicted timely and accurately, a proper operation scheme can be formulated before surgery, the operation can be performed more safely, the childbirth risk is reduced, more effective nursing can be provided for the puerpera, and the clinical outcome is improved.
Imaging examinations such as ultrasonic imaging and magnetic resonance imaging (Magnetic Resonance Imaging, MRI) are currently the commonly used methods for identifying PAS. Ultrasonic imaging has the advantages of flexibility, low cost, and harmlessness to mother and infant, so it is the preferred imaging method for diagnosing PAS. However, ultrasonic imaging depends heavily on the operator's technical ability, has poor repeatability, and is easily interfered with by factors such as maternal obesity, maternal intestinal gas artifacts, and the fetal skull, which reduces the PAS detection rate. MRI has the advantage of providing panoramic images across different acquisition planes, is not affected by maternal size, intestinal gas, or placental position, can well evaluate the location of the PAS-involved region and the damage to adjacent organs, and therefore has unique advantages in identifying PAS.
However, when MRI is used for PAS prediction, the prediction is generally made subjectively by a doctor, so the result is subject to the influence of various subjective factors (e.g., the clinician's experience, state, etc.), leading to poor objectivity and low accuracy of PAS prediction.
In order to solve the above problems, in an embodiment of the present application, placenta image data is acquired; extracting depth semantic features and image histology features of the placenta image data; screening the depth semantic features and the image histology features to obtain target features; and predicting an implantation category corresponding to the placenta image data based on the target feature. According to the application, the image histology characteristics and the depth semantic characteristics are combined to obtain the characteristic information with rich dimension layers, and then the placenta implantation prediction is performed based on the characteristic information, so that the accuracy of the placenta implantation prediction can be improved.
The application will be further described by the description of embodiments with reference to the accompanying drawings.
The embodiment provides an image histology placenta implantation prediction method based on depth semantic features, as shown in fig. 1, which specifically includes:
S10, extracting depth semantic features and image histology features of the placenta image data.
Specifically, the placenta image data may be placenta image data with or without placenta implantation. The placenta image data may be a real-time image acquired by an image acquisition device, a non-real-time image read from a storage device, a non-real-time image transmitted by an external device, or the like. In one implementation, the placenta image data is placenta MRI (Magnetic Resonance Imaging) image data, i.e., the placenta image data is a magnetic resonance image acquired by a magnetic resonance imaging apparatus.
The deep semantic features are high-level features extracted in a deep learning manner, and the image histology features are image features extracted from medical images that convert image data into clinically meaningful information. It can be understood that after the placenta image data is obtained, feature extraction is performed on it twice: the deep semantic features are extracted in a deep learning manner, and the image histology features are extracted from the placenta region of the placenta image data.
The image histology features include one or more of the size, volume, shape, texture features, histogram data distribution, and information entropy of the placenta region. The image histology features can be extracted by an image histology feature extraction module, which can be determined by parameter configuration. In addition, when the image histology features are extracted, the region of interest of the placenta image data can be determined in advance, where the region of interest can be delineated by a doctor or segmented by a trained segmentation network model.
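To make the first-order image histology features above concrete, the following sketch computes the volume (voxel count), mean intensity, and histogram information entropy of a masked placenta region. This is a simplified numpy illustration with hypothetical helper names, not a full radiomics pipeline:

```python
import numpy as np

def first_order_features(image, mask, n_bins=32):
    """Compute a few illustrative first-order features of a masked region:
    volume (voxel count), mean intensity, and histogram information entropy."""
    voxels = image[mask > 0]
    volume = int(voxels.size)            # region size in voxels
    mean_intensity = float(voxels.mean())
    # histogram-based information entropy of the region intensities
    hist, _ = np.histogram(voxels, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    return {"volume": volume, "mean": mean_intensity, "entropy": entropy}

# toy 3-D "MRI" volume with a cubic region of interest as the placenta mask
rng = np.random.default_rng(0)
img = rng.random((16, 16, 16))
msk = np.zeros_like(img)
msk[4:12, 4:12, 4:12] = 1
feats = first_order_features(img, msk)
```

Texture and shape features would require additional machinery (e.g., gray-level co-occurrence matrices); only histogram-level features are sketched here.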
The depth semantic features are used for reflecting detailed information carried by placenta image data, wherein the depth semantic features can be obtained through a trained feature extraction module. Correspondingly, the extracting the deep semantic features of the placenta image data specifically comprises:
inputting the placenta image data into a preset semantic feature extraction module, and determining the deep semantic features of the placenta image data through the feature extraction module.
Specifically, the feature extraction module is pre-trained and is used for extracting the deep semantic features of the placenta image data; that is, the input item of the semantic feature extraction module is the placenta image data and the output item is the deep semantic features. The feature extraction module can comprise an encoder, or an encoder and an adaptive average pooling layer, where the encoder is used for extracting semantic features and the adaptive average pooling layer is used for reducing the dimensions of the semantic features extracted by the encoder, so as to reduce the useless and redundant features among the deep semantic features, reduce the parameter calculation amount of the semantic feature extraction module, integrate global information, and improve the robustness of the classification network model.
In one implementation manner, the semantic feature extraction module comprises an encoder and an adaptive average pooling layer, wherein the encoder comprises a first feature extraction unit and a plurality of cascaded second feature extraction units; the first feature extraction unit is connected with the second feature extraction unit positioned at the forefront, and the second feature extraction unit positioned at the last is connected with the adaptive average pooling layer; the second feature extraction unit comprises a maximum pooling layer and a first feature extraction unit; the first feature extraction unit comprises two cascaded convolution blocks, each of which comprises a convolution layer, a batch normalization layer and an activation function layer cascaded in sequence. The batch normalization layer can adopt Batch Normalization, and the activation function layer can adopt ReLU. The feature extraction module in this implementation uses multiple convolution layers with small convolution kernels (3×3) instead of convolution layers with large kernels, which reduces the network parameters while providing more nonlinear mappings and improving the fitting and expression capabilities of the network.
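The parameter saving from stacking small kernels can be checked with a quick calculation. Assuming bias-free convolutions with 64 input and output channels (illustrative numbers, not taken from the document), two stacked 3×3 layers have the same receptive field as one 5×5 layer but need 18·c² weights instead of 25·c²:

```python
def conv_params(k, c_in, c_out, bias=True):
    """Parameter count of a single k×k 2-D convolution layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

c = 64
two_3x3 = conv_params(3, c, c, bias=False) * 2   # two stacked 3×3 convs
one_5x5 = conv_params(5, c, c, bias=False)       # one 5×5 conv, same receptive field
assert two_3x3 < one_5x5                          # 18·c² < 25·c²
```

The stacked design also inserts an extra activation between the two 3×3 layers, which is the source of the additional nonlinear mapping mentioned above.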
As illustrated in fig. 2 and 3, the encoder includes a first feature extraction unit and four cascaded second feature extraction units, where the image scale of the placenta image data is 128×128×96, the image scale of the output item of the first feature extraction unit is 128×128×96, the image scale of the output item of the second feature extraction unit at the first position is 64×64×48, the image scale of the output item of the second feature extraction unit at the second position is 32×32×24, the image scale of the output item of the second feature extraction unit at the third position is 16×16×12, and the image scale of the output item of the second feature extraction unit at the fourth position is 8×8×6, yielding an 8×8×6×512-dimensional feature vector. The adaptive average pooling layer then reduces the 8×8×6×512-dimensional feature vector to 1×512 dimensions; that is, it takes the average value of the feature map of each channel as output, which reduces the parameter calculation amount, integrates global information, and improves robustness.
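The channel-wise averaging performed by the adaptive average pooling layer can be sketched with numpy. The axis layout below (three spatial dimensions first, 512 channels last) is an assumption for illustration only:

```python
import numpy as np

# feature map from the last encoder stage: 8×8×6 spatial positions,
# 512 channels (channels-last layout assumed for this sketch)
feat = np.random.default_rng(1).random((8, 8, 6, 512))

# adaptive average pooling to output size 1: mean over all spatial
# positions, leaving one value per channel
pooled = feat.mean(axis=(0, 1, 2))
assert pooled.shape == (512,)
```

Each of the 512 output values summarizes an entire feature map, which is why the pooled vector integrates global information at negligible parameter cost.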
In one implementation manner, the determining process of the semantic feature extraction module specifically includes:
training a first preset network model based on a preset segmentation training set to obtain a segmentation network model, wherein the segmentation network model comprises the encoder;
And extracting an encoder of the segmentation network model, and connecting the encoder with an adaptive average pooling layer to form the semantic feature extraction module.
Specifically, the segmentation training set comprises a plurality of segmentation training samples, each of which is placenta image data carrying a placenta label. A first preset network model is trained through the segmentation training set to obtain the segmentation network model; the model structure of the first preset network model is identical to that of the segmentation network model, and the two differ in model parameters: the first preset network model adopts initial model parameters, while the segmentation network model adopts the trained network model parameters. Furthermore, in one implementation, the segmentation network model includes an encoder and a decoder, wherein the network structure of the encoder is the same as that of the encoder in the semantic feature extraction module; the decoder comprises a first up-sampling unit, a plurality of cascaded second up-sampling units, and a convolution unit; the first up-sampling unit is connected with the second feature extraction unit positioned at the last; the first up-sampling unit is connected with the second up-sampling unit positioned at the forefront, and the second up-sampling unit positioned at the last is connected with the convolution unit; each second feature extraction unit except the last one corresponds one-to-one with, and is skip-connected to, a second up-sampling unit; the first feature extraction unit is skip-connected with the convolution unit; each second up-sampling unit comprises a first feature extraction unit and an up-sampling layer; and the convolution unit includes a convolution layer and a first feature extraction unit.
The segmentation network model in this embodiment adopts a U-Net segmentation network based on an encoder-decoder structure, and uses multiple convolution layers with small convolution kernels (3×3) instead of convolution layers with large kernels, which reduces the network parameters while providing more nonlinear mappings and improving the fitting and expression capabilities of the network. The encoder consists of 10 convolution layers, 4 pooling layers, and several BatchNorm and ReLU activation layers, through which the feature map size is gradually reduced; the decoder restores the feature map output by the encoder to a size close to that of the original image through up-sampling and skip connections.
Further, each network layer in the encoder and the decoder randomly sets an initial weight value satisfying a normal distribution, which helps accelerate the convergence of the network; the convolution layers are directly randomly initialized, and the BN normalization layers set the weight to 1 and the bias to 0. In the error back-propagation training process, the DICE loss is adopted as the loss function to ensure that the deep learning model extracts more image features.
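A minimal soft-DICE loss consistent with the description above might look as follows; this is a numpy sketch with a small smoothing term `eps` (an assumed detail, not taken from the document), not the exact implementation used in training:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft DICE loss for a binary segmentation map.
    pred: predicted foreground probabilities in [0, 1]; target: 0/1 mask."""
    pred = pred.ravel()
    target = target.ravel()
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice

perfect = np.array([[1.0, 0.0], [0.0, 1.0]])
assert dice_loss(perfect, perfect) < 1e-5   # perfect overlap gives loss near 0
```

The loss approaches 0 for perfect overlap and 1 for disjoint prediction and mask, which makes it well suited to the highly imbalanced foreground/background ratio of placenta segmentation.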
Illustrating: as shown in fig. 3 and 5, the decoder includes three cascaded second upsampling units, where the output term of the first upsampling unit has an image scale of 16×16×12; the image scale of the output item of the second up-sampling unit positioned at the first bit is 32 x 24; the image scale of the output item of the second up-sampling unit positioned at the second bit is 64×64×48; the image scale of the output item of the second up-sampling unit positioned at the third bit is 128 x 96, and the image scale of the output item of the convolution unit is 128 x 96, wherein the second up-sampling unit positioned at the first bit is in jump connection with the second feature extraction unit positioned at the third bit, and the second up-sampling unit positioned at the second bit is in jump connection with the second feature extraction unit positioned at the second bit; the second up-sampling unit positioned at the third bit is in jump connection with the second feature extraction unit positioned at the first bit; the convolution unit is in jump connection with the first feature extraction unit.
And S20, screening the depth semantic features and the image histology features to obtain target features.
Specifically, after the deep semantic features and the image histology features are obtained, since both carry redundant information, the deep semantic features and the image histology features can be respectively screened and then spliced to generate the target features, where the target features comprise part of the deep semantic features and part of the image histology features.
Based on this, the filtering the depth semantic feature and the image histology feature to obtain the target feature specifically includes:
performing a variance homogeneity test on the deep semantic features and the image histology features respectively, to obtain a test value for each of the deep semantic features and a test value for each of the image histology features;
screening the deep semantic features based on the test values of the deep semantic features to obtain target deep semantic features, and screening the image histology features based on the test values of the image histology features to obtain target image histology features;
and splicing the target depth semantic features and the target image histology features to obtain target features.
Specifically, the target deep semantic features are included in the deep semantic features, and the target image histology features are included in the image histology features. It can be understood that the deep semantic features and the image histology features each include a plurality of features; the target deep semantic features include part of the deep semantic features, and the target image histology features include part of the image histology features. For example, the target deep semantic features include 2 of the deep semantic features, and the target image histology features include 5 of the image histology features. In this way, the most significant features among the deep semantic features and the image histology features are retained while the number of features is reduced, thereby reducing the model parameter amount.
The variance homogeneity test is an F test; that is, the ratio of the inter-group variance to the intra-group variance of each feature is obtained and sorted, and the target image histology features and the target deep semantic features are selected according to the sorted order. The screening processes of the target deep semantic features and of the target image histology features are the same, and the target deep semantic features are taken as an example for illustration.
The extracted deep semantic features and the implantation gold standard are respectively substituted into the formula for the squared standard deviation, yielding the squared standard deviation S_x of the deep semantic features and the squared standard deviation S_label of the implantation gold standard, where the formula for the squared standard deviation is:
S = Σ(x_i − x̄)² / (n − 1)
where x_i is a sample, x̄ is the average of the samples, and n is the number of samples.
Substituting the squared standard deviations S_x and S_label into the F value calculation formula yields the F value corresponding to the deep semantic feature, where the F value calculation formula is:
F = S_x / S_label
The F value corresponding to each deep semantic feature is obtained based on the above process; the smaller the F value, the more significant the difference between the deep semantic feature and the implantation gold standard. Based on this, the F values are sorted from small to large, and a certain number (e.g., 150) of candidate deep semantic features are selected from front to back. Finally, the candidate deep semantic features are fed into a Lasso algorithm for screening to obtain the target deep semantic features (for example, 150 deep learning semantic features are screened down to 2).
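The F-value screening described above can be sketched with numpy, using the document's own definition of F (each feature's squared standard deviation over that of the implantation gold standard) and keeping the smallest values. The helper name and toy data are illustrative, and the subsequent Lasso refinement step is omitted:

```python
import numpy as np

def f_value_screen(features, labels, k):
    """Keep the k features with the smallest F values, where
    F = (feature variance) / (label variance), per the scheme above."""
    s_label = labels.var(ddof=1)          # squared std of the gold standard
    s_x = features.var(axis=0, ddof=1)    # squared std of each feature column
    f = s_x / s_label
    keep = np.argsort(f)[:k]              # smallest F values first
    return keep, f

rng = np.random.default_rng(2)
X = rng.random((20, 10))                  # 20 samples × 10 deep semantic features
y = rng.integers(0, 2, 20).astype(float)  # toy implantation gold standard
idx, f_vals = f_value_screen(X, y, k=3)
```

In a full pipeline, the columns selected by `idx` would then be passed to a Lasso model for the final screening.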
Further, after the target deep semantic features and the target image histology features are obtained, they may be spliced in the order of target deep semantic features first and target image histology features second, or in the order of target image histology features first and target deep semantic features second. In this embodiment, stitching is performed in the order of target deep semantic features then target image histology features.
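The splicing step is a simple concatenation. Assuming 2 screened deep semantic features and 5 screened image histology features, as in the earlier example (the values below are placeholders):

```python
import numpy as np

target_deep = np.array([0.12, 0.87])                    # 2 screened deep semantic features
target_radiomic = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # 5 screened image histology features

# this embodiment stitches deep semantic features first, image histology second
target_feature = np.concatenate([target_deep, target_radiomic])
assert target_feature.shape == (7,)
```

The resulting 7-dimensional vector is what the classifier in step S30 consumes.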
S30, predicting the implantation category corresponding to the placenta image data based on the target feature.
Specifically, the implantation categories are classified into a placenta-adhesion type, a placenta-implantation type, and a placenta-penetration type according to the depth of placenta implantation, and thus, the implantation category corresponding to placenta image data may be one of a placenta-adhesion type, a placenta-implantation type, and a placenta-penetration type. In addition, placenta implantation may not occur in placenta image data, and thus, the implantation category corresponding to the placenta image data may also be that placenta implantation does not occur.
In one implementation manner, the implantation category may be determined by a classifier, and correspondingly, the predicting, based on the target feature, the implantation category corresponding to the placenta image data specifically includes:
Inputting the target features into a trained classifier, and predicting implantation categories corresponding to the placenta image data through the classifier.
Specifically, the classifier is trained, as shown in fig. 4, and the training process of the classifier specifically includes:
For each classification training sample in a preset classification training set, extracting depth semantic features corresponding to the classification training samples based on the semantic feature extraction module, and extracting image histology features of the classification training samples;
Screening the depth semantic features and the image histology features to obtain target features;
and inputting the target features into a second preset network model, determining a predicted implantation category through the second preset network model, and training the second preset network model based on the predicted implantation category and the labeled implantation category corresponding to the classification training sample, to obtain the classifier.
Specifically, the model structure of the second preset network model is the same as that of the classifier, but the model parameters differ: the second preset network model adopts initial model parameters, while the classifier adopts the trained model parameters. In this embodiment, the classifier may employ a support vector classifier.
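Training a support vector classifier of the kind mentioned above might look as follows. This is a minimal scikit-learn sketch on synthetic two-cluster data standing in for the screened target features, not the application's actual training setup or hyperparameters:

```python
import numpy as np
from sklearn.svm import SVC

# toy stand-in for the 7-dimensional screened target features:
# two well-separated clusters for "implantation" vs "no implantation"
rng = np.random.default_rng(3)
X_pos = rng.normal(loc=2.0, scale=0.2, size=(30, 7))
X_neg = rng.normal(loc=-2.0, scale=0.2, size=(30, 7))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 30 + [0] * 30)

clf = SVC(kernel="rbf")   # support vector classifier with an RBF kernel
clf.fit(X, y)
pred = clf.predict(X)
```

In practice the labels would be the annotated implantation categories, and a held-out classification test set (as described below) would be used to evaluate the trained classifier.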
The preset classification training set comprises a plurality of classification training sample images, each of which is placenta image data; part of the training sample images are placenta image data with placenta implantation, and part are placenta image data without placenta implantation. In this embodiment, the preset segmentation training set and the preset classification training set may be collected synchronously, where the collection process may be: first collecting placenta MRI image data, then dividing the collected placenta MRI image data into a segmentation dataset and a classification dataset, and finally dividing the segmentation dataset into a segmentation training set and a segmentation test set, and the classification dataset into a classification training set and a classification test set.
In one implementation, the determination of the segmentation data set and the classification data set may be:
First step, data collection
Collect a first preset number (for example, 241 cases) of placenta MRI image data with intraoperative pathological confirmation as an internal modeling data set, where each item of placenta MRI image data in the internal modeling data set carries an implantation category; the positive-to-negative sample ratio in the internal modeling data set is 169:72, where a positive sample is placenta MRI image data with placenta implantation and a negative sample is placenta MRI image data without placenta implantation. At the same time, collect a second preset number (for example, 122 cases) of placenta MRI image data as an external independent test set, where each item of placenta MRI image data in the external independent test set carries an implantation category, and the positive-to-negative sample ratio in the external independent test set is 106:16.
The patients corresponding to the placenta MRI image data in the internal modeling data set and the external independent test set are patients who underwent clinical examination, prenatal MRI examination and delivery, who have a history of caesarean section or uterine cavity surgery, and who have complete clinical and imaging data; in addition, they have a gestational age of more than 18 weeks and a singleton pregnancy, without gestational diabetes mellitus, gestational hypertension, blood system abnormalities or similar conditions. Meanwhile, pregnant women with underlying diseases (such as diabetes, thalassemia and the like); cases with fetal growth restriction or fetal disease visible on placenta-related images; pregnant women with MRI examination contraindications; and cases with imaging blurred by motion artifacts or in which the pregnant woman was unable to cooperate with the related examinations are excluded from both the internal modeling data set and the external independent test set.
Second step, delineation of the region of interest
After the training sample set is acquired, the placenta image data is delineated and divided: the region of interest (Region of Interest, ROI) of the placenta MRI data is located and delineated on the patient's MRI images by a radiologist with MRI image diagnostic experience, using the open-source software ITK-SNAP (version 3.4.0; https://www.itksnap.org). Then, a specified number of cases (for example, 40 cases) is randomly selected from the internal modeling data set as the preset segmentation data set, and the remaining placenta MRI image data in the internal modeling data set together with the placenta MRI image data in the external independent test set are used as the preset classification data set.
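The random division above can be sketched as follows; the case identifiers are hypothetical placeholders, and only the example counts (241 internal cases, 122 external cases, 40 segmentation cases) are taken from the text.

```python
import random

random.seed(42)
# Hypothetical case identifiers standing in for the example counts above:
# 241 internal modeling cases and 122 external independent test cases.
internal_ids = [f"case_{i:03d}" for i in range(241)]
external_ids = [f"ext_{i:03d}" for i in range(122)]

# Randomly select a specified number (40 here) of internal cases as the
# preset segmentation data set; all remaining internal cases plus the
# external test cases form the preset classification data set.
segmentation_ids = random.sample(internal_ids, 40)
seg_set = set(segmentation_ids)
classification_ids = [c for c in internal_ids if c not in seg_set] + external_ids

print(len(segmentation_ids), len(classification_ids))  # 40 323
```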
Third step, data preprocessing
1) Unification of image size
Because the MRI data of each patient are not uniform in size along the z-axis, the training effect of the segmentation model would not be ideal; therefore, interpolation resampling is performed on all placenta image data so that the image size is unified to 128×128×96.
2) Resampling
In order to extract the image histology features within the ROI region of the classification data set, a resampling operation is performed on the MRI data. By statistics over the classification training samples contained in the internal modeling data set, the average resolution of these samples is 0.87×0.87×5.8 mm³. To avoid excessive resampling of the data, the data in the classification training set are uniformly resampled to 0.87×0.87×5.8 mm³. Of course, the same resampling operation may be applied to the ROI data delineated by the physician.
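Spacing-based resampling differs from the fixed-grid resize above in that the output shape depends on each volume's native voxel spacing. A minimal sketch, in which the input shape and native spacing are hypothetical values:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_spacing(volume, spacing, target=(0.87, 0.87, 5.8), order=1):
    """Resample a volume from its native voxel spacing (mm) to the target spacing."""
    factors = [s / t for s, t in zip(spacing, target)]
    new_shape = [max(1, round(d * f)) for d, f in zip(volume.shape, factors)]
    real_factors = [n / d for n, d in zip(new_shape, volume.shape)]
    return zoom(volume, real_factors, order=order)

vol = np.random.rand(256, 256, 30)            # hypothetical MRI volume
out = resample_spacing(vol, (0.6, 0.6, 6.0))  # hypothetical native spacing, mm
print(out.shape)  # (177, 177, 31)
```

For the physician-delineated ROI masks, nearest-neighbour interpolation (`order=0`) would keep the label values intact during the same resampling.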
In summary, the present embodiment provides an image histology placenta implantation prediction method based on depth semantic features, the method comprising: obtaining placenta image data; extracting depth semantic features and image histology features of the placenta image data; screening the depth semantic features and the image histology features to obtain target features; and predicting an implantation category corresponding to the placenta image data based on the target features. According to the application, the image histology features and the depth semantic features are combined to obtain feature information with rich dimensional layers, and placenta implantation prediction is then performed based on this feature information, so that the accuracy of placenta implantation prediction can be improved.
Based on the above image histology placenta implantation prediction method based on depth semantic features, this embodiment further provides an image histology placenta implantation prediction device based on depth semantic features; as shown in fig. 6, the device comprises:
The feature extraction module 100 is used for extracting depth semantic features and image histology features of the placenta image data;
The screening module 200 is configured to screen the depth semantic feature and the image histology feature to obtain a target feature;
The classification module 300 is configured to predict an implantation category corresponding to the placenta image data based on the target feature.
Based on the depth semantic feature-based image histology placenta implantation prediction method described above, the present embodiment provides a computer readable storage medium storing one or more programs executable by one or more processors to implement the steps in the depth semantic feature-based image histology placenta implantation prediction method described in the above embodiment.
Based on the depth semantic feature-based image histology placenta implantation prediction method, the application also provides a terminal device, as shown in fig. 7, which comprises at least one processor (processor) 20; a display screen 21; and a memory (memory) 22, which may also include a communication interface (Communications Interface) 23 and a bus 24. Wherein the processor 20, the display 21, the memory 22 and the communication interface 23 may communicate with each other via a bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may invoke logic instructions in the memory 22 to perform the methods of the embodiments described above.
Further, the logic instructions in the memory 22 described above may be implemented in the form of software functional units and, when sold or used as a stand-alone product, stored in a computer readable storage medium.
The memory 22, as a computer readable storage medium, may be configured to store a software program, a computer executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 performs functional applications and data processing, i.e. implements the methods of the embodiments described above, by running software programs, instructions or modules stored in the memory 22.
The memory 22 may include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the terminal device, etc. In addition, the memory 22 may include high-speed random access memory, and may also include nonvolatile memory. For example, a plurality of media capable of storing program codes such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or a transitory storage medium may be used.
In addition, the specific processes by which the storage medium and the processors in the terminal device load and execute the plurality of instructions are described in detail in the method above and are not repeated here.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.