
CN117174282A - Complications prediction method, model training method, device, equipment and storage medium - Google Patents

Complications prediction method, model training method, device, equipment and storage medium

Info

Publication number
CN117174282A
CN117174282A (application CN202310878347.5A)
Authority
CN
China
Prior art keywords
training
image
model
prediction
complication
Prior art date
Legal status
Pending
Application number
CN202310878347.5A
Other languages
Chinese (zh)
Inventor
董敏
石鑫
顾畅
陈雷
王越
Current Assignee
Zhejiang Rulin Biotechnology Co ltd
Original Assignee
Zhejiang Rulin Biotechnology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Rulin Biotechnology Co ltd filed Critical Zhejiang Rulin Biotechnology Co ltd
Priority to CN202310878347.5A
Publication of CN117174282A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a complication prediction method, a model training method, a complication prediction apparatus, a model training apparatus, a computer device, and a non-volatile computer-readable storage medium. The complication prediction method comprises: acquiring a preoperative image, clinical information, and a surgical plan of a patient; segmenting a focal region of the preoperative image to generate a lesion image; and inputting the lesion image, the clinical information, and the surgical plan into a preset prediction model to output a complication prediction result for the patient. By acquiring the preoperative image, clinical information, and surgical plan of the patient, segmenting the focal region of the preoperative image to generate the lesion image, and inputting the lesion image, the clinical information, and the surgical plan into the prediction model, a complication prediction result can be output rapidly. Moreover, because information of multiple dimensions (the lesion image, the clinical information, and the surgical plan) is considered comprehensively when determining the complication prediction result, the accuracy of the result can be further improved.

Description

Complications prediction method, model training method, device, equipment and storage medium
Technical Field
The present application relates to the field of data processing technology, and more particularly, to a complications prediction method, a complications prediction apparatus, a model training method, a model training apparatus, a computer device, and a non-volatile computer readable storage medium.
Background
At present, in clinical practice, a doctor usually predicts the complications a patient may develop after surgery based on the doctor's own expertise and experience. Because each doctor's experience and professional background differ, predictions of possible postoperative complications vary from doctor to doctor. Moreover, it is difficult for a doctor to accurately analyze, within a short time, the complications a patient may develop after surgery, so possible postoperative complications and postoperative guidance cannot be provided quickly. A more accurate and rapid method for predicting postoperative complications is therefore needed.
Disclosure of Invention
The embodiment of the application provides a complication prediction method, a complication prediction device, a model training method, a model training device, a computer device and a non-volatile computer readable storage medium.
The complication prediction method of the embodiment of the application comprises: acquiring a preoperative image, clinical information, and a surgical plan of a patient; segmenting a focal region of the preoperative image to generate a lesion image; and inputting the lesion image, the clinical information, and the surgical plan into a preset prediction model to output a complication prediction result for the patient.
The complication prediction device comprises a first acquisition module, a first segmentation module and a prediction module; the first acquisition module is used for acquiring preoperative images, clinical information and operation schemes of a patient; the first segmentation module is used for segmenting a focus area of the preoperative image to generate a focus image; the prediction module is used for inputting the focus image, the clinical information and the operation scheme to a preset prediction model so as to output a complication prediction result of the patient.
The model training method of the embodiment of the application comprises: obtaining a training set, wherein the training set comprises a plurality of training samples, and the training samples comprise training preoperative images, training clinical information, training surgical plans, and complication information; segmenting a focal region of the training preoperative image of each training sample to generate a training lesion image; and training a preset model according to the training lesion image, training clinical information, training surgical plan, and complication information corresponding to each training sample to obtain a prediction model trained to convergence.
The model training device comprises a second acquisition module, a second segmentation module, and a first training module; the second acquisition module is used for acquiring a training set, wherein the training set comprises a plurality of training samples, and the training samples comprise training preoperative images, training clinical information, training surgical plans, and complication information; the second segmentation module is used for segmenting the focal region of the training preoperative image of each training sample to generate a training lesion image; the first training module is used for training a preset model according to the training lesion image, training clinical information, training surgical plan, and complication information corresponding to each training sample to obtain a prediction model trained to convergence.
The computer device of the embodiment of the application comprises a processor and a memory; the memory stores a computer program that is executed by the processor, the computer program comprising instructions for performing any of the above complication prediction methods, the complication prediction method comprising: acquiring a preoperative image, clinical information, and a surgical plan of a patient; segmenting a focal region of the preoperative image to generate a lesion image; and inputting the lesion image, the clinical information, and the surgical plan into a preset prediction model to output a complication prediction result for the patient. Alternatively, the computer program comprises instructions for performing any of the above model training methods, the model training method comprising: obtaining a training set, wherein the training set comprises a plurality of training samples, and the training samples comprise training preoperative images, training clinical information, training surgical plans, and complication information; segmenting a focal region of the training preoperative image of each training sample to generate a training lesion image; and training a preset model according to the training lesion image, training clinical information, training surgical plan, and complication information corresponding to each training sample to obtain a prediction model trained to convergence.
A non-transitory computer-readable storage medium of an embodiment of the present application contains a computer program which, when executed by a processor, causes the processor to execute any one of the above complication prediction methods, the complication prediction method comprising: acquiring a preoperative image, clinical information, and a surgical plan of a patient; segmenting a focal region of the preoperative image to generate a lesion image; and inputting the lesion image, the clinical information, and the surgical plan into a preset prediction model to output a complication prediction result for the patient. Alternatively, when the computer program is executed by a processor, the processor executes any one of the above model training methods, the model training method comprising: obtaining a training set, wherein the training set comprises a plurality of training samples, and the training samples comprise training preoperative images, training clinical information, training surgical plans, and complication information; segmenting a focal region of the training preoperative image of each training sample to generate a training lesion image; and training a preset model according to the training lesion image, training clinical information, training surgical plan, and complication information corresponding to each training sample to obtain a prediction model trained to convergence.
With the complication prediction method, complication prediction apparatus, model training method, model training apparatus, computer device, and non-volatile computer-readable storage medium of the embodiments of the application, a lesion image is generated by acquiring the preoperative image, clinical information, and surgical plan of a patient and segmenting the focal region of the patient's preoperative image. Compared with predicting directly from the preoperative image, using the lesion image in the subsequent prediction process can improve the accuracy of the complication prediction result. After the lesion image is obtained by segmentation, the lesion image, the clinical information, and the surgical plan are input into the preset prediction model, so that a complication prediction result is output rapidly. And because information of multiple dimensions (the lesion image, the clinical information, and the surgical plan) is considered comprehensively when determining the complication prediction result, the accuracy of the result can be further improved.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow diagram of a method of complication prediction of certain embodiments of the present application;
FIG. 2 is a schematic plan view of a computer device in accordance with certain embodiments of the application;
FIG. 3 is a schematic diagram of the structure of a predictive model of some embodiments of the application;
FIG. 4 is a flow chart of a method of complication prediction of certain embodiments of the present application;
FIG. 5 is a flow chart of a method of complication prediction of certain embodiments of the present application;
FIG. 6 is a flow chart of a method of complication prediction of certain embodiments of the present application;
FIG. 7 is a schematic illustration of a scenario of a method of complication prediction of certain embodiments of the present application;
FIG. 8 is a flow chart of a model training method of some embodiments of the application;
FIG. 9 is a flow chart of a model training method of some embodiments of the application;
FIG. 10 is a schematic view of a scenario of a model training method of some embodiments of the application;
FIG. 11 is a block diagram of a complication prediction apparatus of certain embodiments of the present application;
FIG. 12 is a block diagram of a model training apparatus according to some embodiments of the application;
FIG. 13 is a schematic plan view of a computer device in accordance with certain embodiments of the application;
FIG. 14 is a schematic diagram of a connection state of a non-transitory computer readable storage medium and a processor according to some embodiments of the application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the embodiments of the present application and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1,2 and 3, an embodiment of the present application provides a method for predicting complications, where the method for predicting complications includes:
step 011: acquiring preoperative images, clinical information and a surgical scheme of a patient;
The preoperative image of the patient includes at least one of data generated by magnetic resonance imaging (MRI) of the whole body or a diseased part of the patient and data generated by computed tomography (CT) of the whole body or a diseased part of the patient. The clinical information of the patient includes at least one of basic information and preoperative examination information; for example, the basic information includes at least one of age, sex, height, weight, blood pressure, blood glucose, and blood lipids, and the preoperative examination information includes at least one of the patient's blood biochemical examination information, coagulation function examination information, and tumor marker level information. The surgical plan of the patient includes at least one of the surgical mode, surgical site, surgical duration, surgical type, surgical procedure, and surgical instruments, and is formulated through preoperative analysis and discussion by a plurality of surgical specialists.
Specifically, the preoperative image, clinical information and surgical plan of the patient may be acquired by the computer device 100, wherein the computer device 100 includes the processor 30 and the memory 40, and the computer device 100 may include, but is not limited to, a server, a smart phone, a tablet computer, a notebook computer, a desktop computer, and other smart terminal devices. The processor 30 may acquire preoperative images, clinical information, and surgical plans for a plurality of patients by accessing an electronic medical record system of a plurality of hospitals and store the acquired preoperative images, clinical information, and surgical plans in the memory 40.
Step 012: segmenting a focal region of the preoperative image to generate a focal image;
the focus area is the body area of the patient body where lesions appear.
Specifically, after the processor 30 acquires the preoperative image of the patient, a plurality of experienced radiologists may use specialized annotation software to mark the focal region, i.e., the body part where the lesion appears in the preoperative image, and the annotated preoperative image may be checked by other radiologists. The processor 30 can then segment the preoperative image along the marked focal region, so that a lesion image can be generated and stored in the memory 40. Alternatively, without a physician marking the focal region, the processor 30 may segment the focal region using a dedicated image recognition or image processing algorithm to obtain the lesion image.
Step 013: and inputting the focus image, the clinical information and the operation scheme into a preset prediction model to output a complication prediction result of the patient.
The preset prediction model is a neural network model capable of predicting a patient's postoperative complications; for example, the prediction model may be a convolutional neural network model, a recurrent neural network model (such as a long short-term memory network), a variational autoencoder, or the like.
Specifically, the computer device 100 is preset with a prediction model. The preset prediction model may be executed by the computer device 100 itself, or the prediction model may be located in another computer device 100 (such as a server) communicatively connected to the computer device 100 and run on that server. That is, the model file of the preset prediction model may be stored in the memory 40 of the current computer device 100 and executed by the processor 30 of the current computer device 100, or the model file may be stored in a server communicatively connected to the current computer device 100 and executed by the processor 30 of the server.
The processor 30 inputs the obtained lesion image, clinical information, and surgical plan of the patient into the preset prediction model, and outputs the complication prediction result obtained by the prediction model, so as to determine whether the patient will develop a complication after surgery. When the predicted result indicates a complication, the computer device 100 can output the type of complication, so a doctor can know in advance what postoperative care the patient will need according to that type, or adjust the surgical plan before the operation so as to prevent the predicted complication.
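As an illustration of what such a preset prediction model might look like, the following is a minimal sketch of a multimodal network in PyTorch. The branch structure, layer sizes, and the class name ComplicationPredictor are illustrative assumptions, not the patent's specified architecture:

```python
import torch
import torch.nn as nn

class ComplicationPredictor(nn.Module):
    def __init__(self, n_clinical: int, n_plan: int, n_complications: int):
        super().__init__()
        # CNN branch for the segmented lesion image (single-channel, e.g. a CT slice)
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 32-dim image feature
        )
        # MLP branch for concatenated clinical + surgical-plan features
        self.tabular_branch = nn.Sequential(
            nn.Linear(n_clinical + n_plan, 32), nn.ReLU(),
        )
        # Fusion head: one logit per complication type
        self.head = nn.Linear(32 + 32, n_complications)

    def forward(self, lesion_image, clinical, plan):
        img_feat = self.image_branch(lesion_image)
        tab_feat = self.tabular_branch(torch.cat([clinical, plan], dim=1))
        return self.head(torch.cat([img_feat, tab_feat], dim=1))

# Usage: per-complication probabilities for a single patient
model = ComplicationPredictor(n_clinical=10, n_plan=6, n_complications=5)
probs = torch.sigmoid(model(torch.randn(1, 1, 128, 128),
                            torch.randn(1, 10), torch.randn(1, 6)))
```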
In recent years, deep learning, an important branch of artificial intelligence, has achieved remarkable results in fields such as image recognition, speech recognition, and natural language processing. In the medical field, deep learning also has great application potential; for example, important breakthroughs have been made in medical image analysis, disease prediction, and drug discovery. However, despite these applications, the use of deep learning for predicting postoperative complications remains relatively limited.
Typical models apply deep learning to a single data source of the patient to improve the accuracy and efficiency of complication prediction, neglecting the correlations between the patient's different data. Existing prediction models therefore do not fully utilize the information available for the patient, resulting in lower accuracy of the predicted outcome.
Therefore, by acquiring the preoperative image, clinical information, and surgical plan of the patient, segmenting the focal region of the preoperative image, and using the resulting lesion image in the subsequent prediction process, the accuracy of the complication prediction result can be improved compared with predicting directly from the preoperative image. After the lesion image is obtained by segmentation, the lesion image, the clinical information, and the surgical plan are input into the preset prediction model, so that a complication prediction result is output rapidly. And because information of multiple dimensions (the lesion image, the clinical information, and the surgical plan) is considered comprehensively when determining the complication prediction result, the accuracy of the result can be further improved.
Referring to fig. 4, in certain embodiments, step 012: segmenting a focal region of the preoperative image to generate a focal image, comprising:
step 0121: preprocessing the preoperative image to obtain a preprocessed image containing a focus, wherein the size of the preprocessed image is smaller than that of the preoperative image, and the preprocessing comprises normalization processing; and
Step 0122: inputting the preprocessed image into a preset image segmentation model to generate a focus image.
Specifically, after the preoperative image of the patient is obtained, the processor 30 needs to preprocess it; for example, the processor 30 may normalize the preoperative image according to a preset threshold so that the outline of the focal region is highlighted, thereby obtaining a preprocessed image containing the focal region. Most of the area outside the focal region is removed during preprocessing, yielding a preprocessed image of smaller size and reducing the amount of subsequent image processing. The preprocessed image is then input into a preset image segmentation model to generate the lesion image. The image segmentation model may also be a neural network model, such as a convolutional neural network model, trained in advance to segment the focal region and obtain the lesion image.
The image segmentation model can be trained using, as training samples, images that contain a lesion and are annotated with the image region where the lesion is located; the model is trained to convergence on these samples, so that it can segment the focal region in the preprocessed image as the lesion image.
In this way, by preprocessing the preoperative image to obtain the preprocessed image and then inputting the preprocessed image into the preset image segmentation model to generate the lesion image, automatic lesion segmentation can be achieved without a great deal of manual work by experienced doctors, saving labor cost and making it convenient to rapidly extract the patient's lesion image features in subsequent steps.
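A minimal sketch of this preprocessing and segmentation flow is given below, assuming the preoperative image is a single CT slice held in a NumPy array and that a segmentation model returning a binary lesion mask has been trained elsewhere; the threshold values and crop margin are illustrative assumptions:

```python
import numpy as np

def preprocess(pre_op_image: np.ndarray, lo: float = -100.0,
               hi: float = 400.0, margin: int = 16) -> np.ndarray:
    # Normalize intensities into [0, 1] against preset thresholds so that
    # the outline of the focal region is emphasized
    img = np.clip(pre_op_image, lo, hi)
    img = (img - lo) / (hi - lo)
    # Crop to the bounding box of above-threshold tissue plus a margin,
    # discarding most of the area outside the focal region
    ys, xs = np.where(img > 0.5)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, img.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, img.shape[1])
    return img[y0:y1, x0:x1]  # smaller than the original preoperative image

def segment_lesion(preprocessed: np.ndarray, seg_model) -> np.ndarray:
    # seg_model is assumed to map an image to a binary lesion mask
    mask = seg_model(preprocessed)
    return preprocessed * mask  # keep only the focal region as the lesion image
```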
Referring to fig. 5, in some embodiments, at step 013: inputting lesion images, clinical information, and surgical plan to a preset prediction model to output a patient's complications prediction result, comprising:
step 0131: feature extraction is performed on the focus image, the clinical information and the operation scheme to obtain a first target feature;
step 0132: performing dimension reduction on the first target feature to obtain a second target feature, and constructing a feature vector based on the second target feature;
step 0133: and inputting the feature vector into the prediction model to output a complication prediction result.
Specifically, after the lesion image, the clinical information, and the surgical plan of the patient are acquired, the processor 30 needs to perform feature extraction on them to obtain the first target feature, where the first target feature includes information such as morphological features, texture features, density features, and statistical features. For example, the processor 30 may perform feature extraction on the lesion image, the clinical information, and the surgical plan respectively, using a convolutional neural network (CNN), another deep learning model, or an algorithm such as a support vector machine (SVM).
After the first target feature is obtained, the processor 30 needs to perform dimension reduction on it to obtain the second target feature. For example, the processor 30 uses a dimension reduction algorithm (e.g., principal component analysis (PCA) or linear discriminant analysis (LDA)) to delete the redundant information in the lesion image other than the lesion itself, the redundant information in the clinical information other than the key features for complication prediction, and the redundant information in the surgical plan other than the key features for complication prediction, thereby obtaining the second target feature.
The processor 30 can then construct a feature vector from the key features of the lesion image, the key features of the clinical information, and the key features of the surgical plan included in the second target feature. For example, if the key feature of the lesion image in the second target feature is (1, 2, 3), the key feature of the clinical information is (4, 5, 6), and the key feature of the surgical plan is (7, 8, 9), the feature vector may be expressed as (1, 2, 3, 4, 5, 6, 7, 8, 9). Finally, the processor 30 inputs the feature vector into the prediction model, so that the prediction model can output the complication prediction result.
In this way, features irrelevant to complications can be removed by performing dimension reduction on the features extracted from the lesion image, the clinical information, and the surgical plan, reducing the amount of subsequent processing; and constructing the feature vector from the dimension-reduced features and inputting it into the prediction model can improve the accuracy of the predicted complication result.
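A minimal sketch of this feature construction is shown below, assuming per-modality feature extractors already exist and that PCA objects fitted in advance on training data stand in for the dimension-reduction step; the component counts and function names are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA

def build_feature_vector(image_feats: np.ndarray, clinical_feats: np.ndarray,
                         plan_feats: np.ndarray,
                         pca_img: PCA, pca_clin: PCA, pca_plan: PCA) -> np.ndarray:
    # Dimension-reduce each modality to drop redundant information
    # (first target feature -> second target feature); the PCA objects
    # are assumed to have been fitted on the training data beforehand
    img_key = pca_img.transform(image_feats.reshape(1, -1))
    clin_key = pca_clin.transform(clinical_feats.reshape(1, -1))
    plan_key = pca_plan.transform(plan_feats.reshape(1, -1))
    # Concatenate the key features into one vector,
    # e.g. (1,2,3) + (4,5,6) + (7,8,9) -> (1,2,3,4,5,6,7,8,9)
    return np.concatenate([img_key, clin_key, plan_key], axis=1)
```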
Referring to fig. 6 and 7, in some embodiments, the method of predicting complications further comprises, prior to inputting the lesion image, the clinical information, and the surgical plan into the preset prediction model:
step 014: developing a prediction system 50 corresponding to the prediction model based on a preset framework, and deploying the prediction system 50, wherein the prediction system 50 comprises a visual operation interface 51;
step 013: inputting lesion images, clinical information, and surgical plan to a preset prediction model to output a patient's complications prediction result, comprising:
step 0134: the lesion images, clinical information, and surgical plan are input to the prediction system 50 to output the patient's complications prediction results.
The visual operation interface 51, also called a graphical user interface (GUI), may be a medium that uses computer graphics and image processing techniques to convert data into graphics or images displayed on a screen and interacted with.
Specifically, before inputting the lesion image, the clinical information, and the surgical plan into the preset prediction model, the processor 30 can develop a prediction system 50 corresponding to the prediction model according to a preset framework (e.g., the Streamlit framework for the Python programming language), and can deploy the prediction system 50 to a computer device 100 such as a terminal, a server, or a network device. For example, the processor 30 can deploy the prediction system 50 to a network platform such as a hospital's electronic medical record system, to a doctor's personal computer, or to a server connected to a personal computer.
Processor 30 may input the lesion image, clinical information, and surgical plan of the patient on a visual operation interface 51 included in prediction system 50. Prediction system 50 may extract features of the lesion image, clinical information, and surgical plan of the patient input to prediction system 50 for constructing feature vectors, and then predict the complications prediction result based on the feature vectors, so that the complications prediction result of the patient may be displayed on visual operation interface 51 (as shown in fig. 7).
Alternatively, the prediction system 50 may be developed based on the preprocessing procedure, the image segmentation model, the prediction model, and the like, in which case only a preoperative image needs to be input: the prediction system 50 preprocesses the preoperative image, feeds the preprocessed image to the image segmentation model, which outputs a lesion image, then extracts features of the lesion image, the clinical information, and the surgical plan to construct a feature vector, and predicts the complication result from the feature vector, so that the patient's complication prediction result can be displayed on the visual operation interface 51.
In this way, by developing the prediction system 50 corresponding to the prediction model based on the preset framework and inputting the lesion image, the clinical information, and the surgical plan into the prediction system 50 to output the patient's complication prediction result, the prediction of patient complications can be automated and made intelligent.
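A minimal sketch of such a visual operation interface, assuming the Streamlit framework mentioned above; the field names and the predict() stub are placeholders for the deployed pipeline, not the patent's actual interface:

```python
import streamlit as st

def predict(image_file, clinical, plan):
    # Placeholder for the deployed pipeline: preprocess -> segment ->
    # build feature vector -> prediction model (all assumed to exist)
    return {"complication": "none", "probability": 0.07}

st.title("Postoperative Complication Prediction")

pre_op = st.file_uploader("Preoperative image (CT/MRI)")
age = st.number_input("Age", min_value=0, max_value=120)
mode = st.selectbox("Surgical mode", ["open", "laparoscopic", "endoscopic"])

if st.button("Predict") and pre_op is not None:
    result = predict(pre_op, {"age": age}, {"mode": mode})
    st.write("Complication prediction result:", result)
```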
Referring to fig. 8, an embodiment of the present application provides a model training method, which includes:
step 021: acquiring a training set, wherein the training set comprises a plurality of training samples, and the training samples comprise training preoperative images, training clinical information, training surgical schemes and complication information;
step 022: segmenting a focal region of the training preoperative image of each training sample to generate a training lesion image;
step 023: training the preset model according to the training lesion image, training clinical information, training surgical plan, and complication information corresponding to each training sample to obtain a prediction model trained to convergence.
Specifically, the processor 30 can obtain a training set for training the prediction model, where the training set includes at least a first training sample whose complication information indicates no complication and a second training sample whose complication information indicates a complication, the complications corresponding to the second training samples covering a plurality of types; the training set includes a plurality of training samples, the number of which is not limited here.
Each training sample includes at least a training preoperative image, training clinical information, and a training surgical plan. For example, the processor 30 can obtain preoperative images, clinical information, surgical plans, and the corresponding complication information of a plurality of patients from the electronic medical record systems of a plurality of hospitals and use them as the training set for the prediction model. The processor 30 then segments the focal regions in the acquired training preoperative images to generate training lesion images. Finally, the processor 30 trains the preset model according to the training lesion image, training clinical information, training surgical plan, and complication information corresponding to each training sample, detecting whether the preset model has converged and training until it converges, thereby obtaining the prediction model.
The preset model may be a convolutional neural network, a recurrent neural network, or the like. It can be built in a supervised learning manner and trained by setting the number of layers and the key parameters of the neural network model, or a machine learning algorithm such as a support vector machine (SVM) or random forest may be adopted instead.
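A minimal sketch of step 023 under these definitions, assuming the multimodal network sketched earlier and a DataLoader yielding (lesion image, clinical features, plan features, complication labels) batches; the convergence test on the epoch-loss delta is an illustrative choice, not the patent's criterion:

```python
import torch
import torch.nn as nn

def train_to_convergence(model, loader, lr=1e-3, tol=1e-4, max_epochs=100):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # multi-label complication targets
    prev = float("inf")
    for epoch in range(max_epochs):
        total = 0.0
        for img, clin, plan, label in loader:
            opt.zero_grad()
            loss = loss_fn(model(img, clin, plan), label)
            loss.backward()
            opt.step()
            total += loss.item()
        # Treat training as converged when the epoch loss stops improving
        if abs(prev - total) < tol:
            break
        prev = total
    return model
```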
Referring to fig. 9, in some embodiments, the model training method further comprises:
step 024: acquiring a sample set, wherein the sample set comprises a plurality of samples, and the samples comprise preoperative images, clinical information, surgical schemes and complication information;
step 025: deleting samples in which at least one of the preoperative image, the clinical information, and the surgical plan is incomplete or unqualified, to obtain a target sample set;
step 026: and dividing the target sample set into a training set, a testing set and a verification set according to a preset proportion.
Specifically, the processor 30 can obtain a sample set for training the prediction model; the number of samples in the sample set may be plural and is not limited here, and each sample needs to include a preoperative image, clinical information, a surgical plan, and complication information. After acquiring the preoperative image, clinical information, and surgical plan of each sample, the processor 30 needs to determine whether any of the three is incomplete or unqualified; if at least one of the preoperative image, the clinical information, and the surgical plan in a sample is incomplete or unqualified, the processor 30 deletes that sample, so that the remaining samples form the target sample set.
Here, an incomplete preoperative image is one that does not contain the patient's focal region, and an unqualified preoperative image is one that is not from the patient's current age stage (for example, image data from five years ago). Incomplete clinical information lacks the data needed to predict the patient's complications, and unqualified clinical information is not from the patient's current age stage (for example, clinical data from five years ago). A sample may also be deleted because it lacks surgical plan data. After the samples to be deleted are removed, the target sample set is generated.
Finally, the processor 30 can divide the target sample set into a training set, a test set, and a validation set according to a preset ratio. For example, the target sample set may contain 100 samples and the preset ratio may be 8:1:1, namely 80 training samples, 10 test samples, and 10 validation samples; alternatively, with 100 samples and a preset ratio of 7:2:1, there are 70 training samples, 20 test samples, and 10 validation samples.
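A minimal sketch of steps 024 through 026, assuming each sample is a dictionary; the field names and completeness checks stand in for the domain rules above, and the 8:1:1 ratio matches the first example:

```python
import random

def split_samples(samples, ratios=(0.8, 0.1, 0.1), seed=0):
    # Drop samples with any of the three required inputs missing
    target = [s for s in samples
              if all(s.get(k) is not None
                     for k in ("pre_op_image", "clinical", "plan"))]
    random.Random(seed).shuffle(target)
    n = len(target)
    n_train, n_test = int(n * ratios[0]), int(n * ratios[1])
    train = target[:n_train]
    test = target[n_train:n_train + n_test]
    val = target[n_train + n_test:]
    return train, test, val
```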
Referring to fig. 10, in some embodiments, the model training method further comprises:
step 027: based on five-fold cross-validation, equally dividing the training set and the test set into 5 parts, taking one of the 5 parts as the test set and the other 4 parts as the training set, and training the preset model on each split to obtain 5 candidate models;
step 028: evaluating the performance of each candidate model on its corresponding test set, so as to determine the candidate model with the best performance as the prediction model.
Five-fold cross-validation is a method commonly used in machine learning and data mining to verify the accuracy of models.
Specifically, after the training set and the test set are obtained, the processor 30 may train the preset model using five-fold cross-validation: it divides the combined training and test data equally into 5 parts, uses one part as the test set and the other 4 parts as the training set, and trains the preset model on each split to obtain 5 candidate models. For example, with the 5 parts denoted A, B, C, D, and E: taking A as the test set and BCDE as the training set, the preset model is trained on BCDE to obtain candidate model M1, which is tested on A to obtain its performance Q1 (such as accuracy, recall, or F1 score); taking B as the test set and ACDE as the training set yields candidate model M2 with performance Q2; taking C as the test set and ABDE as the training set yields candidate model M3 with performance Q3; taking D as the test set and ABCE as the training set yields candidate model M4 with performance Q4; and taking E as the test set and ABCD as the training set yields candidate model M5 with performance Q5. Finally, according to the performances Q1, Q2, Q3, Q4, and Q5 of the 5 candidate models, the candidate model with the best performance is determined to be the prediction model.
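A minimal sketch of this selection procedure using scikit-learn's KFold; the random forest stands in for the preset model and accuracy for the performance metric Q1 to Q5, both illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

def five_fold_select(X: np.ndarray, y: np.ndarray):
    best_model, best_score = None, -1.0
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                     random_state=0).split(X):
        candidate = RandomForestClassifier(random_state=0)
        candidate.fit(X[train_idx], y[train_idx])          # 4 parts as training set
        score = accuracy_score(y[test_idx],
                               candidate.predict(X[test_idx]))  # 1 part as test set
        if score > best_score:  # keep the best of M1..M5 by its Q value
            best_model, best_score = candidate, score
    return best_model, best_score
```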
Referring to fig. 11, in order to better implement the method for predicting complications according to the embodiment of the present application, the embodiment of the present application further provides a device 10 for predicting complications. The complication prediction apparatus 10 includes a first acquisition module 11, a first segmentation module 12, and a prediction module 13; the first acquisition module 11 is used for acquiring preoperative images, clinical information and surgical schemes of a patient; the first segmentation module 12 is used for segmenting a focus region of the preoperative image to generate a focus image; the prediction module 13 is used for inputting the lesion image, the clinical information and the surgical scheme to a preset prediction model to output the patient's complication prediction result.
The first segmentation module 12 is specifically configured to preprocess the preoperative image to obtain a preprocessed image containing the lesion, where the size of the preprocessed image is smaller than that of the preoperative image and the preprocessing includes normalization; and to input the preprocessed image into a preset image segmentation model to generate the lesion image.
The prediction module 13 is specifically configured to perform feature extraction on the lesion image, the clinical information, and the surgical plan, so as to obtain a first target feature; performing dimension reduction on the first target feature to obtain a second target feature, and constructing a feature vector based on the second target feature; and inputting the feature vector into the prediction model to output a complication prediction result.
The complication prediction apparatus 10 further includes a development module 14, where the development module 14 is configured to develop a prediction system 50 corresponding to the prediction model based on a preset framework, and deploy the prediction system 50, and the prediction system 50 includes a visual operation interface 51;
the prediction module 13 is specifically configured to input lesion images, clinical information, and surgical protocols to the prediction system 50 to output patient's complications prediction results.
Referring to fig. 12, in order to better implement the model training method according to the embodiment of the present application, the embodiment of the present application further provides a model training apparatus 20. The model training apparatus comprises a second acquisition module 21, a second segmentation module 22, and a first training module 23; the second acquisition module 21 is configured to acquire a training set, where the training set includes a plurality of training samples, and the training samples include training preoperative images, training clinical information, training surgical plans, and complication information; the second segmentation module 22 is configured to segment the focal region of the training preoperative image of each training sample to generate a training lesion image; the first training module 23 is configured to train the preset model according to the training lesion image, training clinical information, training surgical plan, and complication information corresponding to each training sample, so as to obtain a prediction model trained to convergence.
The model training apparatus 20 further comprises a third obtaining module 24, the third obtaining module 24 is configured to obtain a sample set, the sample set comprising a plurality of samples, the samples comprising preoperative images, clinical information, surgical plan and complication information;
the model training apparatus 20 further includes a deleting module 25, where the deleting module 25 is configured to delete samples that are incomplete or failed in at least one of the preoperative image, the clinical information, and the surgical plan, so as to obtain a target sample set;
the model training apparatus 20 further comprises a dividing module 26, where the dividing module 26 is configured to divide the target sample set into a training set, a test set and a verification set according to a preset ratio.
The model training device 20 further includes a second training module 27, where the second training module 27 is configured to equally divide the training set and the test set into 5 parts based on five-fold cross-validation, take one of the 5 parts as the test set and the other 4 parts as the training set, and train the preset model on each split to obtain 5 candidate models;
the model training apparatus 20 further includes a determining module 28, where the determining module 28 is configured to evaluate, according to a test set corresponding to each candidate model, performance of each candidate model, so as to determine, as a prediction model, a candidate model with optimal performance.
Referring to fig. 13, a computer device 100 according to an embodiment of the present application includes a processor 30 and a memory 40, where the memory 40 stores a computer program 60, and the computer program 60 is executed by the processor 30, where the computer program 60 includes instructions for executing the method for predicting a complication according to any one of the above embodiments, or where the computer program 60 includes instructions for executing the method for training a model according to any one of the above embodiments, which is not repeated herein for brevity.
Referring to fig. 14, an embodiment of the present application further provides a computer readable storage medium 300 having stored thereon a computer program 310; when executed by a processor 320, the computer program 310 implements the steps of the complication prediction method of any of the embodiments described above, or the steps of the model training method of any of the embodiments described above, which are not repeated here for brevity.
In the description of the present specification, reference to the terms "certain embodiments," "in one example," "illustratively," and the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiments or examples is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (14)

1. A method of predicting complications, comprising:
acquiring preoperative images, clinical information and a surgical scheme of a patient;
segmenting a focal region of the preoperative image to generate a focal image;
inputting the focus image, the clinical information and the surgical scheme to a preset prediction model to output a complication prediction result of the patient.
2. The method of claim 1, wherein the preoperative image comprises at least one of magnetic resonance imaging data and computed tomography data; the clinical information comprises at least one of basic information and preoperative examination information of the patient, wherein the basic information comprises at least one of age, sex, height, weight, blood pressure, blood glucose, and blood lipids, and the preoperative examination information comprises at least one of blood biochemical examination information, coagulation function examination information, and tumor marker level information; the surgical plan includes at least one of a surgical mode, a surgical site, a surgical duration, a surgical type, a surgical procedure, and a surgical instrument.
3. The method of claim 1, wherein the segmenting the lesion area of the preoperative image to generate a lesion image comprises:
preprocessing the preoperative image to obtain a preprocessed image containing a focus, wherein the preprocessed image is smaller than the preoperative image in size, and the preprocessing comprises normalization processing; and
Inputting the preprocessed image into a preset image segmentation model to generate the focus image.
4. The method of claim 1, wherein the inputting the lesion image, the clinical information, and the surgical plan to a preset prediction model to output the patient's complications prediction result comprises:
extracting features from the lesion image, the clinical information and the surgical plan to obtain a first target feature;
performing dimension reduction on the first target feature to obtain a second target feature, and constructing a feature vector based on the second target feature;
inputting the feature vector to the prediction model to output the complication prediction result.
5. The method of any one of claims 1-4, wherein the predictive model comprises a convolutional neural network model or a recurrent neural network model.
6. The method of claim 1, further comprising, prior to said inputting the lesion image, the clinical information, and the surgical plan to a preset predictive model:
developing a prediction system corresponding to the prediction model based on a preset framework, and deploying the prediction system, wherein the prediction system comprises a visual operation interface;
the inputting the lesion image, the clinical information, and the surgical plan to a preset prediction model to output a complication prediction result of the patient includes:
inputting the lesion image, the clinical information, and the surgical plan to the prediction system to output a complication prediction result of the patient.
7. A method of model training, comprising:
acquiring a training set, wherein the training set comprises a plurality of training samples, and the training samples comprise training preoperative images, training clinical information, training surgical schemes and complication information;
segmenting a focal region of the training preoperative image of the training sample to generate a training focus image;
and training a preset model according to the training focus image, the training clinical information, the training operation scheme and the complication information corresponding to each training sample to obtain a prediction model trained to convergence.
8. The model training method of claim 7, wherein the method further comprises:
obtaining a sample set, the sample set comprising a plurality of samples, the samples comprising preoperative images, clinical information, surgical protocols, and complications information;
deleting samples in which at least one of the preoperative image, the clinical information, and the surgical plan is incomplete or unqualified, to obtain a target sample set;
and dividing the target sample set into the training set, the testing set and the verification set according to a preset proportion.
9. The model training method of claim 8, further comprising:
based on a five-fold cross-validation method, equally dividing the training set and the test set into 5 parts, taking one of the 5 parts as the test set and the other 4 parts as the training set, and training the preset model on each split to obtain 5 candidate models;
and evaluating the performance of each candidate model according to the test set corresponding to each candidate model, so as to determine the candidate model with the optimal performance as the prediction model.
10. The model training method of claim 7, wherein the training set includes at least a first training sample in which the complication information indicates no complication and a second training sample in which the complication information indicates a complication, and the types of complications corresponding to the second training sample include a plurality of types.
11. A complication prediction apparatus, comprising:
the first acquisition module is used for acquiring preoperative images, clinical information and operation schemes of a patient;
the first segmentation module is used for segmenting the focus area of the preoperative image to generate a focus image;
and the prediction module is used for inputting the focus image, the clinical information and the operation scheme to a preset prediction model so as to output a complication prediction result of the patient.
12. A model training device, comprising:
the second acquisition module is used for acquiring a training set, wherein the training set comprises a plurality of training samples, and the training samples comprise training preoperative images, training clinical information, training surgical schemes and complication information;
the second segmentation module is used for segmenting focus areas of the training pre-operation images of the training samples so as to generate training focus images;
the first training module is used for training a preset model according to the training focus image, the training clinical information, the training operation scheme and the complication information corresponding to each training sample so as to obtain a prediction model trained to convergence.
13. A computer device, comprising:
a processor and a memory;
a computer program stored in the memory, the computer program being executable by the processor, the computer program comprising instructions for performing the method of predicting complications according to any one of claims 1 to 6; alternatively, the computer program comprises instructions for performing the model training method of any of claims 7 to 10.
14. A non-transitory computer readable storage medium containing a computer program which, when executed by a processor, causes the processor to perform the complication prediction method of any of claims 1-6; or a model training method as claimed in any one of claims 7 to 10.
CN202310878347.5A 2023-07-17 2023-07-17 Complications prediction method, model training method, device, equipment and storage medium Pending CN117174282A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310878347.5A CN117174282A (en) 2023-07-17 2023-07-17 Complications prediction method, model training method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117174282A 2023-12-05

Family

ID=88941990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310878347.5A Pending CN117174282A (en) 2023-07-17 2023-07-17 Complications prediction method, model training method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117174282A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118824516A (en) * 2024-07-03 2024-10-22 中山大学孙逸仙纪念医院 A diagnostic device for infectious hydronephrosis based on CT images

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118824516A (en) * 2024-07-03 2024-10-22 中山大学孙逸仙纪念医院 A diagnostic device for infectious hydronephrosis based on CT images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination