CN110916701B - Image scanning time prediction method, device, equipment and storage medium - Google Patents
Image scanning time prediction method, device, equipment and storage medium Download PDFInfo
- Publication number
- Grant publication: CN110916701B; Application: CN201911192816.8A
- Authority
- CN
- China
- Prior art keywords
- scanning
- time
- image sequence
- scanning time
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/48—Diagnostic techniques
- A61B6/481—Diagnostic techniques involving the use of contrast agents
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/48—Diagnostic techniques
- A61B6/486—Diagnostic techniques involving generating temporal series of image data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/54—Control of apparatus or devices for radiation diagnosis
- A61B6/545—Control of apparatus or devices for radiation diagnosis involving automatic set-up of acquisition parameters
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R33/00—Arrangements or instruments for measuring magnetic variables
- G01R33/20—Arrangements or instruments for measuring magnetic variables involving magnetic resonance
- G01R33/44—Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
- G01R33/48—NMR imaging systems
Abstract
The application discloses an image scanning time prediction method, apparatus, device, and storage medium, belonging to the field of medical detection. The method comprises the following steps: acquiring a first image sequence of a first scanning time and a second image sequence of a second scanning time, where the two image sequences are obtained by slicing, from multiple angles, the three-dimensional images acquired by scanning at the first and second scanning times; taking the first image sequence and the second image sequence as input to a scanning time prediction model, and determining a predicted time interval through the model; and determining, from the second scanning time and the predicted time interval, a third scanning time at which the target part is next scanned after the second scanning time. In this way, the next scanning time can be predicted continuously during the scanning process, and the target part can be scanned accurately at the predicted appropriate time, so that a scanned image of higher quality is obtained.
Description
Technical Field
The present application relates to the field of medical detection, and in particular, to a method, apparatus, device, and storage medium for predicting image scanning time.
Background
In the field of medical detection, a target part of the human body is often scanned using technologies such as CT (computed tomography) or MRI (magnetic resonance imaging) to obtain a scanned image of the target part, so that a doctor can accurately analyze lesions of the target part from the scanned image.
In the related art, a contrast medium is generally injected into a vein of the human body, and scanning is then performed according to a fixed, preset scanning schedule, for example once every 10 seconds. The contrast effect of the contrast agent at the target part changes as the agent flows through the body, so the quality of the scanned images obtained at different scanning times differs.
However, when scanning follows a fixed schedule, each scanning time may not be an appropriate one, so the resulting scanned images may be of low quality and may fail to satisfy the doctor's lesion-analysis requirements.
Disclosure of Invention
The embodiments of the present application provide an image scanning time prediction method and apparatus, which can be used to solve the problem of low image quality of scanned images in the related art. The technical scheme is as follows:
In one aspect, an image scanning time prediction method is provided, the method comprising:
Acquiring a first image sequence of a first scanning time and a second image sequence of a second scanning time;
The first scanning time and the second scanning time are the times of two scans of the target part, the first scanning time being earlier than the second scanning time; the first image sequence is obtained by slicing, from multiple angles, the three-dimensional image acquired by scanning at the first scanning time, and the second image sequence is obtained by slicing, from multiple angles, the three-dimensional image acquired by scanning at the second scanning time;
Taking the first image sequence and the second image sequence as inputs of a scanning time prediction model, and determining a predicted time interval through the scanning time prediction model, the scanning time prediction model being used to predict the time interval between any scan and the next scan;
and determining a third scanning time for scanning the target part next time after the second scanning time according to the second scanning time and the predicted time interval.
In one aspect, there is provided an image scanning time prediction apparatus, the apparatus comprising:
The first acquisition module is used for acquiring a first image sequence of the first scanning time and a second image sequence of the second scanning time;
The first scanning time and the second scanning time are the times of two adjacent scans, the first scanning time being earlier than the second scanning time; the first image sequence is obtained by slicing, from multiple angles, the three-dimensional image acquired by scanning at the first scanning time, and the second image sequence is obtained by slicing, from multiple angles, the three-dimensional image acquired by scanning at the second scanning time;
A prediction module, configured to take the first image sequence and the second image sequence as inputs of a scan time prediction model and determine a predicted time interval through the scan time prediction model, the scan time prediction model being used to predict the time interval between any one scan and the next scan;
And the determining module is used for determining a third scanning time for scanning the target part next time after the second scanning time according to the second scanning time and the predicted time interval.
In one aspect, an electronic device is provided, the electronic device including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set that is loaded and executed by the processor to implement the image scanning time prediction method described above.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a code set, or an instruction set is stored, the instruction, program, code set, or instruction set being loaded and executed by a processor to implement the image scanning time prediction method described above.
In another embodiment, a computer program product is also provided which, when executed, is adapted to carry out the above-described image scanning time prediction method.
The technical scheme provided by the embodiment of the application has the beneficial effects that:
In the embodiments of the present application, the image sequences of the previous two scans can be obtained, and the scanning time of the next scan can be accurately predicted from those image sequences by the scanning time prediction model. The next scanning time can thus be predicted continuously during the scanning process, and the target part can be scanned accurately at the predicted appropriate time, so that a higher-quality scanned image is obtained. In addition, each image sequence consists of multi-angle slice images obtained by slicing the scanned three-dimensional image from multiple angles, so the scanning time prediction model can make full use of the spatial information of the three-dimensional image when predicting the next scanning time, improving prediction accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an image scanning time prediction method according to an embodiment of the present application;
FIG. 2 is a schematic view of a multi-angle slice according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an enhancement process provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of the structure of an intermediate layer of a scan time prediction model according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a scanning process provided by an embodiment of the present application;
fig. 6 is a block diagram of an image scanning time prediction apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Before explaining the embodiment of the present application in detail, an application scenario of the embodiment of the present application is described.
In the related art, in order to improve the image quality of scanned images, after each scan of the target part the resulting scanned image may be input to a quality evaluation model, which determines whether the image quality satisfies the quality requirement. The scanned image input to the quality evaluation model is usually a cross-sectional image of the target part, and the quality evaluation model is usually a classification model trained in advance on a number of positive sample images that meet the quality requirement and negative sample images that do not. However, this approach can only evaluate the quality of a scanned image; even if an image is judged not to meet the quality requirement, the model can only warn the doctor and cannot directly improve image quality during the scanning process, so the approach has certain limitations.
In the embodiment of the application, in order to fundamentally improve the quality of a scanned image in a scanning process, an image scanning time prediction method is provided to predict the optimal scanning time of the image, so that the scanned image with higher quality is obtained by scanning in the optimal scanning time.
The image scanning time prediction method provided by the embodiment of the application is a scanning time prediction method based on deep learning and is applied to the field of medical detection. For example, in the process of scanning a target part of a human body by adopting technologies such as CT or MRI, the scan time prediction model provided by the embodiment of the application can be used for analyzing the scan results of the previous two times and predicting the next optimal scan time so as to scan the target part at the optimal scan time, thereby obtaining a scan image with higher quality. The target site may be a tissue or organ such as liver, pulmonary portal vein, pulmonary nodule, etc.
For example, in liver CT or MRI image recognition, a plain scan can only provide static anatomical information, whereas contrast-enhanced CT or MRI scanning can provide functional information about blood flow and tissue metabolism, which is particularly important for describing the structure and discriminating the nature of liver tumors. In practice, however, because of individual differences it is difficult for an operator to capture precisely the optimal scanning times of the arterial phase, portal venous phase, hepatic parenchymal phase, and delayed phase of the liver. The method provided by the embodiments of the present application predicts the optimal scanning time in liver contrast radiography and can guide the accurate timing of liver CT or MRI contrast scans, yielding accurate images of the arterial, portal venous, hepatic parenchymal, and delayed phases to assist doctors in making accurate judgments.
Of course, the image scanning time prediction method provided by the embodiment of the application can be applied to other scanning scenes besides human body scanning, such as a scene of scanning animals or objects, and the like, and the embodiment of the application is not limited to this.
Next, terms related to the embodiments of the present application will be explained.
Multi-angle slicing: the two-dimensional cross-sectional slices are first reconstructed into a three-dimensional image, which is then sliced from multiple angles, so that more spatial information can be acquired effectively.
Multi-channel convolutional neural network: a convolutional neural network is a deep learning model that, like an ordinary neural network, consists of neurons with learnable weights and bias constants; it markedly reduces network complexity through local connectivity and weight sharing. In a multi-channel convolutional neural network, different channels receive different image information, which is very effective for processing three-dimensional images with a time series.
Dense Block: degradation problems typically occur when training deeper convolutional neural networks, i.e., a deep network performs worse than a shallow one. A Dense Block is a deep neural network building block that stacks the features of different layers, effectively mitigating gradient vanishing and enhancing feature propagation, thereby improving training convergence and reducing the number of parameters.
Next, an implementation environment according to the application embodiment will be described.
The image scanning time prediction method provided by the embodiment of the application can be applied to electronic equipment. By way of example, the electronic device may be a scanning device, such as a CT machine or an MRI machine, etc. Or the electronic device is a control device of the scanning device, such as a terminal device connected with the scanning device for controlling the scanning device. For example, the terminal device may be a mobile phone, a tablet computer, a computer, or the like.
Fig. 1 is a flowchart of an image scanning time prediction method provided by an embodiment of the present application. The method is applied to an electronic device and, as shown in fig. 1, includes the following steps:
Step 101: a first image sequence at a first scan time and a second image sequence at a second scan time are acquired.
The first scanning time and the second scanning time are the time for scanning the target part twice, and the first scanning time is earlier than the second scanning time. For example, the second scanning time is the current scanning time, and the first scanning time is the last scanning time.
The first image sequence is obtained by slicing a three-dimensional image obtained by scanning at a first scanning time from a plurality of angles, and the second image sequence is obtained by slicing a three-dimensional image obtained by scanning at a second scanning time from a plurality of angles. That is, the first image sequence and the second image sequence are both multi-angle slice images.
As one example, the operation of acquiring a first image sequence for a first scan time includes: acquiring a plurality of first cross-sectional images obtained by scanning a target part at a first scanning time; performing three-dimensional reconstruction on the plurality of first cross-sectional images to obtain a first three-dimensional image of the target part; the first three-dimensional image is sliced from a plurality of angles, and the obtained plurality of first slice images are used as a first image sequence.
Wherein the cross-sectional image refers to a cross-sectional slice image of the target site. As an example, the plurality of first cross-sectional images may be CT images obtained by CT scanning or MRI images obtained by MRI scanning.
As an example, in the process of performing three-dimensional reconstruction on the plurality of first cross-sectional images to obtain the first three-dimensional image of the target portion, image interpolation processing may also be performed on the plurality of first cross-sectional images to reconstruct to obtain three-dimensional images with the same resolution in each direction.
As one example, the first three-dimensional image is sliced from multiple angles, i.e., the first three-dimensional image is sliced from different orientations to obtain multi-angle slice images. The angles may be preset or may be selected randomly, which is not limited in the embodiment of the present application.
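As an illustration only (not part of the patent disclosure), the reconstruction-and-slicing step can be sketched as follows. The volume shape is hypothetical, and for simplicity the "angles" shown are the three orthogonal orientations; arbitrary oblique angles would additionally require interpolating the volume (e.g. with `scipy.ndimage.rotate`).

```python
import numpy as np

def multi_angle_slices(volume):
    """Extract central slices of a reconstructed 3D volume along three
    orthogonal orientations (axial, coronal, sagittal). This is a
    simplified stand-in for multi-angle slicing; oblique angles would
    require interpolating the volume first."""
    d, h, w = volume.shape
    return [
        volume[d // 2, :, :],  # axial slice
        volume[:, h // 2, :],  # coronal slice
        volume[:, :, w // 2],  # sagittal slice
    ]

# Hypothetical volume reconstructed from 32 cross-sections of 64x64 pixels
vol = np.zeros((32, 64, 64), dtype=np.float32)
seq = multi_angle_slices(vol)
print([s.shape for s in seq])  # [(64, 64), (32, 64), (32, 64)]
```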
Referring to fig. 2, fig. 2 is a schematic diagram of a multi-angle slicing according to an embodiment of the present application, where a first three-dimensional image may be sliced according to the multi-angle slicing method shown in fig. 2.
As one example, the operation of acquiring a second image sequence for a second scan time includes: acquiring a plurality of second cross-sectional images obtained by scanning the target part at a second scanning time; performing three-dimensional reconstruction on the plurality of second cross-sectional images to obtain a second three-dimensional image of the target part; slicing the second three-dimensional image from a plurality of angles, and taking the obtained plurality of second slice images as a second image sequence.
Further, after the image sequence of any one scanning time is acquired, it may be stored, so that it can be obtained directly from the stored data when needed. For example, when the target part is scanned at the first scanning time, the first image sequence is generated and stored. After the target part is scanned at the current second scanning time, the first image sequence can then be obtained directly from the stored data, while a plurality of second cross-sectional images obtained by scanning the target part at the second scanning time are acquired and reconstructed in three dimensions into a second three-dimensional image of the target part; the second three-dimensional image is then sliced from multiple angles, the resulting second slice images serving as the second image sequence.
As an example, the first image sequence of the first scanning time and the second image sequence of the second scanning time may be acquired after the scan at the current second scanning time, so that the scanning time of the next scan can be predicted based on the image sequences of the first and second scanning times.
Step 102: the first image sequence and the second image sequence are used as input of a scanning time prediction model, and a prediction time interval is determined through the scanning time prediction model.
The scan time prediction model is used for predicting the time interval between any one scan and the next scan, and can be obtained by training through training data. The predicted time interval refers to a time interval between the second scan time and the scan time of the next scan.
As an example, the scan time prediction model includes an input layer for acquiring input data input into the scan time prediction model, an intermediate layer for performing prediction processing on the input data, and an output layer for processing data output from the intermediate layer to obtain output data.
As an example, the first image sequence and the second image sequence may be received through the input layer; the prediction processing may then be performed n times on the first image sequence and the second image sequence through the intermediate layer to obtain n time intervals, and the n time intervals may be processed through the output layer to obtain the predicted time interval.
Wherein n is a positive integer, and n may be 1 or an integer greater than 1. That is, the first image sequence and the second image sequence may be repeatedly subjected to the prediction processing only 1 time through the intermediate layer, or may be repeatedly subjected to the prediction processing a plurality of times.
The implementation manner of processing the n time intervals through the output layer to obtain the predicted time interval may include the following two ways:
The first implementation mode: an average time interval of the n time intervals is determined by the output layer, and the average time interval is taken as the predicted time interval.
When n is 1, the time interval obtained by the prediction process may be directly determined as the prediction time interval.
The second implementation mode: and carrying out weighted average on the n time intervals through an output layer to obtain a predicted time interval.
Wherein, the weight of each time interval can be preset. For example, the weights of the n time intervals may be set to decrease in order according to the prediction order, or to increase in order according to the prediction order, or the like, which may be set in other manners, and the embodiment of the present application is not limited thereto.
As an example, if the first image sequence and the second image sequence are subjected to 10 repetitions of the prediction process through the intermediate layer to obtain 10 time intervals, the predicted time interval may be determined by the following formula:

t = (w₁t₁ + w₂t₂ + … + wₙtₙ) / (w₁ + w₂ + … + wₙ)

where t is the predicted time interval, tᵢ is the i-th time interval, and wᵢ is the weight of the i-th time interval.
It should be noted that, when the weights of the n time intervals are all 1, the average time interval of the n time intervals may be directly determined as the predicted time interval.
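The two output-layer fusion modes above, plain averaging and weighted averaging of the n predicted intervals, can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the interval values are hypothetical.

```python
import numpy as np

def predicted_interval(intervals, weights=None):
    """Fuse n predicted time intervals into one, as described for the
    output layer: a weighted average that reduces to the plain mean
    when all weights equal 1 (or are omitted)."""
    t = np.asarray(intervals, dtype=float)
    w = np.ones_like(t) if weights is None else np.asarray(weights, dtype=float)
    return float((w * t).sum() / w.sum())

# Hypothetical intervals (in seconds) from n = 4 repeated predictions
print(predicted_interval([10.0, 12.0, 11.0, 13.0]))          # 11.5
print(predicted_interval([10.0, 12.0], weights=[3.0, 1.0]))  # 10.5
```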
As one example, the intermediate layer includes a feature map extraction section, a multi-channel fusion section, and a prediction section, and each of the n prediction processes performed on the first image sequence and the second image sequence by the intermediate layer comprises: determining, by the feature map extraction section, a first feature map corresponding to the first image sequence and a second feature map corresponding to the second image sequence; splicing the first feature map and the second feature map through the multi-channel fusion section to obtain a third feature map; and performing prediction processing on the third feature map through the prediction section to obtain a time interval.
As one example, the feature map extracting section may include a first convolution layer through which a first feature map corresponding to the first image sequence is determined, and a second convolution layer through which a second feature map corresponding to the second image sequence is determined.
As an example, the plurality of first slice images in the first image sequence may be subjected to convolution processing by the first convolution layer, respectively, to obtain feature maps of the plurality of first slice images, and then the first feature map is determined according to the feature maps of the plurality of first slice images. And respectively carrying out convolution processing on a plurality of second slice images in the second image sequence through the second convolution layer to obtain feature images of the plurality of second slice images, and determining the second feature images according to the feature images of the plurality of second slice images.
As one example, the feature maps of the plurality of first slice images may be determined directly as the first feature map, and the feature maps of the plurality of second slice images may be determined as the second feature map.
As another example, the feature map extracting section may further include a first enhancement layer and a second enhancement layer, and the feature map of at least one of the plurality of first slice images may be subjected to enhancement processing by the first enhancement layer, and the feature map after enhancement processing and the feature map of the plurality of first slice images may be determined as the first feature map. And performing enhancement processing on the feature map of at least one second slice image in the plurality of second slice images through the second enhancement layer, and determining the feature map after the enhancement processing and the feature map of the plurality of second slice images as second feature maps.
Wherein the enhancement process includes at least one of a rotation process and a mirroring process. Referring to fig. 3, fig. 3 is a schematic diagram of enhancement processing provided in an embodiment of the present application, as shown in fig. 3, for a feature map of a certain first slice image in a plurality of first slice images, rotation processing and mirror image processing may be performed on the feature map, to obtain a feature map after rotation processing and a feature map after mirror image processing.
As an example, at least one feature map of the first slice image may be selected from the feature maps of the plurality of first slice images, and then the selected feature map may be subjected to enhancement processing. And selecting at least one feature map of the second slice image from the feature maps of the second slice images, and then performing enhancement processing on the selected feature map. The selection policy of the feature map may be preset according to actual needs, and for example, a random selection policy may be used for selection.
The enhancement layer is used to enhance the feature map: by adopting enhanced inputs and processing strategies such as rotation and mirroring, the robustness of the model can be improved.
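The rotation and mirroring operations can be sketched in a few lines of NumPy. This is a minimal illustration only; the specific transforms chosen here (a 90-degree rotation and a left-right flip) are assumptions, not the patent's exact enhancement operations.

```python
import numpy as np

def enhance(feature_map):
    """Return rotated and mirrored variants of a 2-D feature map.

    A minimal sketch of the enhancement layer described above; the
    90-degree rotation and left-right mirror are illustrative choices.
    """
    rotated = np.rot90(feature_map)    # rotation processing
    mirrored = np.fliplr(feature_map)  # mirror (flip) processing
    return rotated, mirrored

fmap = np.arange(9).reshape(3, 3)
rot, mir = enhance(fmap)
print(rot.shape, mir.shape)  # (3, 3) (3, 3)
```

Both variants keep the spatial size of the original feature map, so they can be appended to the set of feature maps that forms the first (or second) feature map without any reshaping.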
As one example, the prediction portion may be a convolutional neural network, such as one built from Dense Blocks. A Dense Block is a densely connected network module that concatenates the feature maps of different layers; this mitigates gradient vanishing, strengthens feature propagation, improves training convergence, and reduces the number of parameters.
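The dense connectivity pattern can be illustrated with a toy NumPy sketch: each layer receives the channel-wise concatenation of all earlier outputs. The layer count, growth rate, and the plain linear-plus-ReLU "layer" used here are illustrative assumptions and not the patent's exact architecture.

```python
import numpy as np

def dense_block(x, num_layers=3, growth=4, rng=None):
    """Toy dense connectivity: each layer sees the channel-wise
    concatenation of all earlier feature maps (illustrative only)."""
    if rng is None:
        rng = np.random.default_rng(0)
    features = [x]  # running list of all feature maps produced so far
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)          # stack along channels
        w = rng.standard_normal((growth, inp.shape[0]))  # per-layer weights
        out = np.maximum(w @ inp.reshape(inp.shape[0], -1), 0)  # linear + ReLU
        features.append(out.reshape(growth, *x.shape[1:]))
    return np.concatenate(features, axis=0)

x = np.ones((2, 8, 8))  # 2 input channels, 8x8 spatial size
y = dense_block(x)
print(y.shape)  # (14, 8, 8): 2 input channels + 3 layers * growth 4
```

The output channel count grows linearly (input channels plus `num_layers * growth`), which is how dense connectivity "stacks different layers of features" while keeping each individual layer small.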
Referring to fig. 4, fig. 4 is a schematic structural diagram of the intermediate layer of a scan time prediction model according to an embodiment of the present application. As shown in fig. 4, the intermediate layer of the scan time prediction model includes a feature map extracting portion, a multi-channel fusion portion, and a prediction portion. The feature map extracting portion comprises two convolution layers and two enhancement layers: the convolution layers perform convolution processing on the corresponding image sequences, and the enhancement layers perform enhancement processing on the feature maps obtained by the convolution processing. The multi-channel fusion portion performs multi-channel fusion on the enhanced feature maps of the two image sequences, that is, it splices the first feature map F1 and the second feature map F2 corresponding to the two image sequences to obtain a new third feature map F. The prediction portion is a convolutional neural network that performs prediction processing on the third feature map F output by the multi-channel fusion portion to obtain a time interval.
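The splicing of F1 and F2 into F amounts to a channel-wise concatenation, which can be sketched as follows. The shapes below are illustrative assumptions; the patent does not specify feature map dimensions.

```python
import numpy as np

# Sketch of the multi-channel fusion step: the first feature map F1 and
# the second feature map F2 are spliced (concatenated) along the channel
# axis to form the third feature map F. Shapes are illustrative.
f1 = np.zeros((16, 32, 32))  # feature map of the first image sequence
f2 = np.ones((16, 32, 32))   # feature map of the second image sequence
f = np.concatenate([f1, f2], axis=0)  # channel-wise splice
print(f.shape)  # (32, 32, 32)
```

Concatenating along the channel axis (rather than, say, adding the maps) preserves the information of both scans separately, leaving it to the prediction portion to learn how the two phases relate.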
Step 103: and determining a third scanning time for scanning the target part next time after the second scanning time according to the second scanning time and the predicted time interval.
That is, according to the second scanning time and the predicted time interval, a third scanning time that is later than the second scanning time by the predicted time interval can be determined; this third scanning time is the predicted scanning time of the next scan.
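Step 103 is simple time arithmetic: the third scan time equals the second scan time plus the predicted interval. The concrete times and interval below are illustrative values.

```python
from datetime import datetime, timedelta

# Third scan time = second scan time + predicted interval.
# The specific times here are illustrative, not from the patent.
second_scan_time = datetime(2019, 11, 28, 10, 0, 0)
predicted_interval = timedelta(seconds=45)  # output of the prediction model
third_scan_time = second_scan_time + predicted_interval
print(third_scan_time)  # 2019-11-28 10:00:45
```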
Further, after the third scan time is determined, the target portion may be scanned at the third scan time. For example, when the current time reaches the third scanning time, the scanning device can be controlled to scan the target part.
In the embodiment of the application, the optimal scanning time of the next phase can be predicted from the image scanning results of the previous two phases, providing an accurate scanning time suggestion for the next scanning phase. For example, the predicted scanning time can be fed back to a CT or MRI scanning device in real time and a suggestion instruction sent to the operator for the next phase of scanning, which improves the image accuracy of each phase in liver scanning and further assists doctors in improving clinical diagnosis accuracy.
The third scanning time may be the optimal scanning time for the next scanning, and the target portion is scanned at the predicted third scanning time, so that a higher quality scanned image may be obtained.
Before the prediction time interval is determined by the scan time prediction model, training data must be acquired, and the scan time prediction model to be trained is trained with this data to obtain a scan time prediction model capable of predicting the time interval between any one scan and the next scan.
As one example, the training process of the scan time prediction model includes: acquiring a plurality of sample data, each sample data comprising an image sequence of a first sample scan time and a second sample scan time, and a sample time interval; and training the scanning time prediction model to be trained according to the plurality of sample data to obtain the scanning time prediction model.
The image sequence is a set of multi-angle slice images obtained by slicing a scanned three-dimensional image from a plurality of angles. The first sample scanning time and the second sample scanning time are the times of two adjacent scans of the target portion, the first sample scanning time being earlier than the second sample scanning time. The sample time interval is the time interval between the second sample scanning time and the third sample scanning time, where the third sample scanning time refers to the scanning time at which the target portion is scanned after the second sample scanning time so as to obtain a scanned image meeting the image quality requirement.
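A training sample as described above can be represented by a small record type. The field names here are assumptions introduced for illustration; the patent only specifies the three components of each sample.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Sample:
    """One training sample: two image sequences and the target interval.

    Field names are hypothetical; the patent specifies only the content.
    """
    first_sequence: List = field(default_factory=list)   # slices at first sample scan time
    second_sequence: List = field(default_factory=list)  # slices at second sample scan time
    interval_seconds: float = 0.0  # gap between second and third sample scan times

s = Sample(interval_seconds=40.0)
print(s.interval_seconds)  # 40.0
```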
As an example, the plurality of sample data may be screened by a doctor from scanned images obtained during actual scanning, or may be obtained in other manners, which is not limited in the embodiments of the present application.
In the embodiment of the application, the image sequence of the previous two scans can be obtained, and the scanning time of the next scan is accurately predicted through the scanning time prediction model according to the image sequence of the previous two scans, so that the next scanning time can be continuously predicted in the scanning process, and the target part can be accurately scanned at the predicted proper scanning time, thereby obtaining a scanning image with higher quality and improving the quality of the scanning image. In addition, the image sequence is a multi-angle slice image obtained by slicing the scanned three-dimensional image from multiple angles, so that the scanning time prediction model can fully utilize the spatial information of the three-dimensional image to predict the next scanning time, and the prediction accuracy is improved.
In addition, by collecting multi-angle slice images of the target portion and training on them simultaneously, the spatial information of almost the entire three-dimensional image can be utilized; moreover, the multi-angle slice images effectively augment the input and suppress overfitting, which further reduces the design and training difficulty of the model. Furthermore, by adopting a multi-channel convolutional neural network, image sequences from different scanning times can be processed simultaneously, further improving the prediction accuracy. Finally, by adopting enhanced inputs and multiple predictions, the spatial information of the entire three-dimensional image can be fully used for prediction, improving the robustness of the model.
As an example, in the actual scanning process, each time after the scanning is completed once, the image sequence of the current scanning and the image sequence of the last scanning are used as inputs of a scanning time prediction model, and the optimal scanning time of the next scanning is determined through the scanning time prediction model.
Fig. 5 is a schematic diagram of a scanning process according to an embodiment of the present application, and as shown in fig. 5, the scanning process includes the following steps:
1. Before the contrast agent is injected, scan the target site to obtain image sequence 0.
2. After the contrast agent is injected, scan the target site to obtain image sequence 1.
3. Take image sequence 0 and image sequence 1 as inputs of the scan time prediction model, predict through the model the time interval until the third scan, and thereby determine the third scanning time.
4. Scan the target portion at the predicted third scanning time to obtain image sequence 2.
5. Take image sequence 1 and image sequence 2 as inputs of the scan time prediction model, predict the time interval until the fourth scan, and thereby determine the fourth scanning time.
6. Scan the target portion at the predicted fourth scanning time to obtain image sequence 3.
7. By analogy, take image sequence n-1 and image sequence n as inputs of the scan time prediction model, predict the time interval until the next scan, and thereby determine the next scanning time.
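The stepwise procedure above reduces to a loop: scan twice, then repeatedly predict the next interval from the last two sequences and scan again. `scan` and `predict_interval` below are hypothetical stand-ins for the scanning device and the trained scan time prediction model.

```python
# Sketch of the scanning loop from fig. 5. Both callables are
# hypothetical placeholders, not APIs defined by the patent.
def run_scanning(scan, predict_interval, num_phases=4):
    sequences = [scan(phase=0), scan(phase=1)]  # pre- and post-contrast scans
    intervals = []
    for phase in range(2, num_phases):
        # Predict the interval until the next scan from the last two sequences.
        interval = predict_interval(sequences[-2], sequences[-1])
        intervals.append(interval)  # in practice: wait `interval`, then scan
        sequences.append(scan(phase=phase))
    return sequences, intervals

seqs, ivals = run_scanning(lambda phase: f"sequence {phase}",
                           lambda a, b: 30.0)
print(seqs)   # ['sequence 0', 'sequence 1', 'sequence 2', 'sequence 3']
print(ivals)  # [30.0, 30.0]
```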
Fig. 6 is a block diagram of an apparatus for predicting image scanning time according to an embodiment of the present application, and as shown in fig. 6, the apparatus includes a first acquisition module 601, a prediction module 602, and a determination module 603.
A first acquiring module 601, configured to acquire a first image sequence at a first scanning time and a second image sequence at a second scanning time;
The first scanning time and the second scanning time are times for scanning the target part twice, the first scanning time is earlier than the second scanning time, the first image sequence is obtained by slicing the three-dimensional image obtained by scanning the first scanning time from multiple angles, and the second image sequence is obtained by slicing the three-dimensional image obtained by scanning the second scanning time from multiple angles;
A prediction module, configured to take the first image sequence and the second image sequence as inputs of a scan time prediction model, determine a prediction time interval through the scan time prediction model, and use the scan time prediction model to predict a time interval between any one scan and a next scan;
And the determining module is used for determining a third scanning time for scanning the target part next time after the second scanning time according to the second scanning time and the predicted time interval.
Optionally, the first obtaining module 601 is configured to:
Acquiring a plurality of first cross-sectional images obtained by scanning the target part at the first scanning time; performing three-dimensional reconstruction on the plurality of first cross-sectional images to obtain a first three-dimensional image of the target part; slicing the first three-dimensional image from the plurality of angles, and taking a plurality of obtained first slice images as the first image sequence;
Acquiring a plurality of second cross-sectional images obtained by scanning the target part at the second scanning time; performing three-dimensional reconstruction on the plurality of second cross-sectional images to obtain a second three-dimensional image of the target part; slicing the second three-dimensional image from the plurality of angles, and taking the obtained plurality of second slice images as the second image sequence.
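The "slicing from a plurality of angles" step can be sketched by extracting orthogonal slices from a reconstructed 3-D volume. Using the three central axial, coronal, and sagittal slices is an illustrative choice of "multiple angles"; the patent does not fix the angles or slice positions.

```python
import numpy as np

def multi_angle_slices(volume):
    """Central axial, coronal, and sagittal slices of a 3-D volume.

    A minimal sketch; the choice of three orthogonal central slices
    is an assumption for illustration.
    """
    z, y, x = (s // 2 for s in volume.shape)
    return [volume[z, :, :],   # axial
            volume[:, y, :],   # coronal
            volume[:, :, x]]   # sagittal

vol = np.zeros((8, 16, 24))  # reconstructed 3-D image (illustrative shape)
slices = multi_angle_slices(vol)
print([s.shape for s in slices])  # [(16, 24), (8, 24), (8, 16)]
```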
Optionally, the scan time prediction model includes an input layer, an intermediate layer, and an output layer;
the prediction module 602 is configured to:
taking the first image sequence and the second image sequence as input of the input layer;
Repeatedly carrying out prediction processing on the first image sequence and the second image sequence for n times through the intermediate layer to obtain n time intervals, wherein n is a positive integer;
and carrying out weighted average on the n time intervals through the output layer to obtain the predicted time interval.
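The output layer's combination of the n predicted intervals is a weighted average. Uniform weights are an assumption here; the patent says only that the average is weighted, without specifying the weights.

```python
import numpy as np

# Weighted average of n predicted time intervals (n = 4, values
# illustrative). Uniform weights are an assumption.
intervals = np.array([38.0, 42.0, 40.0, 44.0])
weights = np.full(len(intervals), 1.0 / len(intervals))
predicted_interval = float(np.dot(weights, intervals))
print(predicted_interval)  # 41.0
```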
Optionally, the intermediate layer includes a feature map extraction portion, a multi-channel fusion portion, and a prediction portion;
the prediction module 602 is configured to:
For any one of the n prediction processes, determining, by the feature map extracting section, a first feature map corresponding to the first image sequence and a second feature map corresponding to the second image sequence;
splicing the first characteristic diagram and the second characteristic diagram through the multi-channel fusion part to obtain a third characteristic diagram;
And carrying out prediction processing on the third characteristic map through the prediction part to obtain a time interval.
Optionally, the feature map extracting part includes a first convolution layer and a second convolution layer;
the prediction module 602 is configured to:
Respectively carrying out convolution processing on a plurality of first slice images in the first image sequence through the first convolution layer to obtain feature images of the plurality of first slice images, and determining the first feature images according to the feature images of the plurality of first slice images;
and respectively carrying out convolution processing on a plurality of second slice images in the second image sequence through the second convolution layer to obtain feature images of the plurality of second slice images, and determining the second feature images according to the feature images of the plurality of second slice images.
Optionally, the feature map extracting part further includes a first enhancement layer and a second enhancement layer;
the prediction module 602 is configured to:
Performing enhancement processing on the feature map of at least one first slice image of the plurality of first slice images through the first enhancement layer, and determining the feature map after the enhancement processing and the feature map of the plurality of first slice images as the first feature map, wherein the enhancement processing comprises at least one of rotation processing and mirror image processing;
the determining the second feature map according to the feature maps of the plurality of second slice images includes:
and performing enhancement processing on the feature map of at least one second slice image in the plurality of second slice images through the second enhancement layer, and determining the feature map after the enhancement processing and the feature map of the plurality of second slice images as the second feature map.
Optionally, the apparatus further comprises:
A second acquisition module for acquiring a plurality of sample data, each sample data including an image sequence of a first sample scan time and a second sample scan time, and a sample time interval;
The first sample scanning time and the second sample scanning time are times for scanning the target part twice, the first sample scanning time is earlier than the second sample scanning time, the sample time interval is a time interval between the second sample scanning time and a third sample scanning time, and the third sample scanning time refers to a scanning time for scanning the target part after the second sample scanning time to obtain a scanning image meeting an image quality requirement;
And the training module is used for training the scanning time prediction model to be trained according to the plurality of sample data to obtain the scanning time prediction model.
In the embodiment of the application, the image sequence of the previous two scans can be obtained, and the scanning time of the next scan is accurately predicted through the scanning time prediction model according to the image sequence of the previous two scans, so that the next scanning time can be continuously predicted in the scanning process, and the target part can be accurately scanned at the predicted proper scanning time, thereby obtaining a scanning image with higher quality and improving the quality of the scanning image. In addition, the image sequence is a multi-angle slice image obtained by slicing the scanned three-dimensional image from multiple angles, so that the scanning time prediction model can fully utilize the spatial information of the three-dimensional image to predict the next scanning time, and the prediction accuracy is improved.
It should be noted that: when the image scanning time prediction device provided in the above embodiment predicts the scanning time, only the division of the above functional modules is used for illustration, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the image scanning time prediction apparatus and the image scanning time prediction method provided in the foregoing embodiments belong to the same concept, and detailed implementation processes of the image scanning time prediction apparatus and the image scanning time prediction method are detailed in the method embodiments and are not repeated here.
Fig. 7 is a schematic structural diagram of an electronic device 700 according to an embodiment of the present application, where the electronic device 700 may be a scanning device, such as a CT machine or an MRI machine. Or the electronic device 700 is a control device of the scanning device, such as a terminal device connected to the scanning device for controlling the scanning device. For example, the terminal device may be a mobile phone, a tablet computer, a computer, or the like. The electronic device 700 may include one or more processors (central processing units, CPU) 701 and one or more memories 702, where the memories 702 store at least one instruction, and the at least one instruction is loaded and executed by the processors 701 to implement the image scanning time prediction method provided in the above method embodiments. Of course, the electronic device 700 may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
In an exemplary embodiment, a computer readable storage medium having instructions stored thereon that when executed by a processor implement the above-described image scan time prediction method is also provided.
In an exemplary embodiment, a computer program product is also provided, which, when executed, is adapted to carry out the above-described image scanning time prediction method.
It should be understood that references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit the application; the scope of the application is defined by the appended claims.
Claims (16)
1. An image scan time prediction method, the method comprising:
Acquiring a first image sequence of a first scanning time and a second image sequence of a second scanning time;
The first scanning time and the second scanning time are times for scanning the target part twice, the first scanning time is earlier than the second scanning time, the first image sequence is obtained by slicing the three-dimensional image obtained by scanning the first scanning time from multiple angles, and the second image sequence is obtained by slicing the three-dimensional image obtained by scanning the second scanning time from multiple angles;
Taking the first image sequence and the second image sequence as inputs of a scanning time prediction model, and determining a prediction time interval through the scanning time prediction model, wherein the scanning time prediction model is used for predicting the time interval between any scanning and the next scanning;
and determining a third scanning time for scanning the target part next time after the second scanning time according to the second scanning time and the predicted time interval.
2. The method of claim 1, wherein the acquiring the first image sequence at the first scan time and the second image sequence at the second scan time comprises:
Acquiring a plurality of first cross-sectional images obtained by scanning the target part at the first scanning time; performing three-dimensional reconstruction on the plurality of first cross-sectional images to obtain a first three-dimensional image of the target part; slicing the first three-dimensional image from the plurality of angles, and taking a plurality of obtained first slice images as the first image sequence;
Acquiring a plurality of second cross-sectional images obtained by scanning the target part at the second scanning time; performing three-dimensional reconstruction on the plurality of second cross-sectional images to obtain a second three-dimensional image of the target part; slicing the second three-dimensional image from the plurality of angles, and taking the obtained plurality of second slice images as the second image sequence.
3. The method of claim 1, wherein the scan-time prediction model comprises an input layer, an intermediate layer, and an output layer;
Said determining a prediction time interval by means of a scan time prediction model using said first image sequence and said second image sequence as inputs to said scan time prediction model comprises:
taking the first image sequence and the second image sequence as input of the input layer;
Repeatedly carrying out prediction processing on the first image sequence and the second image sequence for n times through the intermediate layer to obtain n time intervals, wherein n is a positive integer;
and carrying out weighted average on the n time intervals through the output layer to obtain the predicted time interval.
4. A method according to claim 3, wherein the intermediate layer comprises a feature map extraction portion, a multi-channel fusion portion, and a prediction portion;
The performing prediction processing on the first image sequence and the second image sequence repeatedly for n times through the intermediate layer includes:
For any one of the n prediction processes, determining, by the feature map extracting section, a first feature map corresponding to the first image sequence and a second feature map corresponding to the second image sequence;
splicing the first characteristic diagram and the second characteristic diagram through the multi-channel fusion part to obtain a third characteristic diagram;
And carrying out prediction processing on the third characteristic map through the prediction part to obtain a time interval.
5. The method of claim 4, wherein the feature map extraction portion comprises a first convolution layer and a second convolution layer;
the determining, by the feature map extracting section, a first feature map corresponding to the first image sequence and a second feature map corresponding to the second image sequence, includes:
Respectively carrying out convolution processing on a plurality of first slice images in the first image sequence through the first convolution layer to obtain feature images of the plurality of first slice images, and determining the first feature images according to the feature images of the plurality of first slice images;
and respectively carrying out convolution processing on a plurality of second slice images in the second image sequence through the second convolution layer to obtain feature images of the plurality of second slice images, and determining the second feature images according to the feature images of the plurality of second slice images.
6. The method of claim 5, wherein the feature map extraction portion further comprises a first enhancement layer and a second enhancement layer;
the determining the first feature map according to the feature maps of the plurality of first slice images includes:
Performing enhancement processing on the feature map of at least one first slice image of the plurality of first slice images through the first enhancement layer, and determining the feature map after the enhancement processing and the feature map of the plurality of first slice images as the first feature map, wherein the enhancement processing comprises at least one of rotation processing and mirror image processing;
the determining the second feature map according to the feature maps of the plurality of second slice images includes:
and performing enhancement processing on the feature map of at least one second slice image in the plurality of second slice images through the second enhancement layer, and determining the feature map after the enhancement processing and the feature map of the plurality of second slice images as the second feature map.
7. The method of any of claims 1-6, wherein prior to determining a prediction time interval by the scan time prediction model, further comprising:
Acquiring a plurality of sample data, each sample data comprising an image sequence of a first sample scan time and a second sample scan time, and a sample time interval;
The first sample scanning time and the second sample scanning time are times for scanning the target part twice, the first sample scanning time is earlier than the second sample scanning time, the sample time interval is a time interval between the second sample scanning time and a third sample scanning time, and the third sample scanning time refers to a scanning time for scanning the target part after the second sample scanning time to obtain a scanning image meeting an image quality requirement;
And training the scanning time prediction model to be trained according to the plurality of sample data to obtain the scanning time prediction model.
8. A scanning time determining apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a first image sequence of a first scanning time and a second image sequence of a second scanning time;
The first scanning time and the second scanning time are times for scanning the target part twice, the first scanning time is earlier than the second scanning time, the first image sequence is obtained by slicing the three-dimensional image obtained by scanning the first scanning time from multiple angles, and the second image sequence is obtained by slicing the three-dimensional image obtained by scanning the second scanning time from multiple angles;
A prediction module, configured to take the first image sequence and the second image sequence as inputs of a scan time prediction model, determine a prediction time interval through the scan time prediction model, and use the scan time prediction model to predict a time interval between any one scan and a next scan;
And the determining module is used for determining a third scanning time for scanning the target part next time after the second scanning time according to the second scanning time and the predicted time interval.
9. The apparatus of claim 8, wherein the first acquisition module is configured to:
Acquiring a plurality of first cross-sectional images obtained by scanning the target part at the first scanning time; performing three-dimensional reconstruction on the plurality of first cross-sectional images to obtain a first three-dimensional image of the target part; slicing the first three-dimensional image from the plurality of angles, and taking a plurality of obtained first slice images as the first image sequence;
Acquiring a plurality of second cross-sectional images obtained by scanning the target part at the second scanning time; performing three-dimensional reconstruction on the plurality of second cross-sectional images to obtain a second three-dimensional image of the target part; slicing the second three-dimensional image from the plurality of angles, and taking the obtained plurality of second slice images as the second image sequence.
10. The apparatus of claim 8, wherein the scan-time prediction model comprises an input layer, an intermediate layer, and an output layer;
The prediction module is used for:
taking the first image sequence and the second image sequence as input of the input layer;
Repeatedly carrying out prediction processing on the first image sequence and the second image sequence for n times through the intermediate layer to obtain n time intervals, wherein n is a positive integer;
and carrying out weighted average on the n time intervals through the output layer to obtain the predicted time interval.
11. The apparatus of claim 10, wherein the intermediate layer comprises a feature map extraction portion, a multi-channel fusion portion, and a prediction portion;
The prediction module is used for:
For any one of the n prediction processes, determining, by the feature map extracting section, a first feature map corresponding to the first image sequence and a second feature map corresponding to the second image sequence;
splicing the first characteristic diagram and the second characteristic diagram through the multi-channel fusion part to obtain a third characteristic diagram;
And carrying out prediction processing on the third characteristic map through the prediction part to obtain a time interval.
12. The apparatus of claim 11, wherein the feature map extraction portion comprises a first convolution layer and a second convolution layer;
The prediction module is used for:
Respectively carrying out convolution processing on a plurality of first slice images in the first image sequence through the first convolution layer to obtain feature images of the plurality of first slice images, and determining the first feature images according to the feature images of the plurality of first slice images;
and respectively carrying out convolution processing on a plurality of second slice images in the second image sequence through the second convolution layer to obtain feature images of the plurality of second slice images, and determining the second feature images according to the feature images of the plurality of second slice images.
13. The apparatus of claim 12, wherein the feature map extraction portion further comprises a first enhancement layer and a second enhancement layer;
The prediction module is used for:
Performing enhancement processing on the feature map of at least one first slice image of the plurality of first slice images through the first enhancement layer, and determining the feature map after the enhancement processing and the feature map of the plurality of first slice images as the first feature map, wherein the enhancement processing comprises at least one of rotation processing and mirror image processing;
and performing enhancement processing on the feature map of at least one second slice image in the plurality of second slice images through the second enhancement layer, and determining the feature map after the enhancement processing and the feature map of the plurality of second slice images as the second feature map.
14. The apparatus according to any one of claims 8-13, wherein the apparatus further comprises:
A second acquisition module for acquiring a plurality of sample data, each sample data including an image sequence of a first sample scan time and a second sample scan time, and a sample time interval;
The first sample scanning time and the second sample scanning time are times for scanning the target part twice, the first sample scanning time is earlier than the second sample scanning time, the sample time interval is a time interval between the second sample scanning time and a third sample scanning time, and the third sample scanning time refers to a scanning time for scanning the target part after the second sample scanning time to obtain a scanning image meeting an image quality requirement;
And the training module is used for training the scanning time prediction model to be trained according to the plurality of sample data to obtain the scanning time prediction model.
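The sample-data structure of claim 14, and how a predicted interval is used at inference, can be sketched as follows. The names (`SampleData`, `train_stub`) and the constant mean-interval "model" are hypothetical stand-ins — the patent's actual scan-time prediction model is a trained network, not this placeholder:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SampleData:
    first_seq: list = field(default_factory=list)   # image sequence at first sample scanning time
    second_seq: list = field(default_factory=list)  # image sequence at second sample scanning time
    interval: float = 0.0  # seconds from the second scan to the quality-passing third scan

def train_stub(samples: List[SampleData]) -> float:
    """Placeholder 'training': fit the mean interval as a constant
    predictor (stand-in for the scan-time prediction model)."""
    return sum(s.interval for s in samples) / len(samples)

def predict_third_scan_time(second_scan_time: float, predicted_interval: float) -> float:
    """At inference, the model outputs a time interval; the third scan
    is scheduled that long after the second scan."""
    return second_scan_time + predicted_interval
```

The key idea the sketch preserves is the supervision signal: each sample pairs two image sequences with the ground-truth interval after which a diagnostically adequate third scan was obtained.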
15. An electronic device comprising a processor and a memory having stored therein at least one instruction, at least one program, code set, or instruction set, the instruction, program, code set, or instruction set being loaded and executed by the processor to implement the method of any of claims 1-7.
16. A computer readable storage medium having stored therein at least one instruction, at least one program, code set or instruction set, the instruction, program, code set or instruction set being loaded and executed by a processor to implement the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911192816.8A CN110916701B (en) | 2019-11-28 | 2019-11-28 | Image scanning time prediction method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110916701A CN110916701A (en) | 2020-03-27 |
CN110916701B true CN110916701B (en) | 2024-09-06 |
Family
ID=69846806
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911192816.8A Active CN110916701B (en) | 2019-11-28 | 2019-11-28 | Image scanning time prediction method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110916701B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20240078790A1 (en) * | 2022-09-02 | 2024-03-07 | Motional Ad Llc | Enriching later-in-time feature maps using earlier-in-time feature maps |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101686825A (en) * | 2007-06-21 | 2010-03-31 | Koninklijke Philips Electronics N.V. | Adjusting acquisition protocols for dynamic medical imaging using dynamic models
CN108968996A (en) * | 2017-05-30 | 2018-12-11 | General Electric Company | Motion-gated medical imaging
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007528767A (en) * | 2004-03-12 | 2007-10-18 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Previous scan for optimization of MRI scan parameters |
US20170209113A1 (en) * | 2016-01-22 | 2017-07-27 | General Electric Company | Methods and systems for adaptive scan control |
DE102018201411A1 (en) * | 2018-01-30 | 2019-08-01 | Robert Bosch Gmbh | Method for determining a time course of a measured variable, prognosis system, actuator control system, method for training the actuator control system, training system, computer program and machine-readable storage medium |
CN109745062B (en) * | 2019-01-30 | 2020-01-10 | 腾讯科技(深圳)有限公司 | CT image generation method, device, equipment and storage medium |
CN110009709B (en) * | 2019-05-08 | 2023-07-07 | 上海联影医疗科技股份有限公司 | Medical image imaging method and system |
CN110458817B (en) * | 2019-08-05 | 2023-07-18 | 上海联影医疗科技股份有限公司 | Medical image quality prediction method, device, equipment and storage medium |
CN110464326B (en) * | 2019-08-19 | 2022-05-10 | 上海联影医疗科技股份有限公司 | Scanning parameter recommendation method, system, device and storage medium |
- 2019-11-28 CN CN201911192816.8A patent/CN110916701B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110916701A (en) | 2020-03-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109872306B (en) | Medical image segmentation method, device and storage medium | |
CN111080584B (en) | Quality control method for medical image, computer device and readable storage medium | |
CN110858399B (en) | Method and apparatus for providing post-examination images of a virtual tomographic stroke | |
KR102410955B1 (en) | Method and computer program for automatic segmentation of abnominal organs based on deep learning in medical images | |
EP2401719B1 (en) | Methods for segmenting images and detecting specific structures | |
CN110766730A (en) | Image registration and follow-up evaluation method, storage medium and computer equipment | |
CN111488872B (en) | Image detection method, image detection device, computer equipment and storage medium | |
US12106533B2 (en) | Method and system for segmenting interventional device in image | |
CN111028212A (en) | Key point detection method and device, computer equipment and storage medium | |
CN113192031B (en) | Vascular analysis method, vascular analysis device, vascular analysis computer device, and vascular analysis storage medium | |
CN114298234A (en) | Brain medical image classification method and device, computer equipment and storage medium | |
CN110473226B (en) | Training method of image processing network, computer device and readable storage medium | |
KR102349515B1 (en) | Tumor automatic segmentation based on deep learning in a medical image | |
CN111951276A (en) | Image segmentation method, device, computer equipment and storage medium | |
CN110738664A (en) | Image positioning method and device, computer equipment and storage medium | |
CA3104607A1 (en) | Contrast-agent-free medical diagnostic imaging | |
JP4964191B2 (en) | Image processing apparatus and method, and program | |
CN110570417A (en) | Pulmonary nodule classification method and device and image processing equipment | |
KR102336003B1 (en) | Apparatus and method for increasing learning data using patch matching | |
CN110916701B (en) | Image scanning time prediction method, device, equipment and storage medium | |
KR102332472B1 (en) | Tumor automatic segmentation using deep learning based on dual window setting in a medical image | |
CN113160199A (en) | Image recognition method and device, computer equipment and storage medium | |
CN109447974B (en) | Volume data processing method, volume data processing apparatus, image processing workstation, and readable storage medium | |
JP2007536054A (en) | Pharmacokinetic image registration | |
CN114913133B (en) | Lung medical image processing method and device, storage medium and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40022638; Country of ref document: HK |
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||