CN111493935B - Method and system for automatic prediction and recognition of echocardiography based on artificial intelligence - Google Patents
- Publication number: CN111493935B
- Application number: CN202010353559.8A
- Authority: CN (China)
- Prior art keywords: video, color doppler, video frame, frame, artificial intelligence
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A — HUMAN NECESSITIES
- A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00 — Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08 — Clinical applications
- A61B8/0883 — Clinical applications for diagnosis of the heart
- A61B8/48 — Diagnostic techniques
- A61B8/488 — Diagnostic techniques involving Doppler signals
- A61B8/52 — Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5207 — Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
Abstract
Description
Technical Field
The present application relates to the technical field of intelligent recognition of medical video images, and in particular to a method and system for automatic prediction and recognition of echocardiograms based on artificial intelligence.
Background
Echocardiography is currently one of the most important methods for assessing cardiac structure and function. To identify valvular regurgitation in an echocardiogram, physicians usually look for abnormal color flow by eye. This judgment is easily affected by the velocity range and the color gain, so the severity of the finding may be over- or underestimated, and the approach is unsuitable for precise assessment of valvular regurgitation. Quantitative methods such as the vena contracta method and continuous-wave Doppler can measure the degree of valvular regurgitation, but because echocardiography involves marked individual differences in image acquisition, measurement, analysis, and judgment, results depend heavily on the experience and skill of the operator. Accuracy and consistency are therefore hard to guarantee, which often makes clinical identification very difficult.
For the automatic measurement, analysis, and interpretation of cardiac ultrasound images, artificial intelligence has great advantages over manual work by specialist physicians. It can standardize data analysis and recognition, eliminating the interference of subjective human factors, reducing inter- and intra-observer variability in cardiac ultrasound interpretation, and improving the accuracy and consistency of image reading. Artificial intelligence also makes cardiac ultrasound more efficient and better matched to real clinical needs, greatly improving the efficiency of the medical sector and reducing medical costs and the economic burden on families and society.
In recent years, processing medical images with artificial intelligence has become a research hotspot, but most work relies on traditional machine-learning algorithms such as decision trees, clustering, Bayesian classifiers, support vector machines, and EM. These methods do not exploit the dynamic video of the echocardiogram: they classify only randomly sampled ultrasound images and therefore discard the motion information of the heart. Moreover, color Doppler video contains the direction of blood flow within the heart, which helps identify valvular regurgitation; ignoring this information easily leads to inaccurate classification, and practical results have been mediocre.
Summary of the Invention
In view of the above defects or deficiencies in the prior art, the present application provides a method and system for automatic prediction and recognition of echocardiograms based on artificial intelligence. The method employs a deep-learning model designed specifically for cardiac ultrasound images; it automatically predicts pre-identified features in an echocardiogram and outputs the video frame most relevant to those features. This greatly improves the accuracy of artificial-intelligence processing of cardiac ultrasound images, meets the clinical needs of the medical system, and reduces the workload of medical staff.
A first aspect of the present invention provides a method for automatic prediction and recognition of echocardiograms based on artificial intelligence, comprising:
acquiring a color Doppler video of at least one view of an echocardiogram of a subject;
extracting each video frame of the color Doppler video and inputting each frame into a trained convolutional neural network to obtain an N-dimensional feature vector for each frame;
passing the N-dimensional feature vector of each frame through an attention module to generate a weight for each frame;
computing a weighted sum of the frames' N-dimensional feature vectors using the weights to obtain an overall feature representation of the color Doppler video;
based on the overall feature representation, computing a predicted value that the color Doppler video contains a pre-identified image feature.
Further, the method also includes the step of outputting the frame with the largest weight as a key frame.
Further, the at least one view includes the apical four-chamber view.
Further, the pre-identified image feature is valvular regurgitation.
Further, the method also includes the following steps:
measuring the relative area of the left-atrium region in the key frame;
measuring the relative area of the regurgitant jet within the left-atrium region in the key frame;
computing the ratio of the relative area of the regurgitant jet to the relative area of the left atrium.
Further, before each video frame is input to the pretrained convolutional neural network, the method also includes: inputting, as training samples, color Doppler videos of at least one echocardiographic view containing the pre-identified image feature into the pretrained convolutional neural network, and training the pretrained convolutional neural network.
Further, the method also includes:
stopping the training of the convolutional neural network when the value of the loss function L no longer decreases, or when its value falls below a predetermined value;
the loss function L is expressed as:

L = L_cls + β · L_sparse

where L_cls denotes the classification loss computed at the video level, L_sparse denotes the sparsity loss used to regulate the weight of each video frame, and β is a normalization constant for L_sparse;
L_cls is computed according to the following formula:

L_cls = −(1/N) · Σ_{n=1}^{N} [ y_n · log(p_n) + (1 − y_n) · log(1 − p_n) ]

where y_n indicates whether the n-th color Doppler video contains the pre-identified image feature; n takes values in (1, 2, …, N) and N is the total number of color Doppler videos; y_n = 0 indicates that the n-th video does not contain the pre-identified image feature, and y_n = 1 indicates that it does; p_n denotes the predicted value that the n-th video contains the pre-identified image feature; v_n is the overall feature representation of the n-th video; L_cls is thus the cross-entropy between y_n and p_n;
L_sparse is computed according to the following formula:

L_sparse = (1/N) · Σ_{n=1}^{N} Σ_{t=1}^{T} |λ_t^(n)|

where λ_t^(n) denotes the weight of the t-th video frame of the n-th color Doppler video; t takes values in (1, 2, …, T) and T is the number of frames in the video; n takes values in (1, 2, …, N) and N is the total number of color Doppler videos.
A second aspect of the present invention further provides a system for automatic prediction and recognition of echocardiograms based on artificial intelligence, comprising:
a video acquisition module for acquiring a color Doppler video of at least one view of an echocardiogram of a subject;
an input extraction module for extracting each video frame of the color Doppler video and inputting each frame into a trained convolutional neural network to obtain an N-dimensional feature vector for each frame;
a weight generation module for passing the N-dimensional feature vector of each frame through an attention module to generate a weight for each frame;
an overall-feature computation module that uses the weights to compute a weighted sum of the frames' N-dimensional feature vectors to obtain an overall feature representation of the color Doppler video;
a prediction output module that, based on the overall feature representation, computes a predicted value that the color Doppler video contains the pre-identified image feature.
Further, the system also includes a key-frame output module for outputting the frame with the largest weight as a key frame.
Further, the at least one view includes the apical four-chamber view; the pre-identified image feature is valvular regurgitation; and the system further includes a pre-identified-feature measurement module for measuring the relative area of the left-atrium region in the key frame, measuring the relative area of the regurgitant jet within the left-atrium region in the key frame, and computing the ratio of the relative area of the regurgitant jet to the relative area of the left atrium.
In summary, the method and system for automatic prediction and recognition of echocardiograms based on artificial intelligence of the present invention recognize an important image feature of cardiac ultrasound from a video by identifying the group of video frames that exhibit it. The method is based on a new deep neural network that learns to measure the importance of each frame in a video and automatically selects a sparse subset of representative frames to predict the video-level classification. The method and system can be used to accurately identify the image features of mitral regurgitation in echocardiograms, and also to accurately identify tricuspid regurgitation, aortic regurgitation, pulmonary regurgitation, atrial septal defect, ventricular septal defect, patent ductus arteriosus, and other cardiac ultrasound image features.
Brief Description of the Drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Fig. 1 is a flowchart of a method for automatic prediction and recognition of echocardiograms based on artificial intelligence according to an embodiment of the present invention;
Fig. 2 is a flowchart of a method for automatic prediction and recognition of echocardiograms with a key-frame output function according to another embodiment of the present invention;
Fig. 3 is a flowchart of a method for automatic prediction and recognition of echocardiograms with a severity-evaluation function according to another embodiment of the present invention;
Fig. 4 is a flowchart of a method for automatic prediction and recognition of echocardiograms with a pre-training function according to another embodiment of the present invention;
Fig. 5 is a functional block diagram of a system for automatic prediction and recognition of echocardiograms according to another embodiment of the present invention;
Fig. 6 is a structural block diagram of an electronic device according to another embodiment of the present invention;
Fig. 7 is a component diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
An echocardiogram, as described herein, is an ultrasound image that uses the special physical properties of ultrasound to examine the anatomical structure and functional state of the heart and great vessels.
Referring to Fig. 1, a method for automatic prediction and recognition of echocardiograms based on artificial intelligence according to an embodiment of the present invention is shown. This embodiment is described using the prediction and recognition of mitral-regurgitation features in an echocardiogram as an example, but the method proposed here also applies to the predictive recognition of ultrasound images of tricuspid regurgitation, aortic regurgitation, pulmonary regurgitation, atrial septal defect, ventricular septal defect, patent ductus arteriosus, and other cardiac blood-flow features.
Step S101: acquire a color Doppler video of at least one view of an echocardiogram of the subject.
The most commonly used Doppler ultrasound techniques are pulsed-wave Doppler, continuous-wave Doppler, and color Doppler flow imaging. Color Doppler is an imaging technique with an area display: within the same area, many sound beams are emitted and received. In color Doppler flow imaging, blood flow toward the probe is shown in red and flow away from the probe in blue. Flow through each heart valve orifice and great vessel is normally a single antegrade stream, so any flow in the retrograde direction should raise the question of an abnormality.
Specifically, during ultrasound examination, original echocardiographic images containing multiple sequences can be obtained for different body positions of the subject and different views of the heart, each sequence corresponding to one view of the examination.
In this embodiment, a color Doppler video covering at least one view of the echocardiogram is first acquired (for example, the video of a single view or a combination of videos of several views). The color Doppler video contains multiple video frames and serves as the input to the deep-learning algorithm module. It should be pointed out that video, rather than a single image, is used as input because video provides more information along the time dimension, including frame-to-frame change information, whereas a single image often loses the motion information of the heart and easily leads to inaccurate classification. In addition, color Doppler video shows the direction of blood flow through the valves, so the features are more salient, further improving the classification accuracy of the algorithm module.
Step S102: extract each video frame of the color Doppler video and input each frame into the trained convolutional neural network to obtain an N-dimensional feature vector for each frame.
Specifically, a color Doppler video consists of a group of video frames, each of which can be regarded as one ultrasound image. All T frames of a single video are first extracted and then fed frame by frame into the already-trained convolutional neural network. This embodiment does not restrict the type of convolutional neural network: an I3D network combining a two-stream network with 3D convolutions could be used, as could a ResNet residual network. This embodiment uses a ResNet residual network as an example.
After each video frame passes through the ResNet, an N-dimensional feature vector f_t is obtained, where t is the index of the video frame, t takes values in (1, 2, …, T), and T is the number of frames. The dimension N represents the information contained in the corresponding frame and should be neither too small nor too large: if N is set too small, the frame representation carries too little feature information; if set too large, it carries too much useless information and wastes computational resources. In this embodiment N is preferably 1024; in practice a suitable value of N can be found through repeated manual trials.
Step S103: pass the N-dimensional feature vector of each frame through the attention module to generate a weight for each frame.
In clinical examination, often only a few key frames of a color Doppler video carry the feature information indicating whether the sample shows regurgitation. Whether a frame carries such feature information, and how much of it, determines the weight of that video frame; here λ_t denotes the weight of the t-th frame, t takes values in (1, 2, …, T), and T is the number of frames.
The weights are obtained by feeding the N-dimensional feature vectors f_t output by the ResNet into an attention module. An attention module is the mechanism by which deep-learning algorithms mimic the attention model of the human brain. For example, when we view a painting, although we can see the whole picture, on close inspection the eyes actually focus on only a small patch at a time, and the brain concentrates mainly on that small patch; in other words, the brain's attention over the whole picture is not uniform but weighted. After the N-dimensional feature vectors f_t are processed by the attention module, the weight λ_t corresponding to each video frame is obtained. Since concrete implementations of attention modules are already widely used in deep-learning models, they are not described further in this embodiment.
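As a concrete illustration, the attention computation just described can be sketched as a small scorer over the per-frame feature vectors followed by a softmax across frames. This is only a minimal sketch: the patent does not fix the internal architecture of the attention module, and the projection matrix W, scoring vector w, hidden size, and toy feature dimension below are all assumptions for illustration.

```python
import numpy as np

def attention_weights(features, W, w):
    """Score each frame's feature vector and normalize the scores with a
    softmax, so the per-frame weights lambda_t are non-negative and sum to 1.

    features: (T, N) array of per-frame feature vectors f_t
    W: (N, H) hidden projection, w: (H,) scoring vector (both hypothetical)
    returns: (T,) array of weights lambda_t
    """
    hidden = np.tanh(features @ W)   # (T, H) hidden representation
    scores = hidden @ w              # one scalar importance score per frame
    scores = scores - scores.max()   # subtract max for numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()

# Toy example: 5 frames with 8-dimensional features (N = 1024 in the embodiment).
rng = np.random.default_rng(0)
f = rng.normal(size=(5, 8))
lam = attention_weights(f, rng.normal(size=(8, 4)), rng.normal(size=4))
print(lam.shape, float(lam.sum()))
```

Any scorer that maps each f_t to a scalar before the softmax would fit the description in the text; the tanh projection here is just one common choice.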
Step S104: use the weights to compute a weighted sum of the frames' N-dimensional feature vectors to obtain an overall feature representation of the color Doppler video.
Specifically, the 1024-dimensional feature vector of each frame expresses the video content of that frame, and the overall feature representation of the video is computed as the weighted sum of the per-frame feature vectors with the per-frame weights:

v = Σ_{t=1}^{T} λ_t · f_t

where v is the overall feature representation of the video, f_t is the feature representation of the t-th frame, λ_t is the weight of the t-th frame, and T is the number of frames.
Step S105: based on the overall feature representation, compute the predicted value that the video contains the pre-identified image feature through a fully connected (FC) layer and a sigmoid activation function.
After the overall feature representation v of the video is obtained, the final predicted value p is computed by two layers of operations, an FC layer followed by a sigmoid activation:

p = sigmoid(FC(v))

The value of p lies between 0 and 1 and represents the probability that the mitral-regurgitation feature is present in the sample video.
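Steps S104 and S105 together reduce to a weighted sum, a linear layer, and a sigmoid. A minimal sketch, with hypothetical FC parameters standing in for the learned ones:

```python
import numpy as np

def video_prediction(features, weights, fc_w, fc_b):
    """Aggregate per-frame features into v = sum_t lambda_t * f_t, then map
    v to a probability p with a fully connected layer and a sigmoid.

    features: (T, N) per-frame feature vectors f_t
    weights:  (T,) attention weights lambda_t
    fc_w: (N,) FC-layer weights, fc_b: scalar bias (hypothetical values)
    """
    v = weights @ features                # overall feature representation v
    logit = v @ fc_w + fc_b               # FC layer
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid, so p lies in (0, 1)

rng = np.random.default_rng(1)
f = rng.normal(size=(5, 8))
lam = np.array([0.05, 0.1, 0.7, 0.1, 0.05])  # weights from the attention module
p = video_prediction(f, lam, rng.normal(size=8), 0.0)
print(p)
```

Because the third frame dominates the weights, v (and hence p) is driven mostly by that frame, which is exactly the behavior the key-frame step below exploits.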
Referring to Fig. 2, on the basis of the embodiment shown in Fig. 1, the method preferably further includes:
Step S106: output the frame with the largest weight λ_t as a key frame.
The frame with the largest weight λ_t is the frame in which the mitral-regurgitation feature is most evident, or which contributes most to the decision. Automatically outputting this key frame onto the ultrasound report saves the physician the step of selecting a screenshot and makes report generation more efficient.
Further, the at least one view in the above embodiments preferably includes the apical four-chamber (A4C) view. The A4C view is a standard view in clinical echocardiography and one of the most widely used important views; image features such as mitral regurgitation can be identified more accurately through it.
Referring to Fig. 3, preferably, once the embodiment shown in Fig. 1 has determined whether the pre-identified image feature is present in the subject's color Doppler video, the method further includes:
Step S107: evaluate the severity of the pre-identified image feature.
Taking the identification of mitral regurgitation as an example, the specific method is to measure, on the key frame already obtained in which mitral regurgitation is most evident, the ratio of the regurgitant-jet area to the left-atrial area, and to output this ratio to the ultrasound report.
First, the left-atrial area is measured. Typically, several hundred echocardiograms have already been carefully delineated by annotators (delineation means manually drawing the boundary of a key cardiac structure, such as the left atrium). These data are fed into the neural network for learning, so that the model can identify the left atrium automatically. The relative area of the left atrium is then obtained by counting the number of pixels in the delineated region.
Next, the area of the regurgitant jet is measured. Because color Doppler ultrasound is used, the regurgitant jet is usually rendered in blue (non-regurgitant blood is usually red), so the relative area of the jet can be computed simply by counting the blue pixels within the left atrium.
Finally, the ratio of the relative area of the regurgitant jet to the relative area of the left atrium is computed; this ratio indicates how pronounced the regurgitation feature is in the ultrasound video.
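The three measurements just described can be sketched on a synthetic frame. The "blue channel dominates the red channel" test below is an assumed simplification of whatever color thresholding a production system would use, and the left-atrium mask is taken as given (in the embodiment it comes from the segmentation network):

```python
import numpy as np

def regurgitation_ratio(frame_rgb, la_mask):
    """Ratio of regurgitant-jet area to left-atrial area on a key frame.

    frame_rgb: (H, W, 3) uint8 color Doppler frame
    la_mask:   (H, W) bool mask of the delineated left atrium
    The jet is approximated as pixels inside the left atrium whose blue
    channel dominates the red channel (flow away from the probe).
    """
    la_area = int(la_mask.sum())           # left-atrium area as a pixel count
    if la_area == 0:
        return 0.0
    r = frame_rgb[..., 0].astype(int)
    b = frame_rgb[..., 2].astype(int)
    jet = la_mask & (b > r)                # "blue" pixels inside the LA
    return jet.sum() / la_area

# Synthetic 4x4 frame: left half of the LA blue (jet), right half red.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:, :2, 2] = 200   # blue jet pixels
frame[:, 2:, 0] = 200   # red antegrade-flow pixels
mask = np.ones((4, 4), dtype=bool)
print(regurgitation_ratio(frame, mask))  # 0.5
```

Because both areas are pixel counts on the same frame, their ratio is scale-free, which is why the text can work with relative rather than calibrated areas.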
Referring to Fig. 4, preferably, before the method shown in Fig. 1 is executed, the method further includes
Step S100: inputting, as training samples, color Doppler videos of at least one echocardiographic view containing the pre-identified image feature into the pretrained convolutional neural network, and training the pretrained convolutional neural network.
Training a model is in essence a process of continually adjusting parameters so that the predictions improve. A loss function is normally used as the reference in this process; it evaluates the gap between the model's predicted values and the ground truth. The neural network model of the present invention has two main tasks: first, to judge accurately whether a sample echocardiogram contains the pre-identified image feature; second, to output the video key frames most helpful for that judgment. The loss function is accordingly designed with two parts: a classification loss L_cls and a sparsity loss L_sparse.
L_cls denotes the classification loss computed at the video level:

L_cls = −(1/N) · Σ_{n=1}^{N} [ y_n · log(p_n) + (1 − y_n) · log(1 − p_n) ]

where y_n indicates whether the n-th color Doppler video contains the pre-identified image feature; n takes values in (1, 2, …, N) and N is the total number of color Doppler videos; y_n = 0 indicates that the n-th video does not contain the pre-identified image feature, and y_n = 1 indicates that it does; p_n denotes the predicted value that the n-th video contains the pre-identified image feature; v_n is the overall feature representation of the n-th video; L_cls is thus the cross-entropy between y_n and p_n.
所谓稀疏损失,即用来调节视频中特定视频帧的权重的损失函数,因为在临床检验中,往往只有关键的几帧代表着样本反流与否,因此在所有帧中只有少数帧是重要的,即只有少数帧对应的接近1,其余帧对应的都接近0,换言之这些帧的重要性是稀疏的,本实施例使用L1 范数来衡量整个数据的稀疏性,依据数学定义,L1 范数是指向量中各个元素绝对值之和,L1 越小,则整个元素越稀疏,具体形式如下:so-called sparsity loss , which is used to adjust the weight of a specific video frame in the video The loss function of , because in clinical tests, only a few key frames often represent whether the sample is refluxed or not, so only a few frames are important in all frames, that is, only a few frames correspond to close to 1, and the remaining frames correspond to all close to 0, in other words the importance of these frames is sparse. In this example, the L1 norm is used to measure the sparsity of the entire data. According to the mathematical definition, the L1 norm is the sum of the absolute values of each element in the pointer. The smaller the L1, the sparser the entire element. The specific form is as follows :
$L_{sparse}$ is calculated according to the following formula:

$$L_{sparse} = \frac{1}{N}\sum_{n=1}^{N}\sum_{t=1}^{T}\left|\lambda_t^n\right|$$

where $\lambda_t^n$ denotes the weight of the t-th video frame in the n-th color Doppler video, t ranging over (1, 2, …, T) with T being the number of frames of the video, and n ranging over (1, 2, …, N) with N being the total number of color Doppler videos.
The loss function $L$ used in the training process of this embodiment is expressed mathematically as:

$$L = L_{cls} + L_{sparse}$$
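Read together, the two terms above can be sketched in plain NumPy. This is a minimal sketch, assuming the two losses are summed with equal weight (the patent's exact combination was lost in extraction); the function name and the toy values below are illustrative, not from the patent:

```python
import numpy as np

def combined_loss(y_true, y_pred, frame_weights, eps=1e-7):
    """Two-part loss: video-level cross entropy plus L1 frame-weight sparsity.

    y_true:        (N,) ground-truth labels, 1 = pre-identified feature present
    y_pred:        (N,) predicted values for the N color Doppler videos
    frame_weights: (N, T) attention weight of each of the T frames per video
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # guard against log(0)
    # Classification loss: cross entropy of y_n and y_hat_n, averaged over videos.
    l_cls = -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
    # Sparsity loss: L1 norm of the frame weights, averaged over videos --
    # the smaller it is, the fewer frames carry a weight close to 1.
    l_sparse = np.mean(np.sum(np.abs(frame_weights), axis=1))
    return l_cls + l_sparse

# Toy example: two videos, three frames each, one key frame per video.
y = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
w = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
loss = combined_loss(y, p, w)
```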
In a specific training process, the samples used to train the neural network may be reused cyclically. Training of the convolutional neural network can be stopped when the set number of cycles is reached, when the value of the loss function $L$ falls below a predetermined value, or when the value of $L$ no longer decreases.
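The three stopping criteria above can be sketched as a single check run at the end of each cycle; the thresholds and the patience window here are assumptions, not values from the patent:

```python
def should_stop(epoch, losses, max_epochs=100, loss_floor=1e-3, patience=5):
    """Stop when the cycle budget is spent, the loss is low enough,
    or the loss has stopped decreasing over the last `patience` epochs."""
    if epoch >= max_epochs:                    # set number of cycles reached
        return True
    if losses and losses[-1] < loss_floor:     # loss below a predetermined value
        return True
    if len(losses) > patience and min(losses[-patience:]) >= min(losses[:-patience]):
        return True                            # loss no longer decreasing
    return False

stop_budget = should_stop(100, [1.0])          # cycle budget exhausted
stop_floor = should_stop(3, [0.5, 1e-4])       # loss below the floor
keep_going = should_stop(3, [0.5, 0.4])        # neither criterion met yet
```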
It should be noted that although the operations of the method of the present invention are described in a particular order in FIG. 1, this does not require or imply that the operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve the desired result. On the contrary, the steps depicted in the flowchart may be executed in a different order where necessary.
Referring to FIG. 5, an artificial-intelligence-based automatic prediction and recognition system 200 for echocardiography according to another embodiment of the present invention is shown, comprising:
a video acquisition module 201, configured to acquire a color Doppler video of at least one view of an echocardiogram of a subject;
an input extraction module 202, configured to extract each video frame of the color Doppler video and input each video frame into a trained convolutional neural network, to obtain an N-dimensional feature vector corresponding to each video frame;
a weight generation module 203, configured to pass the N-dimensional feature vector of each video frame through an attention module to generate a weight corresponding to each video frame;
an overall feature calculation module 204, configured to use the weights to compute a weighted sum of the N-dimensional feature vectors of the video frames, so as to obtain an overall feature representation of the color Doppler video;
a prediction output module 205, configured to compute, based on the overall feature representation, a predicted value indicating the presence of the pre-identified image features through an FC fully-connected layer and a sigmoid activation function.
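A minimal sketch of how modules 203–205 fit together, assuming a single-vector attention scorer with softmax normalization (the patent does not specify the attention module's internals); all names and shapes are illustrative, and `D` stands in for the feature-vector dimension to avoid overloading N:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_video(frame_features, w_attn, w_fc, b_fc):
    """Modules 203-205 in sequence: attention weights, weighted sum, FC + sigmoid.

    frame_features: (T, D) one D-dimensional feature vector per frame
                    (module 202's output, taken as given here).
    w_attn:         (D,)   scoring vector of the assumed attention module
    w_fc, b_fc:     (D,), scalar -- FC layer producing the final logit
    """
    scores = frame_features @ w_attn       # one relevance score per frame
    lam = softmax(scores)                  # frame weights lambda_t, sum to 1
    overall = lam @ frame_features         # weighted sum -> overall feature
    logit = overall @ w_fc + b_fc          # FC fully-connected layer
    prob = 1.0 / (1.0 + np.exp(-logit))    # sigmoid -> predicted value
    return prob, lam

T, D = 8, 16                               # 8 frames, 16-dim frame features
feats = rng.normal(size=(T, D))
prob, lam = predict_video(feats, rng.normal(size=D), rng.normal(size=D), 0.0)
```

The frames with the largest entries of `lam` are the candidates for the key frames mentioned in the method description.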
Further, the automatic prediction and recognition system 200 for echocardiography comprises a pre-identified image feature measurement module 206, configured to measure the area of the left atrial region in the key frame, measure the area of the regurgitant jet within the left atrial region in the key frame, and calculate the ratio of the area of the regurgitant jet to the area of the left atrium.
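Module 206's ratio reduces to pixel counting once segmentation masks are available. A sketch, assuming boolean masks and a known per-pixel area; the segmentation step that produces the masks is outside this sketch:

```python
import numpy as np

def regurgitation_ratio(la_mask, jet_mask, pixel_area=1.0):
    """Ratio of regurgitant-jet area to left-atrium area in one key frame.

    la_mask, jet_mask: boolean masks of the left-atrium region and the
    regurgitant jet. pixel_area converts pixel counts to physical area.
    """
    la_area = la_mask.sum() * pixel_area
    # Count only jet pixels inside the left atrium, since the text measures
    # the regurgitant jet within the left atrial region.
    jet_area = (jet_mask & la_mask).sum() * pixel_area
    return jet_area / la_area

# Toy 4x4 frame: a 4-pixel left atrium containing a 1-pixel jet.
la = np.zeros((4, 4), dtype=bool)
la[1:3, 1:3] = True
jet = np.zeros((4, 4), dtype=bool)
jet[1, 1] = True
ratio = regurgitation_ratio(la, jet)
```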
It should be understood that the modules described for the automatic prediction and recognition system 200 for echocardiography of this embodiment correspond to the steps of the method described in FIG. 1. The operations and features described above for the method therefore apply equally to the modules of this embodiment and are not repeated here. The system of this embodiment may be pre-installed in an electronic device, or loaded into an electronic device by downloading or other means. The corresponding modules of the system may cooperate with units of the electronic device to implement the solutions of the embodiments of the present application. In addition, the modules involved in this embodiment may be implemented in software or in hardware. In some cases the names of these units or modules do not limit the units or modules themselves; for example, the video acquisition module 201 may also be described as "a module 201 for acquiring a color Doppler video of at least one view of an echocardiogram of a subject".
Referring to FIG. 6, an electronic device 300 according to another embodiment of the present invention is shown, comprising:
at least one processor 301; and
a memory 302 communicatively connected to the at least one processor 301; wherein
the memory 302 stores instructions executable by the at least one processor 301 to enable the at least one processor 301 to perform the steps of the method embodiments described above.
Referring to FIG. 7, the electronic device in the embodiment shown in FIG. 6 may, for example, be a B-mode ultrasound machine. The ultrasound machine may also include a computer system 700, which comprises a central processing unit (CPU) 701 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702, or a program loaded from a storage section 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the system 700. The CPU 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704. The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT) or liquid crystal display (LCD) together with a speaker and the like; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as needed, so that a computer program read therefrom can be installed into the storage section 708 as needed.
As another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the system or electronic device of the foregoing embodiments, or a stand-alone computer-readable storage medium not assembled into a device. The computer-readable storage medium stores one or more programs used by one or more processors to perform the automatic prediction and recognition method for echocardiography described in the present application.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or sometimes in the reverse order, depending upon the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The above description is only a preferred embodiment of the present application and an illustration of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with technical features of similar functions disclosed in (but not limited to) the present application.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010353559.8A CN111493935B (en) | 2020-04-29 | 2020-04-29 | Method and system for automatic prediction and recognition of echocardiography based on artificial intelligence |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010353559.8A CN111493935B (en) | 2020-04-29 | 2020-04-29 | Method and system for automatic prediction and recognition of echocardiography based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111493935A CN111493935A (en) | 2020-08-07 |
CN111493935B true CN111493935B (en) | 2021-01-15 |
Family
ID=71866649
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010353559.8A Active CN111493935B (en) | 2020-04-29 | 2020-04-29 | Method and system for automatic prediction and recognition of echocardiography based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111493935B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112101413A (en) * | 2020-08-12 | 2020-12-18 | 海南大学 | Intelligent system for stroke risk prediction |
CN112258476B (en) * | 2020-10-22 | 2024-07-09 | 东软教育科技集团有限公司 | Method, system and storage medium for analyzing abnormal motion pattern of heart muscle of echocardiography |
CN112435247B (en) * | 2020-11-30 | 2022-03-25 | 中国科学院深圳先进技术研究院 | Patent foramen ovale detection method, system, terminal and storage medium |
CN112419313B (en) * | 2020-12-10 | 2023-07-28 | 清华大学 | A Multi-Section Classification Method Based on Ultrasonography of Congenital Heart Disease |
CN112489043B (en) * | 2020-12-21 | 2024-08-13 | 无锡祥生医疗科技股份有限公司 | Heart disease detection device, model training method, and storage medium |
CN113180737B (en) * | 2021-05-06 | 2022-02-08 | 中国人民解放军总医院 | Artificial intelligence-based oval hole closure detection method, system, equipment and medium |
CN113487665B (en) * | 2021-06-04 | 2022-03-11 | 中国人民解放军总医院 | A kind of measuring method, device, equipment and medium of lumen gap |
CN114469176B (en) * | 2021-12-31 | 2024-07-05 | 深圳度影医疗科技有限公司 | Fetal heart ultrasonic image detection method and related device |
CN114666571B (en) * | 2022-03-07 | 2024-06-14 | 中国科学院自动化研究所 | Video sensitive content detection method and system |
CN114723710A (en) * | 2022-04-11 | 2022-07-08 | 安徽鲲隆康鑫医疗科技有限公司 | Method and device for detecting ultrasonic video key frame based on neural network |
CN115797330B (en) * | 2022-12-30 | 2024-04-05 | 北京百度网讯科技有限公司 | Algorithm correction method based on ultrasonic video, ultrasonic video generation method and equipment |
CN119515965A (en) * | 2025-01-21 | 2025-02-25 | 上海交通大学医学院附属上海儿童医学中心 | TSDNet-based ventricular septal defect positioning system and method |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10321892B2 (en) * | 2010-09-27 | 2019-06-18 | Siemens Medical Solutions Usa, Inc. | Computerized characterization of cardiac motion in medical diagnostic ultrasound |
US10231693B2 (en) * | 2010-12-23 | 2019-03-19 | Koninklijke Philips N.V. | Automated identification of the location of a regurgitant orifice of a mitral valve in an ultrasound image |
CN103824284B (en) * | 2014-01-26 | 2017-05-10 | 中山大学 | Key frame extraction method based on visual attention model and system |
US10271817B2 (en) * | 2014-06-23 | 2019-04-30 | Siemens Medical Solutions Usa, Inc. | Valve regurgitant detection for echocardiography |
SG11201707443YA (en) * | 2015-04-02 | 2017-10-30 | Cardiawave | Method and apparatus for treating valvular disease |
CN105913084A (en) * | 2016-04-11 | 2016-08-31 | 福州大学 | Intensive track and DHOG-based ultrasonic heartbeat video image classifying method |
CN108171141B (en) * | 2017-12-25 | 2020-07-14 | 淮阴工学院 | A Cascaded Multimodal Fusion Video Object Tracking Method Based on Attention Model |
- 2020-04-29 CN CN202010353559.8A patent/CN111493935B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111493935A (en) | 2020-08-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111493935B (en) | Method and system for automatic prediction and recognition of echocardiography based on artificial intelligence | |
Baumgartner et al. | SonoNet: real-time detection and localisation of fetal standard scan planes in freehand ultrasound | |
Carrer et al. | Automatic pleural line extraction and COVID-19 scoring from lung ultrasound data | |
Smistad et al. | Real-time automatic ejection fraction and foreshortening detection using deep learning | |
Yan et al. | Automatic tracing of vocal-fold motion from high-speed digital images | |
US11207055B2 (en) | Ultrasound Cardiac Doppler study automation | |
Liao et al. | On modelling label uncertainty in deep neural networks: automatic estimation of intra-observer variability in 2d echocardiography quality assessment | |
CN110197713B (en) | Medical image processing method, device, equipment and medium | |
CN111275755B (en) | Mitral valve orifice area detection method, system and equipment based on artificial intelligence | |
CA3129213A1 (en) | Neural network image analysis | |
CN113570594A (en) | Method and device for monitoring target tissue in ultrasonic image and storage medium | |
CN110992352A (en) | Automatic infant head circumference CT image measuring method based on convolutional neural network | |
CN111915557A (en) | Deep learning atrial septal defect detection method and device | |
CN113180737B (en) | Artificial intelligence-based oval hole closure detection method, system, equipment and medium | |
Jafari et al. | Deep bayesian image segmentation for a more robust ejection fraction estimation | |
CN116704305A (en) | Multi-modal and multi-section classification method for echocardiography based on deep learning algorithm | |
Yasrab et al. | End-to-end first trimester fetal ultrasound video automated crl and nt segmentation | |
Ragnarsdottir et al. | Interpretable prediction of pulmonary hypertension in newborns using echocardiograms | |
CN116167957B (en) | cTTE image processing method, computer device, system and storage medium | |
EP4292044B1 (en) | Image sequence analysis | |
CN117011601A (en) | Multi-modal classification prediction method, apparatus, processor and machine-readable storage medium | |
CN113222985B (en) | Image processing method, image processing device, computer equipment and medium | |
US11803967B2 (en) | Methods and systems for bicuspid valve detection with generative modeling | |
Hatfaludi et al. | Deep learning based aortic valve detection and state classification on echocardiographies | |
Islam et al. | An ontological approach to investigate the impact of deep convolutional neural networks in anomaly detection of left ventricular hypertrophy using echocardiography images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |