CN114782537A - Human carotid artery localization method and device based on 3D vision - Google Patents
Human carotid artery localization method and device based on 3D vision
- Publication number
- CN114782537A (application number CN202210527495.8A)
- Authority
- CN
- China
- Prior art keywords
- carotid artery
- contour
- human body
- neck
- parameters
- Prior art date
- Legal status (the legal status is an assumption and is not a legal conclusion): Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/06—Measuring blood flow
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Clinical applications
- A61B8/0891—Clinical applications for diagnosis of blood vessels
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/40—Positioning of patients, e.g. means for holding or immobilising parts of the patient's body
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5238—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
- A61B8/5261—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from different diagnostic modalities, e.g. ultrasound and X-ray
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/54—Control of the diagnostic device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Animal Behavior & Ethology (AREA)
- Molecular Biology (AREA)
- Veterinary Medicine (AREA)
- Theoretical Computer Science (AREA)
- Public Health (AREA)
- Biophysics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Pathology (AREA)
- Surgery (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Vascular Medicine (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Hematology (AREA)
- Quality & Reliability (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
- Image Analysis (AREA)
Abstract
The present disclosure provides a 3D-vision-based human carotid artery localization method and device in the field of smart medical technology. The method includes: acquiring ultrasound image data; detecting the ultrasound image data using deep learning to generate feature-point parameters for specific positions of the human body; determining the projected position of the carotid artery using radial basis functions, according to those feature-point parameters; and projecting the position of the carotid artery in the ultrasound image onto the neck surface to guide autonomous ultrasound scanning. Features are identified in the face and neck images captured by a 3D camera, and the projected position of the carotid artery is modeled from these features; this projected position can serve as the initial planned path for robotic autonomous ultrasound scanning, guiding automated carotid ultrasound examination.
Description
Technical Field
The present disclosure relates to the field of smart medical technology, and in particular to a 3D-vision-based human carotid artery localization method and device.
Background
In medical imaging systems such as magnetic resonance (MR) or computed tomography (CT) systems, a 3D camera is sometimes used alongside the scanner to collect auxiliary information, such as the patient's body position. A 3D camera typically consists of a two-dimensional color (RGB) camera and a depth camera.
A 3D camera usually needs to be mounted directly above the patient bed so that it can observe the bed and the patient on it, capturing as much of the patient's state and related information as possible.
Summary of the Invention
The purpose of the present disclosure is to propose a 3D-vision-based human carotid artery localization method and device; the method is used for robotic ultrasound scanning of the carotid artery.
A first aspect of the present disclosure provides a 3D-vision-based human carotid artery localization method, including:
acquiring ultrasound image data;
detecting the ultrasound image data using deep learning to generate feature-point parameters for specific positions of the human body;
determining the projected position of the carotid artery using radial basis functions, according to the feature-point parameters;
projecting the position of the carotid artery in the ultrasound image onto the neck surface to guide autonomous ultrasound scanning.
A second aspect of the present disclosure provides a 3D-vision-based human carotid artery localization device, including:
an acquisition module for acquiring ultrasound image data;
a generation module for detecting the ultrasound image data using deep learning and generating feature-point parameters for specific positions of the human body;
a determination module for determining the projected position of the carotid artery using radial basis functions, according to the feature-point parameters;
a guidance module for projecting the position of the carotid artery in the ultrasound image onto the neck surface to guide autonomous ultrasound scanning.
A third aspect of the present disclosure provides an electronic device, including a memory and one or more processors;
the memory is configured to store one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the 3D-vision-based human carotid artery localization method provided in any of the embodiments.
A fourth aspect of the present disclosure provides a storage medium containing computer-executable instructions that, when executed by a computer processor, implement the 3D-vision-based human carotid artery localization method provided in any of the embodiments.
The present disclosure provides a 3D-vision-based human carotid artery localization method and device. Features are identified in the face and neck images captured by a 3D camera, and the projected position of the carotid artery is modeled from these features; this projected position can serve as the initial planned path for robotic autonomous ultrasound scanning, guiding automated carotid ultrasound examination.
Brief Description of the Drawings
FIG. 1 is a flowchart of the 3D-vision-based human carotid artery localization method in an embodiment of the present disclosure;
FIG. 2 is another flowchart of the 3D-vision-based human carotid artery localization method in an embodiment of the present disclosure;
FIG. 3 is another flowchart of the 3D-vision-based human carotid artery localization method in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the feature points in FIG. 1;
FIG. 5 is a schematic diagram of the 3D-vision-based human carotid artery localization device in an embodiment of the present disclosure;
FIG. 6 is another schematic diagram of the 3D-vision-based human carotid artery localization device in an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the complete structure.
Since a 3D camera cannot directly perceive arteries, features must be identified in the face and neck images it captures, and the projected position of the carotid artery modeled from those features. That projected position can serve as the initial planned path for robotic autonomous ultrasound scanning, guiding automated carotid ultrasound examination. The method comprises two stages: feature recognition and projection-position modeling.
As shown in FIG. 1, an embodiment of the present disclosure provides a 3D-vision-based human carotid artery localization method, including:
S101: acquire ultrasound image data.
The ultrasound image data includes aligned color-map data and depth-map data.
Acquiring the ultrasound image data includes capturing aligned color-map data and depth-map data with a 3D camera.
In the embodiments of the present disclosure, a fixedly mounted 3D camera captures an RGB-D color map and a depth map of the complete human face and neck region, from which the projected position of the carotid artery on the neck surface is computed.
S102: detect the ultrasound image data using deep learning to generate feature-point parameters for specific positions of the human body. In the embodiments of the present disclosure, the specific positions include at least one of the left mandible contour, the right mandible contour, the left neck contour, and the right neck contour.
As shown in FIGS. 2 and 4, detecting the ultrasound image data using deep learning and generating the feature-point parameters includes:
S201: detect feature points of the left mandible contour, right mandible contour, left neck contour, and right neck contour using deep learning, and generate the corresponding fitted curves.
S202: uniformly sample each fitted curve according to a set number of samples.
S203: compute the feature-point parameters from the number of samples N; for i from 0 to N-1, compute the corresponding sampling-point coordinates according to the following formulas to complete the sampling. The feature-point parameters include the sampling step and the sampling-point coordinates:
x_i = x_0 + i·t
y_i = L(x_i)
S204: fit a curve through the extracted feature points by Lagrange interpolation according to the feature-point parameters, generating the corresponding fitted curve; or,
S205: if any one of the four curves (left mandible contour, right mandible contour, left neck contour, right neck contour) cannot be successfully extracted in step S201, record all sampling-point coordinates of the failed curve as a preset value. For example, a large angular deflection of the head may cause feature-point extraction to fail for some curves; the sampling-point coordinates of those curves are then all recorded as (-1, -1).
The curve fitting of the extracted feature points by Lagrange interpolation generates the corresponding fitted curve according to:
L(x) = Σ_{i=0..k-1} y_i · l_i(x), with l_i(x) = Π_{j≠i} (x - x_j) / (x_i - x_j)
where L(x) is the analytical expression of the fitted curve, l_i(x) is one term in the expression L(x), (x_i, y_i) are the coordinates of a feature point (x_i the abscissa, y_i the ordinate), and k is the number of feature points.
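The fitting and resampling of steps S201-S205 can be sketched in Python. This is an illustrative sketch rather than code from the patent: the function names are hypothetical, the sampling step spans the x-range of the detected feature points (an assumption), and the (-1, -1) convention for failed curves follows the description in S205.

```python
def lagrange_fit(points):
    """Return the curve L(x) fitted through the feature points by Lagrange
    interpolation: L(x) = sum_i y_i * l_i(x),
    where l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]

    def L(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            li = 1.0
            for j, xj in enumerate(xs):
                if j != i:
                    li *= (x - xj) / (xi - xj)
            total += yi * li
        return total

    return L

def uniform_sample(points, n):
    """Uniformly sample n >= 2 points along the fitted contour curve:
    x_i = x_0 + i*t, y_i = L(x_i), with step t covering the x-range of
    the feature points. When extraction failed (no points), every
    sampling point is recorded as the preset value (-1, -1)."""
    if not points:
        return [(-1.0, -1.0)] * n
    L = lagrange_fit(points)
    x0, x_end = points[0][0], points[-1][0]
    t = (x_end - x0) / (n - 1)  # sampling step
    return [(x0 + i * t, L(x0 + i * t)) for i in range(n)]
```

For instance, fitting three feature points lying on y = x² and resampling five points reproduces the parabola exactly, since Lagrange interpolation is exact on polynomials of matching degree.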
S103: determine the projected position of the carotid artery using radial basis functions, according to the feature-point parameters.
S104: project the position of the carotid artery in the ultrasound image onto the neck surface to guide autonomous ultrasound scanning.
Projecting the carotid artery onto the neck surface to guide autonomous ultrasound scanning uses the following formulas:
X = (u - c_x) · D(u, v) / f_x,  Y = (v - c_y) · D(u, v) / f_y,  Z = D(u, v)
where (u, v) are the pixel coordinates of a projected position point, D is the depth-map data, f_x and f_y are the focal lengths of the 3D camera in the x and y directions respectively, and (c_x, c_y) is the offset of the optical axis from the center of the projection-plane coordinates.
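A minimal sketch of this back-projection, assuming the standard pinhole camera model that the variables above describe (the function name is hypothetical):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Map a pixel (u, v) with depth value D(u, v) to a 3D point in the
    camera frame: X = (u - cx)*Z/fx, Y = (v - cy)*Z/fy, Z = D(u, v)."""
    z = float(depth)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)
```

Applying this to every pixel of the 2D carotid projection yields the 3D curve on the neck surface that a scanning robot could follow.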
As shown in FIG. 3, determining the projected position of the carotid artery using radial basis functions according to the feature-point parameters includes:
S301: compute the probability distribution of the projected position of the carotid artery on the color map, according to the feature-point parameters:
P(x | F) = Σ_{i=1..n} λ_i · exp(-‖x - μ_i‖² / (2σ_i²))
where F is the feature-point set, n is the number of feature points, μ_i is the center of the i-th radial basis function (determined by the feature-point coordinates), and σ_i and λ_i are the variance and coefficient of the radial basis function, the parameters to be trained.
S302: estimate the model parameters λ and σ from the projection-position probability distribution (or pre-compute and store the model parameters using a maximum-likelihood estimation model). Specifically, training data {X_i, F_i} covering different deflection poses is collected and annotated in advance, where F_i is an extracted feature-point set and X_i is the corresponding set of projected carotid-artery position points in the image.
S303: according to the feature-point parameters and the parameters of the pre-built maximum-likelihood estimation model, extract the region whose probability reaches a set threshold together with the preset region, obtaining the projected position of the carotid artery on the color map.
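Steps S301-S303 can be sketched as follows. This assumes a Gaussian form for the radial basis functions, consistent with the variance/coefficient parameters described above but not spelled out in the source; all names are illustrative, and the trained parameters are supplied directly rather than estimated.

```python
import numpy as np

def projection_probability(points, centers, sigmas, lambdas):
    """Evaluate the RBF model of the carotid projection position,
    P(x | F) = sum_i lambda_i * exp(-||x - mu_i||^2 / (2 * sigma_i^2)),
    at each query point. centers come from the extracted feature points;
    sigmas and lambdas are the trained parameters."""
    points = np.atleast_2d(np.asarray(points, dtype=float))
    prob = np.zeros(len(points))
    for mu, sigma, lam in zip(centers, sigmas, lambdas):
        d2 = np.sum((points - np.asarray(mu, dtype=float)) ** 2, axis=1)
        prob += lam * np.exp(-d2 / (2.0 * sigma ** 2))
    return prob

def extract_projection(pixels, centers, sigmas, lambdas, threshold):
    """Keep the pixels whose modelled probability reaches the set
    threshold; these form the projected carotid region on the color map."""
    prob = projection_probability(pixels, centers, sigmas, lambdas)
    return [tuple(p) for p, keep in zip(pixels, prob >= threshold) if keep]
```

A pixel at an RBF center scores λ_i, while pixels far from all centers score near zero, so thresholding the map isolates the modelled projection region.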
As can be seen from the above, the present disclosure provides a 3D-vision-based human carotid artery localization method that models the position of the carotid artery from face and neck features. It overcomes the inability of a 3D camera to perceive the carotid artery directly, avoids manual path planning and searching before an autonomous ultrasound scan, and automates the entire workflow.
As shown in FIGS. 5 and 6, the 3D-vision-based human carotid artery localization device 600 in an embodiment of the present disclosure includes:
an acquisition module 601 for acquiring ultrasound image data;
a generation module 602 for detecting ultrasound image data using deep learning and generating feature-point parameters for specific positions of the human body;
a determination module 603 for determining the projected position of the carotid artery using radial basis functions, according to the feature-point parameters;
a guidance module 604 for projecting the position of the carotid artery in the ultrasound image onto the neck surface to guide autonomous ultrasound scanning.
The ultrasound image data includes aligned color-map data and depth-map data.
The acquisition module 601 captures aligned color-map data and depth-map data with a 3D camera.
The specific positions of the human body include at least one of the left mandible contour, the right mandible contour, the left neck contour, and the right neck contour.
The generation module 602 detects feature points of the left mandible contour, right mandible contour, left neck contour, and right neck contour using deep learning and generates the corresponding fitted curves;
uniformly samples each fitted curve according to a set number of samples;
computes the feature-point parameters from the number of samples to complete the sampling, the feature-point parameters including the sampling step and the sampling-point coordinates;
and fits a curve through the extracted feature points by Lagrange interpolation according to the feature-point parameters, generating the corresponding fitted curve.
The curve fitting of the extracted feature points by Lagrange interpolation generates the corresponding fitted curve according to:
L(x) = Σ_{i=0..k-1} y_i · l_i(x), with l_i(x) = Π_{j≠i} (x - x_j) / (x_i - x_j)
where L(x) is the analytical expression of the fitted curve, l_i(x) is one term in the expression L(x), (x_i, y_i) are the coordinates of a feature point, x is the abscissa, y = L(x) is the ordinate, and k is the number of feature points.
The determination module 603 computes the probability distribution of the projected position of the carotid artery on the color map, according to the feature-point parameters;
computes the parameters of the pre-built maximum-likelihood estimation model from the projection-position probability distribution;
and extracts the region whose probability reaches a set threshold together with the preset region, according to the feature-point parameters and the parameters of the pre-built maximum-likelihood estimation model, obtaining the projected position of the carotid artery on the color map.
If the probability of the extracted region does not reach the set threshold and/or the region falls outside the preset region, the determination module 603 records all sampling-point coordinates of the failed curve as the preset value.
Projecting the carotid artery onto the neck surface to guide autonomous ultrasound scanning uses the following formulas:
X = (u - c_x) · D(u, v) / f_x,  Y = (v - c_y) · D(u, v) / f_y,  Z = D(u, v)
where (u, v) are the pixel coordinates of a projected position point, D is the depth-map data, f_x and f_y are the focal lengths of the 3D camera in the x and y directions respectively, and (c_x, c_y) is the offset of the optical axis from the center of the projection-plane coordinates.
The 3D-vision-based human carotid artery localization device provided by the embodiments of the present disclosure can execute the 3D-vision-based human carotid artery localization method provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to the method.
FIG. 7 is a schematic diagram of the electronic device provided by an embodiment of the present disclosure. As shown in FIG. 7, the electronic device of this embodiment includes a processor 701, a memory 702, and a computer program 703 stored in the memory 702 and executable on the processor 701. When the processor 701 executes the computer program 703, it implements the steps of each of the method embodiments above; alternatively, it implements the functions of each module/unit in the device embodiments above.
Illustratively, the computer program 703 may be divided into one or more modules/units, which are stored in the memory 702 and executed by the processor 701 to carry out the present disclosure. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, used to describe the execution of the computer program 703 in the electronic device.
The electronic device may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another electronic device. The electronic device may include, but is not limited to, the processor 701 and the memory 702. Those skilled in the art will understand that FIG. 7 is only an example of an electronic device and does not limit it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the electronic device may also include input/output devices, network access devices, buses, and the like.
The processor 701 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 702 may be an internal storage unit of the electronic device, for example a hard disk or memory of the electronic device. The memory 702 may also be an external storage device of the electronic device, for example a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the electronic device. Further, the memory 702 may include both an internal storage unit of the electronic device and an external storage device. The memory 702 stores the computer program and the other programs and data required by the electronic device, and may also temporarily store data that has been or will be output.
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中,上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述系统中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and simplicity of description, only the division of the above-mentioned functional units and modules is used as an example for illustration. In practical applications, the above-mentioned functions can be allocated to different functional units, Module completion means dividing the internal structure of the device into different functional units or modules to complete all or part of the functions described above. Each functional unit and module in the embodiment may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit, and the above-mentioned integrated units may adopt hardware. It can also be realized in the form of software functional units. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing from each other, and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above-mentioned system, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts that are not described or described in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本公开的范围。Those of ordinary skill in the art can realize that the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this disclosure.
在本公开所提供的实施例中,应该理解到,所揭露的装置/电子设备和方法,可以通过其它的方式实现。例如,以上所描述的装置/电子设备实施例仅仅是示意性的,例如,模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口,装置或单元的间接耦合或通讯连接,可以是电性,机械或其它的形式。In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other manners. For example, the apparatus/electronic device embodiments described above are only illustrative. For example, the division of modules or units is only a logical function division. In actual implementation, there may be other division methods. Multiple units or components may be Incorporation may either be integrated into another system, or some features may be omitted, or not implemented. On the other hand, the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units.
If implemented in the form of software functional units and sold or used as stand-alone products, the integrated modules/units may be stored in a computer-readable storage medium. Based on this understanding, the present disclosure may implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, it implements the steps of each of the foregoing method embodiments. The computer program may include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, computer memory, read-only memory (ROM), random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content encompassed by a computer-readable medium may be expanded or restricted as appropriate in accordance with the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media exclude electrical carrier signals and telecommunication signals.
The above embodiments are merely intended to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent replacements for some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all fall within the protection scope of the present disclosure.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210527495.8A CN114782537B (en) | 2022-05-16 | 2022-05-16 | Human carotid artery positioning method and device based on 3D vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114782537A true CN114782537A (en) | 2022-07-22 |
CN114782537B CN114782537B (en) | 2025-06-27 |
Family
ID=82437650
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210527495.8A Active CN114782537B (en) | 2022-05-16 | 2022-05-16 | Human carotid artery positioning method and device based on 3D vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114782537B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117898769A (en) * | 2024-02-06 | 2024-04-19 | 哈尔滨库柏特科技有限公司 | Autonomous ultrasonic robot carotid artery scanning method and device based on three-dimensional reconstruction |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110257545A1 (en) * | 2010-04-20 | 2011-10-20 | Suri Jasjit S | Imaging based symptomatic classification and cardiovascular stroke risk score estimation |
CN105407807A (en) * | 2013-07-24 | 2016-03-16 | 皇家飞利浦有限公司 | Non-imaging two dimensional array probe and system for automated screening of carotid stenosis |
CN111789634A (en) * | 2020-06-09 | 2020-10-20 | 浙江大学 | A path planning method for automatic ultrasound scanning of human spine |
CN112151169A (en) * | 2020-09-22 | 2020-12-29 | 深圳市人工智能与机器人研究院 | A method and system for autonomous scanning of an ultrasonic robot with human-like operation |
CN113456106A (en) * | 2021-08-03 | 2021-10-01 | 无锡祥生医疗科技股份有限公司 | Carotid scanning method, device and computer readable storage medium |
CN113951930A (en) * | 2021-09-16 | 2022-01-21 | 李世岩 | Three-dimensional neck ultrasonic automatic scanning and evaluation system and method |
- 2022-05-16 CN CN202210527495.8A patent/CN114782537B/en active Active
Non-Patent Citations (1)
Title |
---|
GUO, Junfeng et al.: "Generation Method of Grayscale Images from Point Clouds of 3D Ultrasound Scanning Cross-Sections", Journal of Gansu Sciences, vol. 29, no. 06, 25 December 2017 (2017-12-25), pages 41 - 45 *
Also Published As
Publication number | Publication date |
---|---|
CN114782537B (en) | 2025-06-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111028205B (en) | Eye pupil positioning method and device based on binocular distance measurement | |
KR102013866B1 (en) | Method and apparatus for calculating camera location using surgical video | |
US10318839B2 (en) | Method for automatic detection of anatomical landmarks in volumetric data | |
CN107918925B (en) | Registration of Magnetic Tracking System and Imaging Device | |
US20160199147A1 (en) | Method and apparatus for coordinating position of surgery region and surgical tool during image guided surgery | |
CN107563304B (en) | Terminal device unlocking method and device, and terminal device | |
CN110728673A (en) | Target part analysis method and device, computer equipment and storage medium | |
WO2018216341A1 (en) | Information processing device, information processing method, and program | |
CN110742631A (en) | Imaging method and device for medical image | |
CN112634309A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN113902932A (en) | Feature extraction method, visual positioning method and device, medium and electronic equipment | |
CN115205286A (en) | Tower-climbing robot manipulator bolt identification and positioning method, storage medium, terminal | |
CN112288708B (en) | Method, device, medium, and electronic device for detecting lymph node in CT image | |
CN115836322B (en) | Image cropping method and device, electronic device and storage medium | |
CN110348351A (en) | Image semantic segmentation method, terminal and readable storage medium | |
CN114782537B (en) | Human carotid artery positioning method and device based on 3D vision | |
CN115797451A (en) | Acupuncture point identification method, device and equipment and readable storage medium | |
WO2022105745A1 (en) | Method and apparatus for determining pose of tracked object during image tracking process | |
JP2006113832A (en) | Stereoscopic image processor and program | |
WO2024259938A1 (en) | Travelable region segmentation method and apparatus, readable storage medium, and robot | |
CN118035595A (en) | Point cloud picture generation method, device and equipment with temperature information and storage medium | |
WO2022127318A1 (en) | Scanning positioning method and apparatus, storage medium and electronic device | |
CN116363030A (en) | Medical image processing method, medical image processing device, electronic equipment and storage medium | |
CN113140031B (en) | Three-dimensional image modeling system and method and oral cavity scanning equipment applying same | |
CN115880428A (en) | A method, device and equipment for processing animal detection data based on three-dimensional technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||