CN102999902B - Optical navigation positioning navigation method based on CT registration result
- Publication number
- CN102999902B CN102999902B CN201210454220.2A CN201210454220A CN102999902B CN 102999902 B CN102999902 B CN 102999902B CN 201210454220 A CN201210454220 A CN 201210454220A CN 102999902 B CN102999902 B CN 102999902B
- Authority
- CN
- China
- Prior art keywords
- image
- preoperative
- registration
- dimensional
- module
- Prior art date: 2012-11-13
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
An optical navigation positioning system based on CT registration results, and a navigation method thereof, in the technical field of information processing. The system comprises: a preoperative CT image import module, an image segmentation module, a body-surface initial registration module, a module for registering preoperative CT images with intraoperative two-dimensional ultrasound images, and an intraoperative navigation module. The invention combines virtual reality with intraoperative ultrasound to compensate for intraoperative positioning errors caused by factors such as respiration, thereby achieving accurate positioning and navigation to the target point of coronary artery bypass grafting. The heart and the coronary vascular tree are manually segmented and reconstructed from the preoperative cardiac CT image data; then, with the help of an optical navigator and intraoperative registration error correction based on CT and ultrasound, an augmented virtual reality environment is built in which the real endoscope view is fused with a virtual endoscope view, enabling precise positioning and navigation to the target point of coronary artery bypass grafting.
Description
Technical Field
The present invention relates to a system and method in the technical field of information processing, and specifically to a navigation and positioning system, and a navigation method thereof, based on the registration of intraoperative ultrasound with preoperative CT, for assisting coronary artery bypass surgery.
Background Art
In recent years the incidence of cardiovascular disease in China has risen year by year, and coronary heart disease is the most common cardiovascular disease. Among treatments for coronary heart disease, coronary artery bypass grafting (CABG) is currently the principal and most mature option besides drug therapy and interventional therapy. Traditional CABG, however, requires a median sternotomy and, when necessary, extracorporeal circulation to complete the operation, with drawbacks including a large incision, slow recovery, and frequent complications. Navigation-assisted minimally invasive CABG, a newer treatment, requires only a few finger-width incisions in the chest wall and special surgical instruments, achieving cosmetically acceptable incisions, less trauma, faster recovery, and fewer complications.
One of the main surgical difficulties in navigation-assisted minimally invasive coronary artery bypass grafting is how to locate the target point quickly and accurately; improper or incorrect positioning directly affects the success of the operation and its long-term efficacy. At present, positioning in minimally invasive coronary bypass surgery relies mainly on preoperative images; it does not exploit real-time information reflecting the actual state of the surgical field and cannot resolve intraoperative errors caused by respiration, displacement of body-surface markers, changes in patient position, and similar factors, so the positioning results are unsatisfactory.
A search of the prior art found that in "3D-image guidance for minimally invasive robotic coronary artery bypass", Heart Surg Forum 2000-9732(3), T.M. Peters was the first to attempt preoperative planning of cardiac surgery with three-dimensional images. By segmenting CT images acquired from the patient before surgery, he obtained surface models of the heart and skeleton and preliminarily established a surgical navigation system for minimally invasive bypass surgery. The system also provides a virtual endoscope: using preoperative registration, the relative position of the real endoscope to the phantom in bench tests is made consistent with that of the virtual endoscope to the cardiac and skeletal models built in the system. However, it is only a prototype with many shortcomings; for example, the real 2D endoscopic images are not effectively fused with the virtual 3D scene.
In "Flexible calibration of actuated stereoscopic endoscope for overlay in robot assisted surgery", MICCAI 2002(1): LNCS 2488, T. Dohi and R. Kikinis (eds), 25-34, Mourgues and Coste-Manière overlaid the endoscopic view of an animal heart with a virtual-endoscope rendering of the three-dimensional model of its coronary tree, each at a certain transparency, which helped to some extent in locating target points during surgery. However, the intraoperative coronary-tree model is static, and the fusion of the three-dimensional coronary-tree model with the endoscopic image depends on identifying fiducial points observed in the endoscope to bring the two into accurate alignment; the robustness and accuracy of this approach are not very satisfactory.
Chinese patent document CN1650813 describes a "robot surgical positioning method based on optical positioning for a surgical navigation system". In this method three marker positions are first designed on the robot base and are selected with the pointer of an optical tracker; when the robot probe and the optical tracker pointer are docked, the same spatial coordinate system is acquired simultaneously, and finally the coordinates of the optical tracker and the robot system are exchanged. The technique is used mainly in neurosurgery, where anatomical deformation is small and accuracy is easier to control; it cannot be applied to the cardiovascular system, where anatomical deformation is large.
Chinese patent document CN101703409A describes "a system and method for ultrasound-guided robot-assisted therapy". The system comprises a surgical robot, a two-dimensional ultrasound scanner, a magnetic locator, and a workstation; by combining them it achieves effective detection and automated treatment of the diseased region, reducing the physician's workload and improving surgical accuracy. However, the method uses a magnetic locator, which is susceptible to interference from the operating environment, instruments, and other equipment, and is therefore error-prone. In addition, three-dimensionally reconstructed ultrasound suffers from image distortion, so guiding surgery with it may introduce deviations; the amount of information it provides is limited and insufficient for delicate, complex procedures such as the precise manipulation required in cardiac surgery.
It can be seen that although robot-assisted treatment systems have begun to be used clinically, their development in surgical target localization is still imperfect and many clinical problems remain to be solved.
Summary of the Invention
To address the above shortcomings of the prior art, the present invention proposes an optical navigation and positioning system based on CT registration results, and a navigation method thereof. By combining virtual reality with intraoperative ultrasound, it compensates for intraoperative positioning errors caused by factors such as respiration, thereby achieving accurate positioning and navigation to the target point of coronary artery bypass grafting. The heart and the coronary vascular tree are manually segmented and reconstructed from the preoperative CT image data; then, with the help of an optical navigator and intraoperative registration error correction based on the preoperative CT images and two-dimensional ultrasound images, an augmented virtual reality environment fusing the real and virtual endoscope views is built, enabling precise positioning and navigation to the target point of coronary artery bypass grafting.
The present invention is realized through the following technical solution:
The present invention relates to an optical navigation and positioning system based on CT registration results, comprising: a preoperative CT image import module, an image segmentation module, a body-surface initial registration module, a module for registering preoperative CT images with intraoperative two-dimensional ultrasound images, and an intraoperative navigation module. The preoperative CT image import module receives DICOM-format image files obtained from the preoperative imaging examination, generates a preoperative image package, and outputs it to the image segmentation module and the body-surface initial registration module. The registration module for preoperative CT and intraoperative two-dimensional ultrasound images is connected to the image segmentation module and the body-surface initial registration module and receives the three-dimensional dynamic coronary tree carrying the surgical target points and the body-surface initial registration result; it outputs preoperative CT transformation matrices to the intraoperative navigation module, which in turn outputs accurate surgical navigation information.
The preoperative image package comprises: a number of preoperative CT images in DICOM format covering one or several cardiac cycles of the target region, together with the three-dimensional images reconstructed from the DICOM data, wherein the preoperative CT image data of one cardiac cycle consist of preoperative images at several phases.
The three-dimensional dynamic coronary tree carrying the surgical target points comprises the dynamic heart model and the vascular tree model generated by the image segmentation module.
The preoperative image import module comprises a DICOM image read-in unit and a preoperative image three-dimensional rendering unit. The DICOM image read-in unit imports the series of DICOM-format image files of one cardiac cycle obtained from the preoperative imaging examination, parses out the DICOM data, and transmits them to the preoperative image three-dimensional rendering unit; the rendering unit reconstructs three-dimensional images from the DICOM data, merges them into the preoperative image package, and outputs it to the image segmentation module and the body-surface initial registration module.
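As an illustration only, the following sketch shows how such a DICOM read-in unit might load one cardiac phase into a volume; it assumes the pydicom and numpy libraries, and the directory layout, sorting key, and function name are hypothetical rather than the patent's actual implementation.

```python
# Hypothetical sketch: load one cardiac phase of a CT series into a 3D volume.
# Assumes pydicom and numpy; the file layout and sorting key are illustrative only.
import glob
import numpy as np
import pydicom

def load_ct_phase(phase_dir):
    """Read all DICOM slices in a directory and stack them into a volume."""
    slices = [pydicom.dcmread(f) for f in glob.glob(phase_dir + "/*.dcm")]
    # Sort slices along the table axis using the z component of ImagePositionPatient.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    spacing = (float(slices[0].SliceThickness),
               float(slices[0].PixelSpacing[0]),
               float(slices[0].PixelSpacing[1]))
    return volume, spacing

# volume, spacing = load_ct_phase("preop_ct/phase_00")
```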
The image segmentation module comprises a heart segmentation unit and a vascular tree segmentation unit. The heart segmentation unit manually outlines the heart contour, frame by frame and based on clinical experience, in the cardiac CT images of one cardiac cycle in the preoperative image package and reconstructs a dynamic heart model from the resulting segmentation. The vascular tree segmentation unit manually outlines the vascular tree contour, frame by frame and based on clinical experience, in the vascular tree CT images of one cardiac cycle in the preoperative image package and manually marks the surgical target points according to clinical needs; from the resulting segmentation it reconstructs a vascular tree model and outputs it, together with the dynamic heart model, as the three-dimensional dynamic coronary tree carrying the surgical target points to the registration module for preoperative CT and intraoperative two-dimensional ultrasound images.
The body-surface initial registration module comprises a registration marker selection unit and a transformation matrix calculation unit. The registration marker selection unit selects the registration markers placed on the patient's body surface and obtains their coordinates in the image coordinate system of the preoperative CT data as well as their spatial coordinates on the real patient. Using these two sets of coordinates, the transformation matrix calculation unit applies a rigid registration algorithm to register the two coordinate systems, i.e., to register image space with patient space, and computes the initial transformation matrix, which is output as the body-surface initial registration result to the registration module for preoperative CT and intraoperative two-dimensional ultrasound images.
The registration module for preoperative CT and intraoperative two-dimensional ultrasound images comprises an intraoperative two-dimensional ultrasound image read-in unit, a unit for registering two-dimensional ultrasound images with preoperative CT images, and an ECG (electrocardiogram) read-in unit. The registration unit receives the two-dimensional ultrasound images acquired by the intraoperative ultrasound read-in unit and the current cardiac phase acquired by the ECG read-in unit; using the ECG signal it matches the corresponding phases of the three-dimensional dynamic coronary tree carrying the surgical target points with the current cardiac phase, computes the transformation matrices between them, and outputs them to the intraoperative navigation module, where each transformation matrix corresponds to the heart, and the three-dimensional coronary tree carrying the surgical target points, at one phase of the cardiac cycle.
The intraoperative navigation module comprises a transformation matrix selection unit and a navigation display unit. The transformation matrix selection unit selects the transformation matrix corresponding to the current cardiac phase and passes it to the navigation display unit, which computes the distance and relative position between the current virtual instrument and the actual target point and displays them on video, thereby guiding the operation to an accurate completion.
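As a rough illustration of what the transformation matrix selection unit and the navigation display unit compute, the sketch below picks the matrix for the current ECG phase and reports the distance from the tracked instrument tip to the target point; the phase indexing, coordinate conventions, and names are assumptions, not the patent's code.

```python
# Hypothetical sketch of the navigation step: pick the transform for the current
# cardiac phase and compute the instrument-to-target distance.
import numpy as np

def to_homogeneous(p):
    return np.append(np.asarray(p, dtype=float), 1.0)

def navigate(phase_transforms, current_phase, tip_world, target_image):
    """phase_transforms: list of 4x4 matrices, one per cardiac phase, mapping
    patient (world) space to preoperative CT image space."""
    T_phase = phase_transforms[current_phase]          # matrix for this phase
    tip_image = T_phase @ to_homogeneous(tip_world)    # instrument tip in CT space
    offset = np.asarray(target_image) - tip_image[:3]  # vector to the target point
    return np.linalg.norm(offset), offset

# distance, direction = navigate(Ts, phase_idx, tracked_tip_xyz, target_xyz)
```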
The present invention also relates to a navigation method using the above system, comprising the following steps:
Step 1: after obtaining, through the preoperative CT image import module, a series of CT image data (in DICOM format) covering one cardiac cycle of the target region, the heart segmentation unit of the image segmentation module manually outlines the heart contour frame by frame, based on clinical experience, in the cardiac CT images of that cycle, obtains the segmentation result, and reconstructs a dynamic heart model from it.
Step 2: the vascular tree segmentation unit of the image segmentation module manually outlines the vascular tree contour, frame by frame and based on clinical experience, in the vascular tree CT images of one cardiac cycle in the preoperative image package, manually marks the surgical target points according to clinical needs, obtains the segmentation result, reconstructs a vascular tree model from it, and builds a three-dimensional virtual scene containing the dynamic vascular tree model; this virtual scene constitutes the virtual endoscope image.
Step 3: the body-surface initial registration module uses body-surface registration to obtain the transformation matrix between the preoperative CT image coordinate system and the coordinate system of the patient's real-time two-dimensional ultrasound images, i.e., the coordinates in two different spatial coordinate systems are mapped onto one another through corresponding features, establishing the correspondence between the preoperative CT images and the patient's real-time two-dimensional ultrasound images. Specifically, several registration markers are selected in the preoperative CT images, the corresponding points are found in real space, and their real-space coordinates are obtained with the optical navigator; from the position coordinates of these two point sets, expressed in different coordinate systems but in one-to-one correspondence, the transformation matrix T between the two spaces is computed, as sketched below.
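The patent does not spell out the rigid registration algorithm; a common closed-form choice for paired fiducials is the SVD-based least-squares fit sketched here, under the assumption that the markers are given as Nx3 numpy arrays (all names are illustrative).

```python
# Hypothetical sketch: closed-form rigid registration of paired fiducial points
# (CT-space markers vs. the same markers digitized with the optical navigator).
import numpy as np

def rigid_register(points_src, points_dst):
    """Return a 4x4 matrix T with dst ≈ R @ src + t, from Nx3 paired point sets."""
    P = np.asarray(points_src, dtype=float)
    Q = np.asarray(points_dst, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```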
Step 4: the corresponding phases of the three-dimensional dynamic coronary tree carrying the surgical target points are matched with the current cardiac phase using the input ECG signal. Taking the above transformation matrix T as the initial transformation, let Ti (i = 1, 2, ..., N) be the transformation matrix that corrects T for the CT image of each phase in the preoperative image package; that is, for the CT image i of one phase, Ti×T further corrects the registration error between that phase's CT image and the patient's real-time two-dimensional ultrasound images, where N is the number of phases of the preoperative CT images within one cardiac cycle. This yields a set of transformation matrices that correct T. The specific steps are:
4.1) A series of real-time two-dimensional ultrasound images of the patient's heart is acquired with the ultrasound probe; each acquired real-time image corresponds to a particular preoperative CT image, and each phase of the preoperative CT corresponds to a series of two-dimensional ultrasound images. For the preoperative CT data of phase i, the surface contour of the inner wall of the heart is extracted;
4.2) the contour of the inner heart wall is extracted from each two-dimensional ultrasound image corresponding to phase i; the inner-wall contours extracted from this series of ultrasound images form one point set, from which a set of feature points is extracted, while the inner-wall surface contour extracted from the preoperative CT images forms another point set;
4.3) the point set from the two-dimensional ultrasound images is registered to the point set from the preoperative CT images with the iterative closest point (ICP) algorithm, yielding the transformation matrix Ti between the two point sets, which serves as the starting matrix for the subsequent fine registration (see the sketch after this list).
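A minimal ICP sketch follows, assuming the CT-side contour is dense enough for nearest-neighbour search with a k-d tree; scipy's cKDTree and the rigid_register helper from the earlier sketch are used purely for illustration and are not the patent's implementation.

```python
# Hypothetical ICP sketch: align ultrasound inner-wall points to CT inner-wall points.
import numpy as np
from scipy.spatial import cKDTree

def icp(us_points, ct_points, T_init, iterations=50, tol=1e-6):
    """Refine a 4x4 initial transform so that us_points map onto ct_points."""
    ct_points = np.asarray(ct_points, dtype=float)
    tree = cKDTree(ct_points)
    T = np.array(T_init, dtype=float)
    src_h = np.c_[np.asarray(us_points, dtype=float), np.ones(len(us_points))]
    prev_err = np.inf
    for _ in range(iterations):
        moved = (T @ src_h.T).T[:, :3]
        dists, idx = tree.query(moved)                   # closest CT point for each
        T_step = rigid_register(moved, ct_points[idx])   # best fit to the matches
        T = T_step @ T
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T
```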
Step 5: the real endoscope image is fused with the virtual endoscope image by means of extracorporeal optical positioning.
The extracorporeal optical positioning refers to infrared reflective tracking with an NDI optical tracker, which provides three-dimensional localization of the tracked objects.
The fusion means that by blending the two scenes at different transparencies an augmented virtual reality environment is formed, so that the virtual scene containing the beating heart model seen by the virtual endoscope matches the real scene seen by the real endoscope. At the same time, the transformation matrix Ti corresponding to the current cardiac phase is selected from the set of transformation matrices obtained in step 4.3, establishing the mapping between image space and real space; this mapping is finally used to display, in real time, the distance and relative position between the instrument and the target point, thereby guiding accurate execution of the operation.
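The transparency blending itself can be as simple as a weighted sum of the two rendered frames; the sketch below is a plain numpy illustration under the assumption that both views are available as HxWx3 uint8 arrays, not a description of the patent's rendering pipeline.

```python
# Hypothetical sketch of the overlay step: blend the real endoscope frame with the
# rendered virtual endoscope frame at a chosen transparency.
import numpy as np

def blend_views(real_frame, virtual_frame, alpha=0.5):
    """alpha weights the real view; (1 - alpha) weights the virtual rendering."""
    mixed = (alpha * real_frame.astype(np.float32)
             + (1.0 - alpha) * virtual_frame.astype(np.float32))
    return np.clip(mixed, 0, 255).astype(np.uint8)
```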
Technical Effects
Advantages of the present invention include: 1. it is the first to introduce intraoperative ultrasound into navigation for robot-assisted coronary artery bypass surgery, correcting errors caused by respiration and other factors through the registration of intraoperative two-dimensional ultrasound images with preoperative CT images; 2. it resolves the long-standing clinical uncertainty of locating the bypass target point by personal experience alone.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the module structure of the present invention.
Fig. 2 shows several three-dimensional views of the heart in the embodiment (including the coronary tree and the corresponding positions on the ECG).
Fig. 3 is a schematic diagram of the optical navigator and the registration module.
Fig. 4 is a schematic flow chart of the embodiment.
Fig. 5 is a schematic diagram of the fusion in the embodiment;
in the figure: a is the real endoscope image; b is the virtual endoscope image; c is the overlay of the two.
Detailed Description
The embodiments of the present invention are described in detail below. This embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation and specific operating procedure are given, but the scope of protection of the present invention is not limited to the following embodiment.
Embodiment 1
As shown in Fig. 1, this embodiment comprises: a preoperative CT image import module, an image segmentation module, a body-surface initial registration module, a module for registering preoperative CT images with intraoperative two-dimensional ultrasound images, and an intraoperative navigation module. The preoperative CT image import module receives DICOM-format image files obtained from the preoperative imaging examination, generates a preoperative image package, and outputs it to the image segmentation module and the body-surface initial registration module. The registration module for preoperative CT and intraoperative two-dimensional ultrasound images is connected to the image segmentation module and the body-surface initial registration module and receives the three-dimensional dynamic coronary tree carrying the surgical target points and the body-surface initial registration result; it outputs preoperative CT transformation matrices to the intraoperative navigation module, which in turn outputs accurate surgical navigation information.
The preoperative image package comprises: a number of preoperative CT images in DICOM format covering one or several cardiac cycles of the target region, together with the three-dimensional images reconstructed from the DICOM data.
The three-dimensional dynamic coronary tree carrying the surgical target points comprises the dynamic heart model and the vascular tree model generated by the image segmentation module.
The preoperative CT image import module comprises a DICOM image read-in unit and a preoperative image three-dimensional rendering unit. The DICOM image read-in unit imports the series of DICOM-format image files of one cardiac cycle obtained from the preoperative imaging examination, parses out the DICOM data, and transmits them to the preoperative image three-dimensional rendering unit, which reconstructs three-dimensional images from the DICOM data, merges them into the preoperative image package, and outputs it to the image segmentation module and the body-surface initial registration module.
The image segmentation module comprises a heart segmentation unit and a vascular tree segmentation unit. The heart segmentation unit manually outlines the heart contour, frame by frame and based on clinical experience, in the cardiac CT images of one cardiac cycle in the preoperative image package and reconstructs a dynamic heart model from the resulting segmentation. The vascular tree segmentation unit manually outlines the vascular tree contour, frame by frame and based on clinical experience, in the vascular tree CT images of one cardiac cycle in the preoperative image package and manually marks the surgical target points according to clinical needs; from the resulting segmentation it reconstructs a vascular tree model and outputs it, together with the dynamic heart model, as the three-dimensional dynamic coronary tree carrying the surgical target points shown in Fig. 2 to the registration module for preoperative CT and intraoperative two-dimensional ultrasound images.
The body-surface initial registration module comprises a registration marker selection unit and a transformation matrix calculation unit. The registration marker selection unit selects the registration markers placed on the patient's body surface and obtains their coordinates in the image coordinate system of the CT data as well as their spatial coordinates. Using these two sets of coordinates, the transformation matrix calculation unit applies a rigid registration algorithm to register the two coordinate systems, i.e., to register image space with patient space, and computes the initial transformation matrix, which is output as the body-surface initial registration result to the registration module for preoperative CT and intraoperative two-dimensional ultrasound images.
The registration markers are applied as follows: when the preoperative CT data are acquired, 6-8 metal markers are attached to the patient's body surface, evenly distributed over the thorax; the metal markers appear highlighted in the preoperative CT image data.
The spatial coordinates are obtained by providing an NDI Spectra optical navigator in the registration marker selection unit.
The registration module for preoperative CT and intraoperative two-dimensional ultrasound images comprises an intraoperative two-dimensional ultrasound image read-in unit, a unit for registering two-dimensional ultrasound images with preoperative CT images, and an ECG (electrocardiogram) read-in unit. The registration unit receives the two-dimensional ultrasound images acquired by the intraoperative ultrasound read-in unit and the current cardiac phase acquired by the ECG read-in unit; using the ECG signal it matches the corresponding phases of the three-dimensional dynamic coronary tree carrying the surgical target points with the current cardiac phase, computes the preoperative CT transformation matrices between them, and outputs them to the intraoperative navigation module, where each group of preoperative CT transformation matrices corresponds to one pairing of the current cardiac phase with the corresponding phase of the three-dimensional dynamic coronary tree carrying the surgical target points.
The intraoperative navigation module comprises a transformation matrix selection unit and a navigation display unit. The transformation matrix selection unit selects, from the preoperative CT transformation matrices, the group of matrices corresponding to the current cardiac phase and passes it to the navigation display unit, which computes the distance and relative position between the current virtual instrument and the actual target point and displays them on video, thereby guiding the operation to an accurate completion.
The navigation method comprises the following steps:
Step 1: after obtaining, through the preoperative CT image import module, a series of CT image data (in DICOM format) covering one cardiac cycle of the target region, the heart segmentation unit of the image segmentation module manually outlines the heart contour frame by frame, based on clinical experience, in the cardiac CT images of that cycle, obtains the segmentation result, and reconstructs a dynamic heart model from it.
Step 2: the vascular tree segmentation unit of the image segmentation module manually outlines the vascular tree contour, frame by frame and based on clinical experience, in the vascular tree CT images of one cardiac cycle in the preoperative image package, manually marks the surgical target points according to clinical needs, obtains the segmentation result, reconstructs a vascular tree model from it, and builds a three-dimensional virtual scene containing the dynamic vascular tree model; this virtual scene constitutes the virtual endoscope image.
Step 3: the body-surface initial registration module uses body-surface registration to obtain the transformation matrix between the image coordinate system of the preoperative CT data and the spatial coordinates of the real patient, i.e., the coordinates in two different spatial coordinate systems are mapped onto one another through corresponding features, establishing the correspondence between the preoperative CT images and the patient's real-time two-dimensional ultrasound images. Specifically, several registration markers are selected in the preoperative CT images, the corresponding points are found in real space, and their real-space coordinates are obtained with the optical navigator; from the position coordinates of these two point sets, expressed in different coordinate systems but in one-to-one correspondence, the transformation matrix between the two spaces is computed.
The body-surface registration uses a rigid registration algorithm; in two-dimensional space, the rigid transformation that maps a point (x1, y1) to a point (x2, y2) is

$$\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix},$$

where θ is the rotation angle and (t_x, t_y)^T is the translation.
The transformation matrix T satisfies [X2] = T[X1], where X1 and X2 are the coordinates of the same point in the patient's two-dimensional ultrasound image space and in the preoperative CT image data space, respectively.
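A small illustration of the [X2] = T[X1] convention, assuming column homogeneous coordinates and numpy; the numeric values and the identity placeholder for T are made up.

```python
# Hypothetical illustration of [X2] = T [X1]: map a point from the patient's
# ultrasound (world) space X1 into preoperative CT image space X2.
import numpy as np

T = np.eye(4)                              # placeholder; in practice the registration result
X1 = np.array([12.5, -3.0, 47.2, 1.0])     # homogeneous point in ultrasound space
X2 = T @ X1                                # corresponding homogeneous point in CT space
print(X2[:3] / X2[3])
```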
Step 4: since the preoperative CT data of one cardiac cycle consist of preoperative images at several phases, the corresponding phases of the three-dimensional dynamic coronary tree carrying the surgical target points are matched with the current cardiac phase using the input ECG signal. The specific steps are:
4.1) A series of real-time two-dimensional ultrasound images of the patient's heart is acquired with the ultrasound probe; each acquired real-time image corresponds to a particular preoperative CT image, and each phase of the preoperative CT corresponds to a series of two-dimensional ultrasound images. For the preoperative CT data of phase i, the surface contour of the inner wall of the heart is extracted; that is, for the preoperative CT image i of one phase, Ti×T further corrects the registration error between that phase's CT image and the patient's real-time two-dimensional ultrasound images, where N is the number of phases of the preoperative CT images within one cardiac cycle. The specific procedure is as follows:
The intraoperative two-dimensional ultrasound images are acquired with a calibrated ultrasound probe, and the two-dimensional ultrasound image coordinate system is transformed into the preoperative CT data coordinate system through T so that the ultrasound images can be fused with the preoperative CT images. Let Ti (i = 1, 2, ..., N) be the transformation matrix that corrects T for the preoperative CT image of each phase in the preoperative image package; that is, for the preoperative CT image i of one phase, corresponding to Ti in the set of transformation matrices, Ti×T further corrects the registration error between that phase's CT image and the patient's real-time two-dimensional ultrasound images. The procedure is: a series of two-dimensional ultrasound images of the heart is acquired with the calibrated ultrasound probe; thanks to the ECG signal, each acquired two-dimensional ultrasound image corresponds to a particular preoperative CT image, so that, once acquisition is complete, each phase of the preoperative CT corresponds to a series of two-dimensional ultrasound images. For the preoperative CT data of phase i, the surface contour of the inner heart wall is extracted; then the contour of the inner heart wall is extracted from each two-dimensional ultrasound image corresponding to that phase. The inner-wall contours extracted from this series of ultrasound images form one point set, and the inner-wall surface contour extracted from the preoperative CT images forms another. Registering the point set from the two-dimensional ultrasound images to the point set from the preoperative CT images with the iterative closest point (ICP) algorithm yields a new transformation matrix Ti, and Ti×T further improves the accuracy achieved by T alone;
The calibrated ultrasound probe means the following: to fuse the real-time two-dimensional ultrasound images into the navigation system, the transformation matrix from the two-dimensional ultrasound image coordinate system to the navigator coordinate system must be determined. Let TM_td←ui be the transformation matrix from the two-dimensional ultrasound image coordinate system to the coordinate system of the tracking device (optical reflective spheres or an electromagnetic sensor) fixed on the ultrasound probe, and let TM_ui←td be the transformation matrix from the coordinate system of the tracking device on the probe to the world coordinate system (the navigator coordinate system). The coordinates of a point in the two-dimensional ultrasound image can then be transformed into world coordinates by

$$\begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix} = TM_{ui\leftarrow td}\; TM_{td\leftarrow ui} \begin{pmatrix} s_x u_k \\ s_y u_v \\ 0 \\ 1 \end{pmatrix},$$

where (u_k, u_v) are the point's coordinates in the two-dimensional ultrasound image coordinate system, (s_x, s_y) are the scale factors along the x and y axes, and (x_w, y_w, z_w) are its coordinates in the world coordinate system. Calibrating the ultrasound probe means determining the transformation matrix TM_td←ui from the two-dimensional ultrasound image coordinate system to the coordinate system of the tracking device fixed on the probe;
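A sketch of applying this calibration chain follows: it maps an ultrasound pixel to navigator (world) coordinates by scaling the pixel and multiplying through the two matrices. The spelled-out argument names (image-to-device calibration, device-to-world tracker pose) are assumptions for readability, not the patent's notation.

```python
# Hypothetical sketch: map an ultrasound pixel (u_k, u_v) to world coordinates using
# the probe calibration matrix (image -> tracking device) and the tracker pose
# (tracking device -> world) reported by the optical navigator.
import numpy as np

def pixel_to_world(u_k, u_v, s_x, s_y, TM_td_from_ui, TM_w_from_td):
    p_image = np.array([s_x * u_k, s_y * u_v, 0.0, 1.0])  # scaled, in-plane point
    p_world = TM_w_from_td @ TM_td_from_ui @ p_image
    return p_world[:3]
```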
After the above steps, the point set from the two-dimensional ultrasound images is registered to the point set from the preoperative CT images with the iterative closest point (ICP) algorithm, yielding the transformation matrix Ti between the two point sets, which serves as the starting matrix for the subsequent fine registration. Specifically, let P = {p_i} and Q = {q_i}, i = 1, ..., n, be the two point sets to be registered; the key to the registration problem is to find the rotation R and translation T that minimize

$$\sum_{i=1}^{n} \left\| q_i - (R\,p_i + T) \right\|^2;$$
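For reference, a standard closed-form solution of this least-squares problem (not given explicitly in the patent) uses the point-set centroids and an SVD of the cross-covariance matrix:

$$\bar{p}=\frac{1}{n}\sum_{i=1}^{n} p_i,\quad \bar{q}=\frac{1}{n}\sum_{i=1}^{n} q_i,\quad H=\sum_{i=1}^{n}(p_i-\bar{p})(q_i-\bar{q})^{T},\quad H=U\Sigma V^{T},\quad R=VU^{T},\quad T=\bar{q}-R\bar{p},$$

with the sign of the last column of V flipped when det(VU^T) < 0 so that R is a proper rotation rather than a reflection.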
After the initial ICP registration is completed, each phase i of the preoperative CT images has its own transformation matrix Ti×T. In the subsequent navigation stage, fine registration is carried out continuously in real time, and the initial transformation matrix for each fine registration is Ti×T. For the preoperative CT image of a given phase, the goal is to find the optimal transformation matrix that maximizes the similarity measure, denoted T′i, so that the errors caused by respiration and other factors are finally corrected and the positioning requirements for the target vessel are met; the similarity measure used in this process is normalized mutual information.
Normalized mutual information is defined as

$$NMI(M,R) = \frac{H(M) + H(R)}{H(M,R)},$$

where M is the gray-level point set obtained by sampling the preoperative CT in the region covered by the two-dimensional ultrasound image, R is the gray-level point set of the real-time two-dimensional ultrasound image, H(M) is the Shannon entropy of M, $H(M) = -\sum_{i_M} p(i_M)\log p(i_M)$, with i_M the gray value of a pixel of image M and p(i_M) the probability that a pixel of M has gray value i_M; H(R) is the Shannon entropy of R, $H(R) = -\sum_{i_R} p(i_R)\log p(i_R)$, with i_R the gray value of a pixel of image R and p(i_R) the probability that a pixel of R has gray value i_R; and H(M,R) is the joint entropy of M and R, $H(M,R) = -\sum_{i_M,i_R} p(i_M,i_R)\log p(i_M,i_R)$.
The whole registration process therefore amounts to finding the T′i that maximizes NMI(M,R). Because Ti×T is obtained by registration with the intraoperative two-dimensional ultrasound images, it already has a small error and is a good initial position for the optimization, so the search converges quickly to the optimal solution, yielding a final accurate transformation matrix.
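The sketch below shows one way to estimate this normalized mutual information from a joint intensity histogram, assuming numpy; the bin count and the log base are arbitrary choices, not values from the patent.

```python
# Hypothetical sketch: NMI(M, R) = (H(M) + H(R)) / H(M, R), estimated from a joint
# intensity histogram of two equally sized images.
import numpy as np

def nmi(img_m, img_r, bins=64):
    joint, _, _ = np.histogram2d(img_m.ravel(), img_r.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_m = p_joint.sum(axis=1)          # marginal distribution of M
    p_r = p_joint.sum(axis=0)          # marginal distribution of R

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(p_m) + entropy(p_r)) / entropy(p_joint.ravel())
```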
Step 5: the real endoscope image is fused with the virtual endoscope image by means of extracorporeal optical positioning.
As shown in Fig. 5, the fusion means that by blending the two scenes at different transparencies an augmented virtual reality environment is formed, so that the virtual scene containing the beating heart model seen by the virtual endoscope matches the real scene seen by the real endoscope. At the same time, the transformation matrix Ti corresponding to the current cardiac phase is selected from the set of transformation matrices obtained in step 4.3, establishing the mapping between image space and real space; this mapping is finally used to display, in real time, the distance and relative position between the instrument and the target point, thereby guiding accurate execution of the operation.
As shown in Fig. 3, the extracorporeal optical positioning refers to infrared reflective tracking with an NDI optical tracker, which provides three-dimensional localization of the tracked objects. The NDI optical navigator comprises a light source 8 and receiver 9, and a registration tool 10; the maximum effective distance of the light source and receiver is 3000 mm, and the maximum tracking area is 1470×1856 mm². The registration tool consists of a reflective-sphere part 11 and a long registration needle 12 at the front end. Before the CT scan, 6-8 metal markers are attached evenly to the patient so that they appear highlighted in the preoperative CT data. During surgery the patient lies on the operating table in the same position and is connected to the vital-sign monitoring equipment; after general anesthesia, the coordinates of the metal markers in the image coordinate system are obtained from the images while their coordinates in the real patient space are obtained with the NDI optical navigator, and a rigid registration algorithm is used to register the two coordinate systems, thereby registering image space with patient space.
From the positions of the endoscope and the robot arm transmitted in real time by the optical tracker, the software can display the relative position of the robot arm and the model on the computer screen in real time. To fuse the real endoscope with the virtual endoscope, the transformation matrix from the world coordinate system to the endoscope coordinate system and the matrix from the endoscope coordinate system to the endoscope projection coordinate system must be determined, i.e., the endoscope must be calibrated. Once the endoscope is calibrated, the input position and orientation of the endoscope make the state of the virtual endoscope exactly consistent with that of the real endoscope, so that the virtual scene containing the beating heart model seen by the virtual endoscope matches the real scene seen by the real endoscope; blending the two scenes at different transparencies forms an augmented virtual reality environment that effectively guides accurate execution of the operation.
上述的内窥镜标定:内窥镜图像的获取是一个3D场景在一个2D投影平面投影所得的结果。 The above-mentioned endoscope calibration: the acquisition of the endoscope image is the result obtained by projecting a 3D scene on a 2D projection plane.
其中,X=[x y z 1]T是3D场景中一个点的齐次坐标系,x、y、z分别表示点在x轴、y轴、z轴上的坐标,[u v 1]T代表了该点在2D投影平面中的坐标,u、v分别表示点在x轴、y轴上的坐标,λ是2D投影平面其次坐标系的缩放因子。上式中包含了一个从世界坐标系转换到内窥镜坐标系的转换矩阵Mext以及一个从内窥镜坐标系转换内窥镜投影坐标系的转换矩阵Mint。Mext是一个4×4转换矩阵,可以表示为: Among them, X=[xyz 1] T is the homogeneous coordinate system of a point in the 3D scene, x, y, and z represent the coordinates of the point on the x-axis, y-axis, and z-axis respectively, and [uv 1] T represents the The coordinates of the point in the 2D projection plane, u and v represent the coordinates of the point on the x-axis and y-axis respectively, and λ is the scaling factor of the secondary coordinate system of the 2D projection plane. The above formula includes a conversion matrix M ext for converting from the world coordinate system to the endoscope coordinate system and a conversion matrix M int for converting from the endoscope coordinate system to the endoscope projection coordinate system. M ext is a 4×4 transformation matrix, which can be expressed as:
其中,r1......r9为旋转因子,tx,ty,tz为平移向量。 Among them, r 1 ... r 9 are rotation factors, t x , ty , t z are translation vectors.
M_int can be expressed as

        [ f   0    u0  0 ]
M_int = [ 0   s·f  v0  0 ]
        [ 0   0    1   0 ]

where f is the distance from the lens focal point to the center of the image plane, s is the aspect ratio of the lens field of view, and (u0, v0) are the coordinates of the image center in the 2D projection coordinate system. Calibrating the endoscope means determining the two matrices M_int and M_ext.
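To make the projection model concrete, the sketch below pushes a 3D world point through M_ext and M_int and divides out λ to obtain image coordinates. The numeric values of f, s, (u0, v0) and the endoscope pose are hypothetical, and the intrinsic matrix follows the common pinhole parameterization assumed in the reconstruction above rather than a form stated explicitly by the patent.

```python
import numpy as np

def project_point(X_world, M_ext, M_int):
    """Map a 3D world point to 2D endoscope image coordinates using the
    extrinsic matrix M_ext (world -> endoscope) and the intrinsic matrix
    M_int (endoscope -> projection plane)."""
    X_h = np.append(np.asarray(X_world, dtype=float), 1.0)  # homogeneous [x y z 1]
    p = M_int @ (M_ext @ X_h)                                # lambda * [u v 1]
    return p[:2] / p[2]                                      # divide out lambda

# Hypothetical calibration values: focal length f, aspect ratio s, image center (u0, v0).
f, s, u0, v0 = 800.0, 1.0, 320.0, 240.0
M_int = np.array([[f,   0.0,   u0, 0.0],
                  [0.0, s * f, v0, 0.0],
                  [0.0, 0.0,  1.0, 0.0]])
# Hypothetical extrinsics: identity rotation, scene 100 mm in front of the endoscope.
M_ext = np.eye(4)
M_ext[2, 3] = 100.0
print(project_point([10.0, -5.0, 50.0], M_ext, M_int))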
This positioning technique localizes the target coronary artery quickly and accurately, with an average accuracy of about 3 mm. It greatly reduces the time spent searching for the diseased coronary artery in robot-assisted coronary artery bypass grafting and makes the operation safer and more efficient.
To avoid the errors caused by respiration, heartbeat and similar factors during registration of the preoperative CT image with real-time two-dimensional ultrasound of the surgical subject, intraoperative ultrasound is introduced into the navigation of robotic coronary artery bypass surgery; this is the distinguishing feature of the present invention.
The beneficial effects that can be obtained are: 1. an optical navigation and positioning method based on the registration of intraoperative ultrasound with preoperative CT is realized; 2. the long-standing clinical problem of relying on personal experience to locate the bypass target point is solved, making the surgical procedure safer and more efficient.