
CN112906205B - Virtual learning method for total hip replacement surgery - Google Patents


Info

Publication number
CN112906205B
CN112906205B (application CN202110126745.2A)
Authority
CN
China
Prior art keywords
virtual
force feedback
training
equipment
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110126745.2A
Other languages
Chinese (zh)
Other versions
CN112906205A (en)
Inventor
张日威
何燕
蔡述庭
熊晓明
郭靖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202110126745.2A
Publication of CN112906205A
Application granted
Publication of CN112906205B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/20: Design optimisation, verification or simulation
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00: Simulators for teaching or training purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a virtual learning method for total hip replacement surgery. The system is first initialized, a communication connection is established between the input end and the controller, and the mapping between the force feedback device and the controller is completed in a real-time virtual scene. During operation, the input end interacts with the surgical instrument model in the virtual scene through three-dimensional graphical input control, while the operator watches the training picture in real time through image feedback and receives prompts with key information throughout training. The invention provides vivid, intuitive perception, improves the understanding of abstract principles, alleviates the shortage of expensive medical and surgical resources across regions of China, and reduces surgical risk.

Description

A virtual learning method for total hip replacement surgery

Technical Field

The invention relates to the technical field of total hip replacement, and in particular to a virtual learning method for total hip replacement surgery.

Background

The development of surgery has gone through three stages: open surgery, manual minimally invasive surgery, and robot-assisted surgery. Surgery is moving in an ever more refined, dexterous and intelligent direction.

Open surgery is the traditional operation, in which bones and internal organs are exposed by incising the body surface. Its advantage is that the surgeon can fully grasp the anatomy of the lesion area directly with the naked eye and thus has good control of the operation. Its drawbacks, however, are a large wound, severe pain, heavy bleeding, a long hospital stay and slow wound healing, and even a potentially increased risk of postoperative complications. Minimally invasive surgery, that is, surgery with minimal trauma, is performed mainly with laparoscopes, thoracoscopes and other modern medical instruments and related equipment. It has been widely adopted in surgical treatment thanks to its small incisions, low blood loss and short hospitalization. However, manual minimally invasive surgery suffers from limited positioning accuracy, spatial and environmental constraints, intraoperative fatigue, and hand tremor caused by long working hours, so robot-assisted surgery has become one of the international frontier research hotspots.

With the development of automation and robotics, surgical robots have begun to penetrate every link of surgical planning, minimally invasive positioning and non-invasive treatment. By filtering out hand tremor, scaling hand motions down into smaller movements, and controlling the manipulator through a series of surgical operations, a surgical robot improves the surgeon's accuracy. In robot-assisted surgery, however, the surgeon's operating habits differ from those of traditional surgery, chiefly because the master-slave control mode is not as flexible and convenient as purely manual operation. A physician therefore needs repeated training before the robot can be controlled proficiently. Common training modes include the human-computer interaction mode and the virtual learning mode. In the human-computer interaction mode the joystick is controlled directly to make the robot complete specified actions; this is costly and cannot accommodate the diversity of surgical objects within a single system, so it cannot be used widely. A virtual learning system instead raises proficiency in advance, before transitioning to the human-computer interaction mode for subsequent operations. Built on computer software and hardware and assisted by related technical means, a virtual learning system is an advanced computer application technology that lets people genuinely experience a simulation of the known or unknown world. The virtual trainer provides a three-dimensional dynamic visualization and physical-behaviour interaction simulation with multi-source information fusion, which is of great significance for surgeons' surgical training, outcome prediction, surgical planning and navigation.

Existing technologies:

1) A digital virtual-simulation dental training method for oral clinical skill training. Evaluating performance through operation completion time, degree of target completion, error scoring and comprehensive scoring, it is a novel and effective approach to dental laboratory teaching.

2) A virtual learning method for mandibular surgery, which supports multiple operation modes such as grasping, cutting, drilling and suturing at the user's end. For cutting and drilling, the system renders the optimal planned path in the virtual scene in real time and displays the progress of the operation as a percentage. When the error rate exceeds 10%, the cutting line is tracked promptly and failure is declared.

3) A haptic-feedback-based learning method for cardiomyocyte surgery, which plays a key role in training surgeons' hand-eye coordination and their ability to perform high-precision injection tasks.

4) A virtual orthopaedic surgery simulator learning method, which enables surgeons to train for fracture replacement surgery on a personal computer without any expensive hardware.

However, there is as yet no virtual learning method for total hip replacement surgery under the master-slave control mode, so in a real operation any mistake leads to the operation being declared a failure.

Summary of the Invention

The object of the present invention is to overcome the above problems in the prior art and to provide a virtual learning method for total hip replacement surgery.

To achieve the above object, the technical scheme provided by the present invention is as follows:

A virtual learning method for total hip replacement surgery: first the system is initialized, a communication connection is established between the input end and the controller, and the mapping between the force feedback device and the controller is completed in a real-time virtual scene; during operation, the input end interacts with the surgical instrument model in the virtual scene through three-dimensional graphical input control, while the operator watches the training picture in real time through image feedback and is given prompts with key information throughout the training process.

Further, the specific process is as follows:

S1. Connect the force feedback device and the controller.

S2. Install the "Unity 5 Haptic Plugin for Geomagic OpenHaptics 3.3" in Unity 5.2.3 or later to communicate with the force feedback device.

S3. Call the programming interface of the "Haptic Plugin for Geomagic OpenHaptics 3.3" to obtain the relevant parameters of the force feedback device.

S4. Achieve real-time interaction between the force feedback device and the virtual device configured in Unity through 3D-graphics-based input.

S5. Select the attitude-angle input mode to exchange data with the device.

S6. Initialize the plug-in's workspace and working mode.

S7. Continuously update the workspace according to the camera position, and link the virtual force feedback device to the acquired values through real-time communication.

S8. Build the bone model in the virtual scene.

S9. Create the surgical scene and surgical instruments in the virtual scene.

S10. Set virtual key points on the training object model.

S11. Design the image feedback information.

S12. Design the camera focusing function.

S13. Start training and evaluation.

Further, in step S3, the programming interface is divided into two parts: the Haptic Device API and the Haptic Library API;

the Haptic Device API provides the low-level interface of the device, giving direct access, through callback functions, to the real-time parameters of the devices in the device-state lookup table;

the Haptic Library provides the upper-level interface. At run time, the host program obtains, through the programming interface at a frequency of 1000 Hz, information from the force feedback device including position, attitude, joint angles, end-effector velocity and joint angular velocities, and continuously sends this information to the controller to achieve closed-loop control.

Further, in step S5, during communication between the 3D-graphics-based input and the force feedback device, device initialization is completed through the "hdInitDevice" function provided by the plug-in, with default parameter "HD_DEFAULT_DEVICE"; the "hdGetCurrentDevice" function is then used to find the current device, and the "hdGetDoublev" callback function reads the device parameters, which comprise the current translational position, the current translational velocity and acceleration, and the current attitude angle; "hdGetDoublev" is called in the form "(parameter name, return value type)"; "hdBeginFrame" and "hdEndFrame" are then used as the start and end points of data access to guarantee access consistency, so that the data of the force feedback device and the controller are updated synchronously; in testing, information including the translational position, velocity and acceleration is read directly through the access interface provided by the Haptic Device API.

Further, to obtain the attitude-angle information, the 16-element end-pose array is first obtained through the Haptic Device API access interface and then converted into the velocity information of each axis in the equivalent axis-angle coordinate system; the 16-element end-pose array described in the equivalent axis-angle representation is given by equation (1):

$$T=\begin{pmatrix}t[0]&t[4]&t[8]&t[12]\\t[1]&t[5]&t[9]&t[13]\\t[2]&t[6]&t[10]&t[14]\\t[3]&t[7]&t[11]&t[15]\end{pmatrix},\qquad R_K(\theta)=T_{3\times 3}\tag{1}$$

where $R_K(\theta)$ is the rotation matrix (the upper-left 3×3 block of $T$); the sixteen elements of the array are denoted $t[0]\dots t[15]$ in column-major order; $K$ is the equivalent rotation axis of the current attitude relative to the base coordinate frame; and the attitude-angle velocities in the x, y and z directions can be approximated by finite differences;

From equation (1), equations (2) and (3) are obtained:

$$\theta=\arccos\frac{t[0]+t[5]+t[10]-1}{2}\tag{2}$$

where $\theta$ is the corresponding rotation angle about the axis;

$$K=\frac{1}{2\sin\theta}\begin{pmatrix}t[6]-t[9]\\t[8]-t[2]\\t[1]-t[4]\end{pmatrix}\tag{3}$$

Further, step S6 is as follows:

The active space of the force feedback device in the real scene is approximated as a cube, and its size data are converted from a "float3" array into an "IntPtr" through the "ConvertFloat3ToIntPtr" instruction in "SetHapticWorkSpace";

"ConvertIntPtrToFloat3" in "GetHapticWorkSpace" then converts the "IntPtr" back into a "float3" array in the Unity editor to determine the space dimensions;

Next, the "UpdateHapticWorkspace" function in the plug-in updates the workspace according to the camera position, and the "IndicateMode" function sets the training interaction mode;

The next step is to set the state of the created object in Unity3D to "Touchable Face", which is any one of "front", "back" and "front and back";

When setting up force feedback, the relevant properties, including amplitude, duration and gain, are created and set in the scripts for "Environment constant force", "viscosity", "spring effect" and "friction effect"; setting up the objects requires obtaining the array of all objects carrying the "Touchable" tag and their mesh properties, then drawing the geometry and reading the geometric features so as to start the force feedback events of all the different objects.

Further, step S8 builds the bone model in the virtual scene in two steps: CT image segmentation and three-dimensional modelling;

The CT image segmentation completes the segmentation of the bone CT data with the mimics-based threshold method and region growing. After the patient's CT atlas is imported into the mimics interface, a threshold is selected for binarization, and the pixels of the CT atlas whose grey values fall within the threshold range are retained. Threshold segmentation mainly exploits the difference in grey-level characteristics between the target region to be extracted and its background, dividing the CT image into target and background regions to produce the corresponding binary image; the region-growing tool then splits the binarized image into multiple blocks while removing floating pixels.

Further, in step S9, the surgical environment is inserted in the form of textures, while the force feedback model is assembled from the simple solid models built into Unity3D. A main camera is created in the "Hierarchy" panel to capture the running picture seen during operation training; a "Skybox" component is added in the "Inspector" to design the texture style, and "Clear Flags" in the Camera component is switched to "Skybox" mode. The surgical environment is a solid space pieced together from six textures (front, back, left, right, top and bottom), while the virtual surgical tool is assembled from a sphere and a capsule, with the centre of the sphere serving as the positioning coordinate of the whole tool.

Further, in step S12, "Field of view" in the Camera component is set by a script on the camera, and focusing is performed with mouse buttons at run time.
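The mouse-driven focusing in step S12 amounts to nudging the camera's field of view by a fixed step per wheel click and clamping it to a usable range. A minimal sketch of that logic follows; the step size and limits are assumptions for illustration, not values from the patent:

```python
def adjust_fov(fov, wheel_steps, step=2.0, lo=20.0, hi=60.0):
    """Return a new field-of-view after `wheel_steps` mouse-wheel clicks.

    Scrolling forward (positive steps) zooms in by narrowing the FOV;
    the result is clamped to [lo, hi]. All numbers are illustrative.
    """
    return max(lo, min(hi, fov - wheel_steps * step))
```

In a Unity script the returned value would be assigned to the camera's field-of-view property each frame; here the function is kept pure so the clamping behaviour is easy to check.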

Further, in step S13, the experiment proceeds in two main steps. While the Game panel is running, focusing is first adjusted so that the training picture switches to the angle most comfortable for the operator; the force feedback device is then steered to approach the key points at the correct angle, and the task is not complete until both key points turn blue. The operator's training proficiency is evaluated from the elapsed time shown on the Game panel.
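The "key point turns blue" condition in step S13 can be read as a combined distance-and-approach-angle test. The sketch below is one plausible implementation; the tolerance values and the idea of comparing the tool direction against a key-point normal are assumptions, since the patent only states that the device must approach "at the correct angle":

```python
import math

def key_point_done(tool_pos, tool_dir, kp_pos, kp_normal,
                   dist_tol=0.5, angle_tol_deg=15.0):
    """Hypothetical success test for one key point.

    The point counts as reached ('turns blue') when the tool tip is within
    dist_tol of the key point AND the tool's approach direction deviates
    from the key point's surface normal by at most angle_tol_deg degrees.
    Tolerances are illustrative, not taken from the patent.
    """
    dist = math.dist(tool_pos, kp_pos)
    dot = sum(a * b for a, b in zip(tool_dir, kp_normal))
    norm = math.hypot(*tool_dir) * math.hypot(*kp_normal)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return dist <= dist_tol and angle <= angle_tol_deg
```

Calling this per frame for both key points, and timing how long the operator takes until both return true, reproduces the evaluation-by-elapsed-time scheme described above.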

Compared with the prior art, the principles and advantages of this scheme are as follows:

(1) Through the "OpenHaptics" toolkit, developers can apply force or haptic feedback devices to a wide range of fields, such as surgical simulation and medical training, aerospace and military training, assistive technology for blind or visually impaired people, and games and entertainment.

(2) The coordinated operation of the force feedback device and the virtual device is achieved with 3D-graphics-based input; compared with keyboard character input, the input signals of the force feedback device can be collected flexibly and the interaction completed quickly.

(3) The attitude-angle input mode is selected for data exchange with the device; compared with translational input, it makes full use of the six degrees of freedom of the force feedback device to complete the data input.

(4) The learning system can simulate the mass of the surgical instruments and the tactile feel of the virtual objects, giving a more realistic experience of the surgical scene.

(5) The virtual femoral-head model is built strictly from the patient's CT images, and the key-point information on the bone is created according to the current requirements of total hip replacement surgery, so the whole training model is authentic.

(6) The training scene can be oriented to diverse surgical environments, and the training complexity can easily be set in Unity according to the difficulty of the actual procedure. The real-time display of time and distance in the image feedback link can give some path guidance during the user's operation, thereby assisting the completion of training.

(7) Adjusting the camera focus with mouse buttons in the virtual scene makes it easy to switch the picture information of the scene as needed during training.

Brief Description of the Drawings

To explain the embodiments of the present invention or the technical schemes of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative work.

Fig. 1 is a schematic diagram of the virtual learning system for total hip replacement surgery;

Fig. 2 is a schematic diagram of the mimics-based virtual bone model scene;

Fig. 3 is a schematic diagram of the virtual simulation training scene.

Detailed Description of the Embodiments

The present invention is further described below in conjunction with specific embodiments:

Referring to Figs. 1-3, in a virtual learning method for total hip replacement surgery, first the system is initialized, a communication connection is established between the input end and the controller, and the mapping between the force feedback device and the controller is completed in a real-time virtual scene; during operation, the input end interacts with the surgical instrument model in the virtual scene through three-dimensional graphical input control, while the operator watches the training picture in real time through image feedback and is given prompts with key information throughout the training process.

The communication between the input end and the controller, and the mapping into the virtual scene, mainly use the OpenHaptics interface to access the data of the force feedback device, package it as a plug-in and pass it to the Unity3D thread, thereby completing control from the Unity interface.

The specific process is as follows:

S1. Connect the force feedback device and the controller: connect them with the supplied USB cable, then plug the power cord into the socket and into the connector on the back of the force feedback device to supply power. After connection, two quick flashes of the blue LED indicate that the controller and the force feedback device are connected successfully.

S2. Install the "Unity 5 Haptic Plugin for Geomagic OpenHaptics 3.3" in Unity 5.2.3 or later to communicate with the force feedback device.

S3. Call the programming interface of the "Haptic Plugin for Geomagic OpenHaptics 3.3" to obtain the relevant parameters of the force feedback device. The programming interface is divided into the Haptic Device API (HDAPI) and the Haptic Library API (HLAPI).

The HDAPI provides the low-level interface of the device and gives direct access, through callback functions, to the real-time parameters of the devices in the device-state lookup table;

The HLAPI provides an upper-level interface for programmers familiar with OpenGL. At run time, the host program obtains, through the programming interface at a frequency of 1000 Hz, information such as the position, attitude, joint angles, end-effector velocity and joint angular velocities of the force feedback device, and continuously sends this information to the controller to achieve closed-loop control.
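The shape of this 1000 Hz acquisition loop can be sketched without the real hardware. Below, `read_device_state` is a hypothetical stand-in for the HDAPI queries that a real servo loop would issue between `hdBeginFrame` and `hdEndFrame`; the point is only the structure of sampling at a fixed rate and handing each sample to the controller side:

```python
import math

def read_device_state(t):
    """Hypothetical stand-in for an HDAPI query at time t (seconds).

    A real implementation would call hdGetDoublev between
    hdBeginFrame/hdEndFrame; here we synthesize plausible values.
    """
    return {
        "position": (math.sin(t), math.cos(t), 0.0),   # end-effector position
        "joint_angles": (0.1 * t, 0.2 * t, 0.3 * t),   # first three joints
    }

def acquire(frames, rate_hz=1000.0):
    """Collect `frames` samples at the nominal servo rate.

    Each iteration models one begin/end frame pair; in the closed loop
    each sample would be forwarded to the controller, here it is stored.
    """
    dt = 1.0 / rate_hz
    samples = []
    for i in range(frames):
        samples.append(read_device_state(i * dt))
    return samples

states = acquire(5)
```

The 1 ms spacing (`dt`) matches the 1000 Hz rate stated above; a real loop would be driven by the haptics scheduler rather than a plain `for` loop.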

S4. Achieve real-time interaction between the force feedback device and the virtual device configured in Unity through 3D-graphics-based input.

S5. Select the attitude-angle input mode to exchange data with the device. Specifically, during communication between the 3D-graphics-based input and the force feedback device, device initialization is completed through the "hdInitDevice" function provided by the plug-in, with default parameter "HD_DEFAULT_DEVICE"; the "hdGetCurrentDevice" function is then used to find the current device, and the "hdGetDoublev" callback function reads the device parameters, which comprise the current translational position, the current translational velocity and acceleration, and the current attitude angle; "hdGetDoublev" is called in the form "(parameter name, return value type)"; "hdBeginFrame" and "hdEndFrame" are then used as the start and end points of data access to guarantee access consistency, so that the data of the force feedback device and the controller are updated synchronously; in testing, information including the translational position, velocity and acceleration is read directly through the access interface provided by the Haptic Device API.

To obtain the attitude-angle information, the 16-element end-pose array is first obtained through the Haptic Device API access interface and then converted into the velocity information of each axis in the equivalent axis-angle coordinate system; the 16-element end-pose array described in the equivalent axis-angle representation is given by equation (1):

$$T=\begin{pmatrix}t[0]&t[4]&t[8]&t[12]\\t[1]&t[5]&t[9]&t[13]\\t[2]&t[6]&t[10]&t[14]\\t[3]&t[7]&t[11]&t[15]\end{pmatrix},\qquad R_K(\theta)=T_{3\times 3}\tag{1}$$

where $R_K(\theta)$ is the rotation matrix (the upper-left 3×3 block of $T$); the sixteen elements of the array are denoted $t[0]\dots t[15]$ in column-major order; $K$ is the equivalent rotation axis of the current attitude relative to the base coordinate frame; and the attitude-angle velocities in the x, y and z directions can be approximated by finite differences;

From equation (1), equations (2) and (3) are obtained:

$$\theta=\arccos\frac{t[0]+t[5]+t[10]-1}{2}\tag{2}$$

where $\theta$ is the corresponding rotation angle about the axis;

$$K=\frac{1}{2\sin\theta}\begin{pmatrix}t[6]-t[9]\\t[8]-t[2]\\t[1]-t[4]\end{pmatrix}\tag{3}$$
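The equivalent axis-angle extraction above is the standard conversion from a rotation matrix, and can be sketched directly. The sketch assumes, as the text suggests, a 16-element column-major homogeneous transform whose upper-left 3×3 block is the rotation; the finite-difference rate helper mirrors the "approximated by differences" remark:

```python
import math

def axis_angle_from_transform(t):
    """Recover the rotation angle theta and unit axis K from a 16-element
    column-major homogeneous transform t[0..15], per equations (2)-(3).

    Assumes sin(theta) != 0 (i.e. theta strictly between 0 and pi).
    """
    theta = math.acos((t[0] + t[5] + t[10] - 1.0) / 2.0)
    s = 2.0 * math.sin(theta)
    k = ((t[6] - t[9]) / s, (t[8] - t[2]) / s, (t[1] - t[4]) / s)
    return theta, k

def attitude_rate(theta_prev, theta_now, dt):
    """Finite-difference approximation of the attitude-angle velocity."""
    return (theta_now - theta_prev) / dt

# Example: a 90-degree rotation about z, stored column-major.
Tz = [0, 1, 0, 0,  -1, 0, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1]
theta, axis = axis_angle_from_transform(Tz)
```

For the example transform the trace of the rotation block is 1, so theta is pi/2 and the recovered axis is the z axis, as expected for a rotation about z.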

S6. Initialize the plug-in's workspace and working mode. The specific process is as follows:

The active space of the force feedback device in the real scene is approximated as a cube, and its size data (for example, 20 cm in length, width and height) are converted from a "float3" array into an "IntPtr" through the "ConvertFloat3ToIntPtr" instruction in "SetHapticWorkSpace";

"ConvertIntPtrToFloat3" in "GetHapticWorkSpace" then converts the "IntPtr" back into a "float3" array in the Unity editor to determine the space dimensions;

Next, the "UpdateHapticWorkspace" function in the plug-in updates the workspace according to the camera position, and the "IndicateMode" function sets the training interaction mode;

The next step is to set the state of the created object in Unity3D to "Touchable Face", which is any one of "front", "back" and "front and back";

When setting up force feedback, the relevant properties, including amplitude, duration and gain, are created and set in the scripts for "Environment constant force", "viscosity", "spring effect" and "friction effect"; setting up the objects requires obtaining the array of all objects carrying the "Touchable" tag and their mesh properties, then drawing the geometry and reading the geometric features so as to start the force feedback events of all the different objects.
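The effect of declaring a cube workspace is that stylus positions in the physical cube are scaled (and clamped) into the virtual scene. A minimal sketch of that mapping follows; the 20 cm device cube matches the example above, while the 10-unit scene cube is an assumption for illustration:

```python
def map_to_workspace(p_device, device_size=(0.20, 0.20, 0.20),
                     scene_size=(10.0, 10.0, 10.0)):
    """Map a stylus position (metres, centred on the physical cube) into
    virtual-scene coordinates, clamping so the virtual tool can never
    leave the declared workspace. Scene size is illustrative.
    """
    out = []
    for x, d, s in zip(p_device, device_size, scene_size):
        half = d / 2.0
        x = max(-half, min(half, x))      # clamp to the physical cube
        out.append(x / half * (s / 2.0))  # scale to the scene cube
    return tuple(out)
```

A position at the edge of the 20 cm cube thus lands at the edge of the scene cube, and anything beyond the physical cube is pinned to the boundary rather than mapped outside it.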

S7. Continuously update the workspace in the "void Update" loop: the workspace is updated according to the position of the camera, and the virtual force feedback device is linked with the acquired values to realize real-time communication.
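A per-frame update of the virtual tool from the device reading, expressed relative to the camera, might look like the following sketch (a Python/NumPy stand-in for the Unity "void Update" logic; the function name and scale factor are assumptions):

```python
import numpy as np

def update_tool_position(camera_pos, camera_rot, device_pos, scale=10.0):
    """Per-frame update: express the scaled device position in the camera's frame.

    camera_rot is a 3x3 rotation matrix giving the camera orientation in world
    space; scale maps device metres to scene units (illustrative value).
    """
    return np.asarray(camera_pos) + camera_rot @ (scale * np.asarray(device_pos))

# With an unrotated camera at (1, 2, 3), a 10 cm device offset moves the
# virtual tool one scene unit along x.
pos = update_tool_position((1.0, 2.0, 3.0), np.eye(3), (0.1, 0.0, 0.0))
```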

S8. Build the bone model in the virtual scene, in two steps: CT image segmentation and three-dimensional modeling;

The CT image segmentation uses Mimics' threshold method and region growing to segment the bone CT data. After the patient's CT image set is imported into the Mimics interface, a threshold is selected for binarization, retaining the pixels of the CT image set whose gray values fall within the threshold range. Threshold segmentation mainly exploits the difference in gray-level characteristics between the target region to be extracted and its background, dividing the CT image into target and background regions to produce a corresponding binary image. The region-growing tool then splits the binarized image into multiple blocks while removing floating pixels.
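The threshold-plus-region-growing step can be sketched outside Mimics as follows (a minimal Python/NumPy stand-in; the HU window and minimum region size are illustrative values, and Mimics' interactive tools are far more elaborate):

```python
import numpy as np
from collections import deque

def segment_bone(slice_hu, lo, hi, min_region=10):
    """Threshold a CT slice to a binary mask, then keep only 4-connected
    regions of at least min_region pixels, dropping "floating" pixels.
    """
    mask = (slice_hu >= lo) & (slice_hu <= hi)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # grow a 4-connected region from this seed
                region, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    region.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(region) >= min_region:  # keep large regions only
                    for y, x in region:
                        out[y, x] = True
    return out

# Toy slice: a 3x4 "bone" block plus one isolated bright pixel.
ct = np.zeros((8, 8))
ct[1:4, 1:5] = 400.0
ct[6, 6] = 400.0
bone = segment_bone(ct, 200.0, 2000.0, min_region=5)
```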

Next, the segmented CT image series is processed computationally to restore its three-dimensional structure. Common 3D reconstruction algorithms fall into two categories: surface rendering and volume rendering. Surface rendering extracts one or more tissue contours of interest from the images and, with the help of intermediate geometric primitives and algorithms, generates a mesh model for subsequent processing. Surface rendering algorithms mainly include contour-based surface reconstruction from tomographic slices and marching cubes; contour-based surface reconstruction, as the classic surface rendering algorithm, is the most widely used. Volume rendering resamples the acquired voxels directly through an algorithm and then completes the 3D reconstruction. Common volume rendering methods include ray casting, splatting, shear-warp, and texture mapping. Volume rendering can reproduce every detail of the 3D model and offers high image quality and easy parallelization; its disadvantages are the large amount of data to process, the long computation time, and the complexity of the algorithms, which reduce reconstruction efficiency.
Surface rendering only needs to process a small fraction of the data, so it is fast; however, it severs the connection between the structural contours and the overall structure, and key-point information may be lost. The virtual learning system proposed here places relatively high demands on three-dimensional detail, so the bone model is built by volume rendering based on ray casting. The training object model proposed in this patent consists of a virtual bone model created in Mimics and a translucent human body model created in SolidWorks. Both models are saved in "stl" format, converted to "obj" format in the 3D Builder software, and imported into the Unity3D scene.
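The core of the ray-casting volume rendering chosen here is front-to-back compositing of samples along each viewing ray, which can be sketched as follows (a grayscale, single-ray Python sketch; a real renderer adds 3D sampling, transfer functions, and GPU parallelism):

```python
def composite_ray(samples):
    """Front-to-back alpha compositing along one viewing ray.

    samples: list of (color, alpha) pairs ordered front to back; returns the
    accumulated (color, alpha). Grayscale colors keep the sketch short.
    """
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:  # early ray termination: ray is nearly opaque
            break
    return color, alpha
```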

S9. Create the surgical scene and surgical instruments in the virtual scene;

The surgical environment is inserted as textures, and the force feedback model is assembled from the simple primitive models built into Unity3D. A main camera is created in the "Hierarchy" panel to capture the running view seen during operation training; a "Skybox" component is added in the "Inspector" to design the texture style, and "Clear Flags" in the Camera component is switched to "Skybox" mode. The surgical environment is a three-dimensional space pieced together from six textures (front, back, left, right, top and bottom), while the virtual surgical tool is assembled from a sphere and a capsule, with the sphere's center serving as the positioning coordinate of the whole tool.

S10. Set virtual key points on the training object model;

In total hip replacement surgery, a metal acetabular cup is usually installed in the patient's acetabulum and fixed with 2-3 screws to connect with the femoral head prosthesis. The positions of the acetabular cup and the screws are the key to the whole operation and directly determine its success or failure. Based on surgical experience, a Kirschner wire is usually inserted vertically 2 cm above the upper rim of the acetabulum at the 12 o'clock position (the bone plate is thicker there, so the wire does not shift easily) to locate the acetabular cup; this embodiment therefore designs two red virtual key points to simulate Kirschner-wire positioning. Since a key point is small and hard to observe in the Game panel, it is simulated as a sphere of radius 1 cm centered on the point, with the center coordinate being the position coordinate of the key point.

S11. Design the image feedback information;

The image feedback information consists of the operation time display and the real-time distance from the surgical tool to the key points. The time display is updated at a fixed frequency in "FixedUpdate", with the integer value converted into minutes and seconds; when the time exceeds 60 seconds, the display changes from white to red to remind the user that the time has exceeded expectations. The real-time distance prompt is obtained during movement as the difference between the "Position" values under "Transform" of the surgical tool and the key point. When the distance between the virtual surgical instrument and the sphere center is less than 3 cm, the displayed value changes from red to green to tell the operator the target is close; when the distance is less than 1 cm and the instrument angles satisfy a 45° abduction angle and a 15° anteversion angle, the sphere changes from red to blue to indicate that the target position has been reached.
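The color-feedback rules of this step can be summarized as small state functions (a Python sketch; the ±5° angular tolerance is an assumption, since the patent states only the 45° abduction and 15° anteversion targets):

```python
def distance_color(dist_cm):
    """Distance readout turns green within 3 cm of the key point, else red."""
    return "green" if dist_cm < 3.0 else "red"

def keypoint_color(dist_cm, abduction_deg, anteversion_deg, tol_deg=5.0):
    """Key-point sphere turns blue once the tool is within 1 cm AND the
    instrument angles are near the targets (abduction 45°, anteversion 15°).
    tol_deg is an assumed tolerance, not taken from the patent.
    """
    angles_ok = (abs(abduction_deg - 45.0) <= tol_deg and
                 abs(anteversion_deg - 15.0) <= tol_deg)
    return "blue" if (dist_cm < 1.0 and angles_ok) else "red"

def timer_color(elapsed_s):
    """Time display turns red past the 60 s budget."""
    return "red" if elapsed_s > 60.0 else "white"
```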

S12. Design the camera focusing function. Since the scene in the Game panel is often not the best observation position for training, a new script is created on the camera to set the "Field of view" of the Camera component, and focusing is performed with the mouse buttons at run time.

S13. Start the training and evaluation. The experiment proceeds in two main steps: while the Game panel is running, the view is first focused to switch the training picture to the operator's most comfortable angle; then the force feedback device is steered toward the key points at the correct angle until both key points turn blue. The operator's training proficiency is evaluated by the elapsed time shown on the Game panel.

The embodiments described above are only preferred embodiments of the present invention and do not limit its scope of implementation; therefore, any changes made according to the shape and principle of the present invention shall fall within its scope of protection.

Claims (8)

1. A virtual learning method for total hip replacement surgery is characterized in that a system is initialized, communication connection between an input end and a controller is established, and mapping of a force feedback device and the controller is completed in a real-time virtual scene; in the operation process, the input end interacts with a surgical instrument model in a virtual scene in a three-dimensional graphic input control mode, and simultaneously an operator can see a training picture in time through image feedback and is provided with key information prompts in the training process;
the specific process is as follows:
S1, connecting a force feedback device and a controller;
S2, installing the "Unity 5 Haptic Plugin for Geomagic OpenHaptics 3.3" in Unity version 5.2.3 or above to communicate with the force feedback equipment;
S3, calling the programming interface of the "Haptic Plugin for Geomagic OpenHaptics 3.3" to acquire relevant parameters of the force feedback equipment;
S4, realizing real-time interaction between the force feedback equipment and the virtual equipment set in Unity through 3D graphic input;
S5, selecting an attitude angle input mode to complete data communication with the equipment;
S6, initializing the working space and working mode of the plug-in;
S7, continuously updating the working space according to the position of the camera, and completing the real-time communication connection between the virtual force feedback equipment and the obtained values;
S8, establishing a bone model in the virtual scene;
S9, creating the operation scene and operation instruments in the virtual scene;
S10, setting virtual key points on the training object model;
S11, designing image feedback information;
S12, designing a camera focusing function;
S13, starting training and evaluation;
in the step S3, a programming interface is divided into a Haptic Device API and a Haptic Library API;
the Haptic Device API provides a bottom-layer interface of the equipment, and directly accesses real-time parameters of various equipment in the equipment state query table through a callback function;
the Haptic Library API provides an upper-layer interface; in the operation process, the upper computer program obtains information of the force feedback equipment, including position, attitude, joint angles, terminal velocity and joint angular velocity, at a frequency of 1000 Hz through the programming interface and continuously sends the information to the controller to realize closed-loop control.
2. The virtual learning method for total hip replacement surgery as claimed in claim 1, wherein in step S5, in the communication process based on 3D graphic input and force feedback, device initialization is performed by the "hdInitDevice" function provided by the plug-in, wherein the default parameter is "HD_DEFAULT_DEVICE"; then the current device is found and its parameters are read with the "hdGetCurrentDevice" function and the "hdGetDoublev" callback function, the parameters comprising current translation position information, current translation velocity and acceleration information, and the current attitude angle; the invocation form of "hdGetDoublev" is "(parameter name, return value type)"; "hdBeginFrame" and "hdEndFrame" are then used as the start and end points of data access to ensure access consistency, thereby realizing synchronous updating of the data of the force feedback equipment and the controller; in the test, information including the translation position, velocity and acceleration is read directly through the access interface provided by the Haptic Device API.
3. The virtual learning method for total hip replacement surgery as claimed in claim 2, wherein, for the acquisition of the attitude angle information, a 16-element end attitude array is first obtained through the Haptic Device API access interface and then converted into the velocity information of each axis in the equivalent axis-angle coordinate system; the 16-element end attitude array described by the equivalent axis-angle coordinate system is shown in the following formula (1):
Figure FDA0003824159090000021
wherein R_K(θ) is the attitude matrix, the elements of the matrix are called the 16 elements, each expressed as t[·]; θ is the attitude angle of the current attitude relative to the base coordinate system, and the velocity information of the attitude angle in the x, y and z directions is obtained by differencing;
formulae (2) and (3) are obtainable from (1):
Figure FDA0003824159090000031
theta is the corresponding rotation angle around the coordinate axis;
Figure FDA0003824159090000032
4. the virtual learning method for total hip replacement surgery as claimed in claim 1, wherein the step S6 is as follows:
approximating the activity space of the force feedback device in a real scene as a cube, and converting its size data from the "float3" array to "IntPtr" through the "ConvertFloat3ToIntPtr" instruction in "SetHapticWorkSpace";
then, converting the IntPtr into a float3 array in a Unity editor by using ConvertIntPtrToFloat3 in GetHapticWorkSpace to determine the space size;
next, according to the position of the camera, using an 'UpdateHapticWorkspace' function in the plug-in to update a working space and an 'indicateMode' function to set a training interaction mode;
next, setting the created object state in Unity3D to "Touchable Face", which is any one of "front", "back", and "front and back";
in the process of setting force feedback, relevant attributes including amplitude, duration and gain are created and set in the scripts of "Environment constant force", "viscosity", "spring effect" and "friction effect"; setting an object requires acquiring the array of all objects with the "Touchable" tag, acquiring the mesh attributes of the objects, then drawing the geometry and reading the geometry features so as to start the force feedback events of all the different objects.
5. The virtual learning method for total hip replacement surgery as claimed in claim 1, wherein the step S8 establishes a bone model in a virtual scene, and is divided into two steps of CT image segmentation and three-dimensional modeling;
wherein the CT image segmentation completes the image segmentation of the bone CT data based on the threshold method and region growing of Mimics; after the patient's CT atlas is imported into the Mimics interface, a threshold is selected for binarization, and the pixels of the CT atlas whose gray values are within the threshold range are retained; the threshold segmentation mainly utilizes the difference in gray-level characteristics between the target region to be extracted in the image and its background, dividing the CT image into the target region and the background region so as to generate a corresponding binary image; the binary image is then segmented into a plurality of blocks by using a region-growing tool, and floating pixels are removed.
6. The virtual learning method for total hip replacement surgery as claimed in claim 1, wherein in step S9, the surgical environment is inserted in the form of textures, and the force feedback model is assembled from the simple primitive models built into Unity3D; a main Camera is created in the "Hierarchy" panel for capturing the running view seen in the operation training process, a "Skybox" component is added in the "Inspector" to design the texture style, and "Clear Flags" in the Camera component is switched to "Skybox" mode; the operation environment is a three-dimensional space formed by splicing six textures (front, back, left, right, top and bottom), the virtual operation tool is assembled from a sphere and a capsule, and the center of the sphere is used as the positioning coordinate of the whole operation tool.
7. The virtual learning method for total hip replacement surgery as claimed in claim 1, wherein in step S12, a script in the camera sets the "Field of view" of the Camera component, and focusing is performed by mouse button during operation.
8. The virtual learning method for total hip replacement surgery according to claim 1, wherein in step S13, the experiment is mainly performed in two steps, and during the operation of the Game panel, focusing is performed first to switch the training screen to the angle most comfortable for the operator; then controlling the force feedback equipment to approach the key points at a correct angle until the two key points turn blue; the familiarity of the operator training is assessed by the time spent on the Game panel during the training process.
CN202110126745.2A 2021-01-29 2021-01-29 Virtual learning method for total hip replacement surgery Active CN112906205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110126745.2A CN112906205B (en) 2021-01-29 2021-01-29 Virtual learning method for total hip replacement surgery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110126745.2A CN112906205B (en) 2021-01-29 2021-01-29 Virtual learning method for total hip replacement surgery

Publications (2)

Publication Number Publication Date
CN112906205A CN112906205A (en) 2021-06-04
CN112906205B true CN112906205B (en) 2023-01-20

Family

ID=76121113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110126745.2A Active CN112906205B (en) 2021-01-29 2021-01-29 Virtual learning method for total hip replacement surgery

Country Status (1)

Country Link
CN (1) CN112906205B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113633376B (en) * 2021-08-06 2024-03-15 吉林大学 Naked eye three-dimensional virtual replacement method for total hip joint
CN115273583A (en) * 2022-05-16 2022-11-01 华中科技大学同济医学院附属协和医院 A multi-person interactive orthopaedic clinical teaching method based on mixed reality
CN117711611B (en) * 2024-02-05 2024-04-19 四川省医学科学院·四川省人民医院 MDT remote consultation system and method based on scene fusion and mr

Citations (1)

Publication number Priority date Publication date Assignee Title
CN109192030A (en) * 2018-09-26 2019-01-11 郑州大学第附属医院 True hysteroscope Minimally Invasive Surgery simulation training system and method based on virtual reality

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP4587738B2 (en) * 2003-08-25 2010-11-24 ソニー株式会社 Robot apparatus and robot posture control method
CN104679918A (en) * 2013-11-26 2015-06-03 中国科学院深圳先进技术研究院 Method, device and system for simulating haptic interaction between bone drill and bone
CN104537939B (en) * 2014-12-31 2017-05-03 佛山市中医院 Virtual method and device for pedicle screw implantation
ES2945711T3 (en) * 2017-08-15 2023-07-06 Holo Surgical Inc Surgical navigation system to provide an augmented reality image during the operation
US11574561B2 (en) * 2018-05-18 2023-02-07 Marion Surgical Virtual reality surgical system including a surgical tool assembly with haptic feedback

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN109192030A (en) * 2018-09-26 2019-01-11 郑州大学第附属医院 True hysteroscope Minimally Invasive Surgery simulation training system and method based on virtual reality

Also Published As

Publication number Publication date
CN112906205A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN112906205B (en) Virtual learning method for total hip replacement surgery
US10943505B2 (en) Hybrid image/scene renderer with hands free control
Tendick et al. A virtual environment testbed for training laparoscopic surgical skills
CN102207997B (en) Robot minimally invasive surgery simulation system based on force feedback
Rosen et al. Evolution of virtual reality [Medicine]
US20100167250A1 (en) Surgical training simulator having multiple tracking systems
Ecke et al. Virtual reality: preparation and execution of sinus surgery
Heredia‐Pérez et al. Virtual reality simulation of robotic transsphenoidal brain tumor resection: Evaluating dynamic motion scaling in a master‐slave system
WO1996016389A1 (en) Medical procedure simulator
CN113379929B (en) A Virtual Reality Solution for Bone Tissue Repair Based on Physical Simulation
CN1685381B (en) Device and method for generating a virtual anatomic environment
CN114333482B (en) Virtual anatomy teaching system based on mixed reality technology
CN115607294A (en) Surgical robot system and data processing method
CN111276022A (en) Gastroscope simulation operation system based on VR technique
Rosen et al. 14 Virtual Reality and Surgery
Zhang et al. Development of a virtual training system for master-slave hip replacement surgery
Rasool et al. Image-driven haptic simulation of arthroscopic surgery
Nakao et al. Planning and training of minimally invasive surgery by integrating soft tissue cuts with surgical views reproduction
Salb et al. Preoperative planning and training simulation for risk reducing surgery
Liu et al. Application of Human-Computer Interaction Technology in Laparoscopic Surgery
Moghani et al. SuFIA-BC: Generating High Quality Demonstration Data for Visuomotor Policy Learning in Surgical Subtasks
Li et al. Key technology of Laparoscopic Surgery Training System Based on Virtual Reality Technology
Mukai et al. Development of a PC‐based real‐time surgical simulator
Chacko et al. Virtual surgery on geometric model of real human organ data
Mukit Intraoperative Virtual Surgical Planning Through Medical Mixed Reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant