
CN119367059A - Surgical robot system, surgical robot drilling guidance method and device - Google Patents

Surgical robot system, surgical robot drilling guidance method and device

Info

Publication number
CN119367059A
Authority
CN
China
Prior art keywords
hole
lesion area
arm
surgical robot
image data
Prior art date
Legal status
Pending
Application number
CN202310926427.3A
Other languages
Chinese (zh)
Inventor
张阳
Current Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Original Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority to CN202310926427.3A
Publication of CN119367059A

Landscapes

  • Manipulator (AREA)

Abstract


The present invention relates to a surgical robot system and a surgical robot drilling guidance method and device. First image data of a lesion area is acquired, and a three-dimensional model of the lesion area is built from the first image data; second image data of the scanning device's working environment is acquired, and the hole position of the instrument-holding arm in the vision device coordinate system is determined from the three-dimensional model of the lesion area and the second image data. Throughout the process, the doctor can directly observe the drilling position of the instrument-holding arm and the lesion area through the vision device, so that accurate drilling guidance by the surgical robot is achieved.

Description

Surgical robot system, surgical robot drilling guidance method and device
Technical Field
The present application relates to the field of robot-assisted surgery, and in particular to a surgical robot system, a surgical robot drilling guidance method, a surgical robot drilling guidance device, a storage medium, and a computer program product.
Background
With the development of science and technology, minimally invasive surgical robots are increasingly widely used in minimally invasive surgery. A surgical robot is designed to perform complex surgical operations accurately and in a minimally invasive manner: surgical instruments enter the body cavity through specific holes in the body surface to approach the lesion, and the doctor controls the surgical robot to complete the operation. The hole sites on the patient's body surface through which the surgical instruments pass are usually planned preoperatively by the doctor according to the lesion, the type of operation, and other factors, and the holes are made based on the doctor's experience.
However, it is difficult for a doctor's experience to account for the configuration and workspace of a minimally invasive surgical robot, so conventional surgical robot systems cannot provide accurate drilling guidance.
Disclosure of Invention
Based on this, in view of the technical problem that conventional surgical robot systems cannot provide accurate drilling guidance, it is necessary to provide a surgical robot system, a surgical robot drilling guidance method, a device, a storage medium, and a computer program product capable of providing accurate drilling guidance.
In a first aspect, the present application provides a surgical robot system comprising an endoscope-holding arm, and:
a scanning unit, configured to acquire first image data of a lesion area;
a modeling unit, configured to build a three-dimensional model of the lesion area from the first image data;
a vision unit, configured to acquire second image data of the working environment of a scanning device, the scanning device being used to generate the first image data of the lesion area by scanning;
and a processing unit, configured to determine the position of a first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data, where the first hole site is the hole site of the endoscope-holding arm.
In one embodiment, the determining, by the processing unit, of the position of the first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data includes:
acquiring a first pose relationship between the scanning device and the lesion area;
acquiring a second pose relationship between the scanning device and the vision device;
obtaining the position information of the three-dimensional model of the lesion area in the vision device coordinate system according to the first pose relationship and the second pose relationship;
and determining the position of the first hole site in the vision device coordinate system according to the position information of the three-dimensional model of the lesion area in the vision device coordinate system.
In one embodiment, the processing unit is further configured to obtain a pre-drilling position for the lesion area, and the determining of the position of the first hole site in the vision device coordinate system according to the position information of the three-dimensional model of the lesion area in the vision device coordinate system includes:
correcting the pre-drilling position according to the position of the three-dimensional model of the lesion area in the vision device coordinate system, to obtain the position of the first hole site in the vision device coordinate system.
In one embodiment, the surgical robot system further includes an instrument-holding arm, and the processing unit is further configured to obtain the position of a second hole site according to the position of the first hole site, where the second hole site is the hole site of the instrument-holding arm.
In one embodiment, the obtaining of the position of the second hole site according to the position of the first hole site includes:
acquiring a preset distance between the first hole site and the second hole site;
and obtaining the position of the second hole site according to the preset distance and the position of the first hole site.
In one embodiment, the obtaining of the position of the second hole site according to the preset distance and the position of the first hole site includes:
determining the end pose of the instrument-holding arm according to the preset distance, the position of the first hole site, and the lesion area;
performing an inverse kinematics solution for the instrument-holding arm according to the end pose of the instrument-holding arm;
and if the inverse kinematics has a solution, determining the contact point between the body surface and the line connecting the end pose of the instrument-holding arm and the lesion area as the position of the second hole site.
In one embodiment, the determining of the end pose of the instrument-holding arm according to the preset distance, the position of the first hole site, and the lesion area includes:
determining the axis of the instrument-holding arm's cannula according to the preset distance, the position of the first hole site, and the lesion area;
selecting, for the rotational degree of freedom of the cannula about the cannula axis, the middle position of the joint stroke;
and determining the end pose of the instrument-holding arm according to the middle position of the joint stroke.
In one embodiment, there are a plurality of preset distances, and the obtaining of the position of the second hole site according to the preset distance and the position of the first hole site includes:
determining the positions of a plurality of initial second hole sites according to the plurality of preset distances and the position of the first hole site;
calculating, for the plurality of initial second hole sites, the arm spacing between the endoscope-holding arm and the instrument-holding arm;
and determining the hole site with the largest arm spacing between the endoscope-holding arm and the instrument-holding arm as the second hole site.
In one embodiment, the calculating, for the plurality of initial second hole sites, of the arm spacing between the endoscope-holding arm and the instrument-holding arm includes:
performing an inverse kinematics solution for the instrument-holding arm according to its end pose, to obtain the target pose of the instrument-holding arm;
and calculating the arm spacing between the endoscope-holding arm and the instrument-holding arm at the plurality of initial second hole sites according to the target pose of the instrument-holding arm and the pose of the endoscope-holding arm.
In a second aspect, the present application also provides a surgical robot drilling guidance method, the method comprising:
acquiring first image data of a lesion area;
building a three-dimensional model of the lesion area from the first image data;
acquiring second image data of the working environment of a scanning device, where the scanning device is used to generate the first image data of the lesion area by scanning;
and determining the position of a first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data, where the first hole site is the hole site of the endoscope-holding arm.
In one embodiment, the determining of the position of the first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data includes:
acquiring a first pose relationship between the scanning device and the lesion area;
acquiring a second pose relationship between the scanning device and the vision device;
obtaining the position information of the three-dimensional model of the lesion area in the vision device coordinate system according to the first pose relationship and the second pose relationship;
and determining the position of the first hole site in the vision device coordinate system according to the position information of the three-dimensional model of the lesion area in the vision device coordinate system.
In one embodiment, the surgical robot drilling guidance method further includes:
acquiring a preset distance between the first hole site and a second hole site, where the second hole site is the hole site of the instrument-holding arm;
determining the end pose of the instrument-holding arm according to the preset distance and the lesion area;
performing an inverse kinematics solution for the instrument-holding arm according to its end pose;
and if the inverse kinematics has a solution, determining the contact point between the body surface and the line connecting the end pose of the instrument-holding arm and the lesion area as the position of the second hole site.
In a third aspect, the present application also provides a surgical robot drilling guidance device, which comprises a scanning assembly, a vision assembly, and a control assembly;
the scanning assembly generates first image data of a lesion area and sends the first image data to the control assembly;
and the control assembly generates the position of a first hole site in the vision assembly using the surgical robot drilling guidance method described above.
In one embodiment, the scanning assembly comprises an ultrasound probe, a visual target is arranged on the ultrasound probe, and the visual target emits a light-spot signal that illuminates the lesion area;
and the vision assembly receives the light-spot signal on the lesion area and sends the position information of the light-spot signal to the control assembly.
In one embodiment, the vision assembly comprises:
a camera element, configured to collect the light-spot signal on the lesion area and send the position information of the light-spot signal to the control assembly;
an angle adjustment element, which carries the camera element and is connected to the control assembly, the control assembly controlling the angle adjustment element to adjust the image tracking angle of the camera element;
and a mixed reality element, which displays the hole site in the vision assembly coordinate system pushed to it by the control assembly.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps:
acquiring first image data of a lesion area;
building a three-dimensional model of the lesion area from the first image data;
acquiring second image data of the working environment of a scanning device, where the scanning device is used to generate the first image data of the lesion area by scanning;
and determining the position of a first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data, where the first hole site is the hole site of the endoscope-holding arm.
In a fifth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the following steps:
acquiring first image data of a lesion area;
building a three-dimensional model of the lesion area from the first image data;
acquiring second image data of the working environment of a scanning device, where the scanning device is used to generate the first image data of the lesion area by scanning;
and determining the position of a first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data, where the first hole site is the hole site of the endoscope-holding arm.
According to the surgical robot system, the surgical robot drilling guidance method, the device, the storage medium, and the computer program product described above, first image data of a lesion area is acquired, a three-dimensional model of the lesion area is built from the first image data, second image data of the working environment of the scanning device is acquired, and the position of the hole site of the instrument-holding arm in the vision device coordinate system is determined according to the three-dimensional model of the lesion area and the second image data. Throughout the process, the doctor can directly observe the drilling position of the instrument-holding arm and the lesion area through the vision device, so that accurate drilling guidance by the surgical robot is achieved.
Drawings
FIG. 1 is a schematic view of an application scenario of a surgical robotic system of the present application in one embodiment;
FIG. 2 is a block diagram of a surgical robotic system in one embodiment;
FIG. 3 is a schematic diagram of the relative relationship among the scanning device, the vision device, and the lesion area in one embodiment;
FIG. 4 is a schematic illustration of the relative positions of the instrument-holding arm and the endoscope-holding arm in a surgical robot system according to one embodiment;
FIG. 5 is a schematic diagram of the relative positions of a lesion point and the endoscope-holding arm according to one embodiment;
FIG. 6 is a schematic diagram of the pre-drilling position determination process in one embodiment;
FIG. 7 is a flow diagram of a surgical robot drilling guidance method in one embodiment;
FIG. 8 is a schematic structural view of a surgical robot drilling guidance device in one embodiment;
FIG. 9 is a schematic diagram of a scanning assembly and a vision assembly according to one embodiment.
Detailed Description
In order that the above objects, features, and advantages of the invention may be readily understood, specific embodiments of the invention are described in detail below with reference to the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, the invention may be embodied in many forms other than those described herein, and those skilled in the art may make similar modifications without departing from the spirit of the invention; the invention is therefore not limited to the specific embodiments disclosed below.
In one embodiment, as shown in FIG. 1, a surgical robot system is provided that includes an endoscope-holding arm. This embodiment is described by taking as an example the use of the surgical robot system to assist a doctor in drilling-site guidance. In this embodiment, the surgical robot system further comprises a scanning unit, a modeling unit, a vision unit, and a processing unit, which may be integrated in one or more master consoles. In practical applications, the surgical robot system further comprises a scanning device and a vision device, and the one or more master consoles integrating the above units are connected to the scanning device and the vision device respectively. The patient lies on the operating table; the scanning device collects medical images of the patient and sends them to the master console; the master console processes them to obtain the hole site of the endoscope-holding arm in the vision device coordinate system and sends the data to the vision device; and the doctor wears the vision device, which displays the hole site of the endoscope-holding arm. In this way, the whole surgical robot system can provide accurate drilling guidance.
Specifically, as shown in FIG. 2, in one embodiment, a surgical robot system includes:
The scanning unit 200 is configured to acquire first image data of a lesion area.
The scanning device scans and images a scan object to obtain medical image data, and the scanning unit 200 analyzes the medical image data to obtain the first image data of the lesion area. Specifically, the scan object is the subject to be drilled by the surgical robot. Taking the subject being a patient as an example, the scan object is the surgical patient, and a medical image of the patient is acquired by the scanning device. The lesion area refers to the area of the subject's body where a lesion has occurred; for example, the location of a lesion in the kidney or gall bladder in the subject's abdominal cavity may be scanned. Further, the scanning unit 200 may first crop an image of a preset scan region from the medical image data sent by the scanning device, where the preset scan region refers to the region corresponding to the current operation. Specifically, the preset scan region may be a scan region pre-marked on the subject, such as the abdomen, chest, or back of the subject, for example the image data of the patient's abdomen.
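The cropping step above can be sketched as simple array slicing; this is an illustrative assumption, not the patent's implementation, and the function name, array layout (depth, height, width), and region bounds are invented:

```python
import numpy as np

def crop_preset_region(volume, z_range, y_range, x_range):
    """Crop a pre-marked scan region (e.g. the abdomen) out of a
    medical image volume stored as a (depth, height, width) array."""
    z0, z1 = z_range
    y0, y1 = y_range
    x0, x1 = x_range
    return volume[z0:z1, y0:y1, x0:x1]

# Toy volume standing in for scanner output.
volume = np.arange(4 * 6 * 6, dtype=float).reshape(4, 6, 6)
abdomen = crop_preset_region(volume, (1, 3), (2, 5), (0, 4))
print(abdomen.shape)  # (2, 3, 4)
```

In a real system the region bounds would come from the pre-marked scan region rather than hard-coded indices.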
The modeling unit 400 is configured to build a three-dimensional model of the lesion area from the first image data.
Taking the first image data as the base data, a virtual three-dimensional model is constructed by virtual three-dimensional modeling, and the corresponding lesion area within the patient's body is identified on the virtual three-dimensional model. In this way, subsequent calculation and planning of drilling positions relative to the lesion area in the virtual three-dimensional model is facilitated.
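One minimal way to identify a lesion region in volumetric image data before building the model is intensity thresholding followed by a centroid computation. This is a hedged sketch; the threshold value, array shapes, and function name are illustrative assumptions, not the patent's method:

```python
import numpy as np

def lesion_mask_and_centroid(volume, threshold):
    """Return a binary lesion mask and the centroid (in voxel
    coordinates) of all voxels whose intensity exceeds `threshold`."""
    mask = volume > threshold
    coords = np.argwhere(mask)       # (N, 3) voxel indices of lesion voxels
    centroid = coords.mean(axis=0)   # mean (z, y, x) index
    return mask, centroid

# Synthetic volume with a bright 2x2x2 "lesion" in one corner.
vol = np.zeros((8, 8, 8))
vol[2:4, 2:4, 2:4] = 1.0
mask, c = lesion_mask_and_centroid(vol, 0.5)
print(mask.sum(), c)  # 8 lesion voxels, centroid at (2.5, 2.5, 2.5)
```

A production system would segment the lesion more robustly and mesh the mask into a surface model, but the centroid already gives a lesion point for drilling-site planning.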
The vision unit 600 is configured to acquire second image data of the working environment of the scanning device, where the scanning device is the device that scans to generate the first image data of the lesion area.
The scanning device working environment refers to the scene environment in which the scanning device scans the patient to generate the medical image data. Specifically, as mentioned above, the scanning device scans the patient to generate a medical image; at the same time, image data of the scanning device during the scanning operation is acquired, so that the pose relationship with the scanning device can be accurately located later. This facilitates accurately determining the relative pose relationship between the vision device and the scanning device in subsequent processing, so that the accurate hole site of the endoscope-holding arm can be displayed in the vision device.
The processing unit 800 is configured to determine the position of a first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data, where the first hole site is the hole site of the endoscope-holding arm.
The vision device can assist the doctor in directly observing the surrounding environment while presenting a specific drilling position on it for the doctor to view. For example, the vision device may be a mixed reality helmet (MR helmet for short) worn by the doctor. Specifically, the hole site of the endoscope-holding arm in the surgical robot system can be determined based on the three-dimensional model of the lesion area; at this point, the hole site position is expressed in the scanning device coordinate system. Finally, in order to accurately map the hole site into the vision device coordinate system, the relative pose relationship between the scanning device and the vision device is further determined, that is, the point mapping relationship between the scanning device coordinate system and the vision device coordinate system is made explicit. On the basis of this pose relationship, the position of the endoscope-holding arm's hole site determined from the three-dimensional model of the lesion area is mapped into the vision device coordinate system, so as to determine the hole site of the endoscope-holding arm in the vision device coordinate system.
Further, when the hole site of the endoscope-holding arm is determined based on the three-dimensional model of the lesion area, the hole site is calculated in advance in the virtual three-dimensional model based on the lesion position in the patient's body and in combination with the configuration of the surgical robot system. For example, if the lesion is located on an organ in the abdominal region, the drilling position corresponds to the body surface of the abdomen. After the hole site of the endoscope-holding arm in the vision device coordinate system is determined, the data can be pushed to the vision device, so that a doctor wearing the vision device sees the accurate hole site of the endoscope-holding arm in the device's display interface; that is, the doctor can directly observe the drilling position, and accurate drilling guidance is achieved.
According to the surgical robot system described above, first image data of a lesion area is acquired, a three-dimensional model of the lesion area is built from the first image data, second image data of the working environment of the scanning device is acquired, and the position of the hole site of the instrument-holding arm in the vision device coordinate system is determined according to the three-dimensional model of the lesion area and the second image data. Throughout the process, the doctor can directly observe the drilling position of the instrument-holding arm and the lesion area through the vision device, so that accurate drilling guidance by the surgical robot is achieved.
In one embodiment, the processing unit 800 determines the position of the first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data as follows: a first pose relationship between the scanning device and the lesion area is obtained; a second pose relationship between the scanning device and the vision device is obtained; the position information of the three-dimensional model of the lesion area in the vision device coordinate system is obtained according to the first pose relationship and the second pose relationship; and the position of the first hole site in the vision device coordinate system is determined according to that position information.
The scanning device is a device that scans the patient and generates an image of the preset scan region, and may be any of an ultrasound imaging device, an X-ray device, or a magnetic resonance imaging device (MRI device for short). In this embodiment, the scanning device is an ultrasound imaging device. The first pose relationship between the scanning device and the lesion area refers to the relationship between the relative position and posture of the scanning device and the lesion area of the scan object, for example between the ultrasound probe of the ultrasound imaging device and the patient's lesion area.
The second pose relationship between the scanning device and the vision device refers to the relationship between the relative position and posture of the structure of the scanning device that scans the pre-marked scan region on the patient's body and the vision device, for example the relationship between the position and posture of the ultrasound probe of the ultrasound imaging device and the MR helmet. By combining the first pose relationship and the second pose relationship, the position of the endoscope-holding arm's hole site in the three-dimensional model of the lesion area can be mapped into the vision device coordinate system, thereby obtaining the hole site of the endoscope-holding arm in the vision device coordinate system.
For example, as can be understood with reference to FIG. 3, the processing unit 800 may obtain the first pose relationship from the relative pose between the ultrasound probe on the ultrasound imaging device and the patient's lesion area, i.e., the relative pose relationship between the probe and the lesion area. The processing unit 800 may further obtain the second pose relationship from the relative pose between the probe and the MR helmet, i.e., the pose relationship between the probe and the MR helmet. The processing unit 800 may then map the three-dimensional model of the lesion area into the vision device coordinate system by combining the first and second pose relationships, and determine the hole site of the endoscope-holding arm in the vision device coordinate system based on the hole site determined in the three-dimensional model of the lesion area.
In this embodiment, the pose relationship between the lesion area and the MR helmet is obtained by chaining the pose relationships among the lesion area, the scanning device, and the vision device, so that the lesion area position information is converted into position information in the vision device coordinate system. A pose includes both position information and posture information, so a pose relationship includes both a relationship between positions and a relationship between postures.
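The chaining of the two pose relationships can be sketched with 4x4 homogeneous transforms; the numeric poses below are invented for illustration only. Composing T_helmet_probe (second pose relationship) with T_probe_lesion (first pose relationship) expresses the lesion frame, and hence any planned hole site, in the MR helmet coordinate system:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply(T, p):
    """Map a 3-D point through a homogeneous transform."""
    return (T @ np.append(p, 1.0))[:3]

# Second pose relationship: probe frame as seen from the MR helmet (invented).
T_helmet_probe = make_pose(np.eye(3), [0.0, 0.5, 1.0])
# First pose relationship: lesion frame as seen from the probe (invented).
T_probe_lesion = make_pose(np.eye(3), [0.1, 0.0, 0.2])

# Chain the relationships: lesion frame expressed in helmet coordinates.
T_helmet_lesion = T_helmet_probe @ T_probe_lesion

hole_in_lesion_frame = np.array([0.0, 0.0, 0.0])
hole_in_helmet_frame = apply(T_helmet_lesion, hole_in_lesion_frame)
print(hole_in_helmet_frame)  # [0.1 0.5 1.2]
```

Because each pose carries both position and orientation, the same composition maps the full hole-site pose, not just its position, into the vision device frame.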
It should be noted that, in this embodiment, the vision device may include a depth camera and a multi-degree-of-freedom pan-tilt. The depth camera can detect three-dimensional information of the environment, and the pan-tilt can adjust the posture of the depth camera; for example, the angle of the depth camera can be adjusted by rotating the pan-tilt, thereby adjusting the position of the depth camera's field of view. Since the depth camera can only detect three-dimensional information of the surrounding environment and cannot detect intra-cavity information of the patient, the position information of the probe of the scanning device is obtained by the depth camera, the position information of the lesion area is obtained by the probe, and the pose relationship between the depth camera and the lesion area can then be calculated using the probe of the scanning device as a reference.
In one embodiment, the processing unit 800 is further configured to obtain a pre-drilling position for the lesion area, and the determining of the position of the first hole site in the vision device coordinate system according to the position information of the three-dimensional model of the lesion area in the vision device coordinate system includes correcting the pre-drilling position according to the position of the three-dimensional model of the lesion area in the vision device coordinate system, to obtain the position of the first hole site in the vision device coordinate system.
The pre-drilling position for the lesion area refers to the drilling position of the endoscope-holding arm determined in advance, before the operation, based on the three-dimensional model of the lesion area. In practical applications, the drilling position is determined preliminarily by preoperative examination; during the actual operation, the predetermined drilling position is corrected in order to further improve surgical precision, and after the correction in the three-dimensional model of the lesion area is completed, the position of the first hole site in the vision device coordinate system is obtained from the position of the three-dimensional model of the lesion area in the vision device coordinate system.
In one embodiment, the surgical robot system further comprises an instrument-holding arm, and the processing unit is further configured to obtain the position of a second hole site according to the position of the first hole site, where the second hole site is the hole site of the instrument-holding arm.
As shown in FIG. 4, the surgical robot system includes an endoscope-holding arm 420 and an instrument-holding arm 410: the endoscope-holding arm 420 is fixedly connected with an endoscope 422 and drives the endoscope 422 to move, and the instrument-holding arm 410 is fixedly connected with a surgical instrument 412 and drives the surgical instrument 412 to move. After the hole site of the endoscope-holding arm 420 is determined, the hole site of the instrument-holding arm 410 needs to be further determined based on it. Specifically, since a relative distance is generally maintained between the endoscope-holding arm 420 and the instrument-holding arm 410, the position of the instrument-holding arm 410's hole site can be determined from this relative distance once the endoscope-holding arm 420's hole site is obtained.
In one embodiment, the obtaining the position of the second hole site according to the position of the first hole site includes obtaining a preset distance between the first hole site and the second hole site, and obtaining the position of the second hole site according to the preset distance and the position of the first hole site.
The preset distance is a predetermined distance, which may be derived from prior experience. Specifically, the minimum and maximum punch distances between the mechanical holding arm and the lens holding arm can be obtained from prior experience, and a suitable preset distance within that range is then determined based on the motion trajectories and size parameters of the mechanical holding arm and the lens holding arm, the patient's size parameters, and the like. More specifically, in a typical application scenario, the minimum punch distance between the mechanical holding arm and the lens holding arm may be 6 cm, and the maximum punch distance may be 10 cm.
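A minimal sketch of enforcing the prior-experience bounds, using the 6 cm / 10 cm example values from the text; the function name and default arguments are hypothetical:

```python
def clamp_preset_distance(d_desired, d_min=6.0, d_max=10.0):
    """Clamp a desired inter-arm punch distance (cm) to the prior-knowledge
    range; 6 and 10 cm are the example bounds mentioned in the text."""
    return max(d_min, min(d_max, d_desired))

print(clamp_preset_distance(4.5))   # -> 6.0
print(clamp_preset_distance(8.0))   # -> 8.0
print(clamp_preset_distance(12.3))  # -> 10.0
```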
In one embodiment, obtaining the position of the second hole site according to the preset distance and the position of the first hole site comprises: determining the end pose of the mechanical holding arm according to the preset distance, the position of the first hole site, and the lesion area; performing an inverse kinematics solution for the mechanical holding arm according to its end pose; and, if the inverse kinematics has a solution, determining the contact point between the body surface and the line connecting the end pose of the mechanical holding arm with the lesion area as the position of the second hole site.
After the lesion area is determined, i.e. the lesion point is determined, the surgical instrument fixed to the mechanical holding arm must point at the lesion point, and the distance between the mechanical holding arm and the lens holding arm is fixed by the preset distance, so the end pose of the mechanical holding arm can be determined based on the position of the first hole site. On this basis, an inverse kinematics solution is performed for the mechanical holding arm according to its end pose. Specifically, the inverse kinematics solution here is one that satisfies, throughout the motion of the mechanical holding arm, both the maximum-arm-spacing constraint between the telecentric mechanisms of the holding arms and the preset distance between the mechanical holding arm and the lens holding arm. If the inverse kinematics has a solution, a suitable set of target joint angles for the mechanical holding arm can be computed, from which the arm-spacing value of its telecentric mechanism is obtained. In this case, the line connecting the end pose of the mechanical holding arm with the lesion area is drawn, and its contact point with the patient's body surface is the hole site of the mechanical holding arm.
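The feasibility check and body-surface intersection described above can be sketched as follows, under the simplifying assumption that the body surface near the hole site is locally planar; all names and the plane model are illustrative, not the patent's method:

```python
import numpy as np

def second_hole_position(p_end, p_lesion, surface_point, surface_normal, ik_solvable):
    """If inverse kinematics has a solution, intersect the line from the
    instrument-arm end pose to the lesion with a locally planar body-surface
    patch (plane through surface_point with normal surface_normal) and
    return the contact point; otherwise return None."""
    if not ik_solvable:
        return None
    p_end = np.asarray(p_end, dtype=float)
    p_lesion = np.asarray(p_lesion, dtype=float)
    n = np.asarray(surface_normal, dtype=float)
    d = p_lesion - p_end                     # direction of the connecting line
    denom = n @ d
    if abs(denom) < 1e-9:                    # line parallel to the surface patch
        return None
    t = n @ (np.asarray(surface_point, dtype=float) - p_end)
    return p_end + (t / denom) * d

# End effector above the abdomen, lesion below the surface plane z = 0
hole = second_hole_position([0, 0, 10], [4, 0, -10], [0, 0, 0], [0, 0, 1], True)
print(hole)  # -> [2. 0. 0.]
```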
In one embodiment, determining the end pose of the mechanical holding arm according to the preset distance, the position of the first hole site, and the lesion area includes: determining the axis of the mechanical-arm sleeve according to the preset distance, the position of the first hole site, and the lesion area; selecting, for the degree of freedom of rotation of the sleeve about its own axis, the mid-point of the joint stroke; and determining the end pose of the mechanical holding arm according to that mid-point of the joint stroke.
With continued reference to fig. 5, note that if the preoperative positioning step can ensure that the axis of the mechanical holding arm (the trocar sleeve) points at the lesion point, then the end instrument already points at the lesion point, and the process of adjusting the pose of the telecentric mechanism of the mechanical holding arm is simplified. Following this idea, in this embodiment, once the preset distance between the mechanical holding arm and the lens holding arm, the hole site of the lens holding arm, and the axis of the mechanical-arm sleeve are determined, the mid-point of the joint stroke is selected for the degree of freedom of rotation of the sleeve about its own axis, and the end pose of the mechanical holding arm can then be determined.
In one embodiment, there are a plurality of preset distances, and obtaining the position of the second hole site according to the preset distances and the position of the first hole site comprises: determining the positions of a plurality of initial second hole sites according to the plurality of preset distances and the position of the first hole site; calculating the arm spacing between the lens holding arm and the mechanical holding arm for each of the initial second hole sites; and determining the hole site with the largest arm spacing between the lens holding arm and the mechanical holding arm as the second hole site.
The preset distance between the mechanical holding arm and the lens holding arm can be set to several values according to the actual situation. Given a plurality of preset distances, the positions of several candidate second hole sites can be determined from them and the position of the first hole site, yielding a plurality of initial second hole sites. From these, the hole site with the largest arm spacing between the lens holding arm and the mechanical holding arm is selected as the final second hole site. Specifically, the maximum arm spacing means that the probability of collision between the mechanical holding arm and the lens holding arm is minimal, so the second hole site thus determined is the optimal one. Furthermore, the preset distances can be searched iteratively with equal step spacing: over k iterations, with Δp as the step increment of the preset distance, the hole site corresponding to the maximum arm spacing between the lens holding arm and the mechanical holding arm is selected as the second hole site.
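The equal-step iterative search over preset distances can be sketched as follows; the `arm_spacing` and `candidate_hole` callbacks stand in for the kinematic computations described in the text and are purely illustrative:

```python
def search_second_hole(first_hole, d_min, d_max, k, arm_spacing, candidate_hole):
    """Iterate preset distances in k equal steps of size dp between d_min and
    d_max and keep the candidate second hole site with the largest
    scope-arm / instrument-arm spacing."""
    dp = (d_max - d_min) / max(k - 1, 1)        # step increment (the text's Δp)
    best_hole, best_spacing = None, float("-inf")
    for i in range(k):
        d = d_min + i * dp
        hole = candidate_hole(first_hole, d)    # second-hole candidate at distance d
        s = arm_spacing(hole)                   # spacing between the two arms there
        if s > best_spacing:
            best_hole, best_spacing = hole, s
    return best_hole, best_spacing

# Toy model: spacing peaks at distance 8 within the [6, 10] range
hole, spacing = search_second_hole(
    (0.0, 0.0), 6.0, 10.0, 5,
    arm_spacing=lambda h: -(h[0] - 8.0) ** 2,
    candidate_hole=lambda first, d: (d, 0.0),
)
print(hole)  # -> (8.0, 0.0)
```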
In one embodiment, calculating the arm spacing between the lens holding arm and the mechanical holding arm for the plurality of initial second hole sites comprises: performing an inverse kinematics solution for the mechanical holding arm according to its end pose to obtain the target pose of the mechanical holding arm; and calculating, according to the target pose of the mechanical holding arm and the pose of the lens holding arm, the arm spacing between the lens holding arm and the mechanical holding arm for each of the initial second hole sites.
An inverse kinematics solution is performed for the mechanical holding arm based on its end pose, and the target joint angles of the mechanical holding arm are computed through inverse kinematics, giving the target pose of the mechanical holding arm. To then obtain the arm spacing between the lens holding arm and the mechanical holding arm, the arm spacing is calculated for each of the initial second hole sites according to the target pose of the mechanical holding arm, the pose of the lens holding arm, and the respective initial second hole site.
In order to further explain the technical principle of the surgical robot system and the working process thereof in detail, a description will be given below of a determination process of a preset punching position.
In one embodiment, the preset punch position is obtained by: determining a surgical area according to the position information of the lesion area; determining a candidate punch area according to the position and size of the surgical area and the structural parameters of the robotic arms in the surgical robot; screening a feasible punch area out of the candidate punch area according to the first image information; and determining the preset punch position within the feasible punch area according to the structural parameters of the robotic arms in the surgical robot.
The lesion area position information is the specific position of the lesion point in the patient; for example, the lesion point may be the gallbladder, the stomach, etc. in the patient's abdominal cavity. From the specific location of the lesion point, the general area of the procedure, i.e. the approximate location and size of the surgical area, can be determined. A candidate punch area can then be determined from the approximate location and size of the surgical area, combined with the lengths of the robotic arms in the surgical robot and the spacing between them. The areas where punching is possible, i.e. the feasible punch areas, are then screened out of the candidate punch area, and a preset punch position is determined in the feasible punch area based on the lengths of the robotic arms and the spacing between them. The robotic arms comprise a lens holding arm for holding the endoscope and mechanical holding arms for holding surgical instruments; typically, there is one lens holding arm and three mechanical holding arms. Through the processing flow of this embodiment, the accuracy of the preset punch position can be improved, thereby reducing surgical risk.
In one embodiment, selecting a feasible puncturing area from the candidate puncturing area according to the first image information includes:
Identifying a risk perforation location based on the first image information, wherein the risk perforation location includes a perforation location through a bone, a blood vessel, or a nerve; and eliminating the risk punching positions from the candidate punching areas, and screening out feasible punching areas.
Since there are structures such as blood vessels, nerves, and bones in the human body, punching at positions containing these structures carries surgical risk, so such positions must be excluded. In this embodiment, the positions containing blood vessels, nerves, or bones are identified from the first image information and removed, yielding the feasible punch area. The risk of perforation is thus reduced, and the reliability and safety of the operation are ensured.
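A minimal sketch of the risk-position screening, assuming the first image data has already been segmented into per-position tissue labels; the label names and data shapes are hypothetical:

```python
def feasible_punch_area(candidates, risk_labels):
    """Remove candidate punch positions whose image label marks bone, vessel,
    or nerve tissue. risk_labels maps position -> tissue label derived from
    the first image data; unlabeled positions are treated as safe."""
    risky = {"bone", "vessel", "nerve"}
    return [p for p in candidates if risk_labels.get(p, "safe") not in risky]

candidates = [(0, 0), (1, 0), (2, 0)]
labels = {(1, 0): "vessel"}                 # (1, 0) passes through a blood vessel
print(feasible_punch_area(candidates, labels))  # -> [(0, 0), (2, 0)]
```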
In one embodiment, determining the preset punch position of the lesion area in the feasible punch area based on the structural parameters of the robotic arm comprises:
obtaining the length parameters of the lens holding arm based on the structural parameters of the robotic arm; screening, within the feasible punch area, a feasible punch area for the lens holding arm according to its length parameters and a preset depth-of-field constraint; selecting a punch point for the lens holding arm and punch points for the mechanical holding arms within the feasible punch area of the lens holding arm; and combining the lens-holding-arm punch point and the mechanical-holding-arm punch points to obtain the pre-punch position of the lesion area.
Since the accuracy of the punch position of the lens holding arm is lower than that of the mechanical holding arm, the punch position of the lens holding arm can be determined first. In this application, when the endoscope is placed in the abdominal cavity, objects within its depth of field can be imaged clearly; this ensures the punching precision of the lens holding arm, which in turn ensures that a clear view of the lesion can be obtained through the endoscope during the operation, so that the procedure proceeds smoothly. Further, the point within the feasible punch area of the lens holding arm that is closest to the lesion area may be selected as the punch point of the lens holding arm (endoscope sleeve). By selecting the point closest to the lesion area, the lesion can be observed well once the endoscope is placed into the abdominal cavity.
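The closest-point selection can be sketched in a few lines; the representation of punch points and the lesion as coordinate tuples is an assumption:

```python
import math

def scope_arm_punch_point(feasible_points, lesion):
    """Pick the feasible punch point closest to the lesion, so the endoscope,
    once inserted, keeps the lesion well inside its depth of field."""
    return min(feasible_points, key=lambda p: math.dist(p, lesion))

print(scope_arm_punch_point([(0, 0, 0), (5, 0, 0), (2, 1, 0)], (2, 0, 0)))
# -> (2, 1, 0)
```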
In one embodiment, as shown in fig. 6, the present application further provides another method for determining a punching position of a manipulator, which specifically includes the following steps:
First image data of the lesion area is acquired, and the surgical area is determined according to the position information of the lesion area. A candidate punch area is determined according to the position and size of the surgical area and the structural parameters of the robotic arms in the surgical robot. Based on the first image information, risk punch positions are identified, where a risk punch position is a punch position passing through a bone, a blood vessel, or a nerve. The risk punch positions are removed from the candidate punch area, and the feasible punch area is screened out. The length parameters of the lens holding arm are obtained from the structural parameters of the robotic arms in the surgical robot, and the feasible punch area of the lens holding arm is screened according to these length parameters and the preset depth-of-field constraint. A lens-holding-arm punch point and mechanical-holding-arm punch points are selected within the feasible punch area of the lens holding arm and combined to obtain the preset punch position. According to the converted lesion area position information and the preset punch position in the lesion area, the punch position in the vision device coordinate system is obtained and pushed to the vision device.
Based on the same inventive concept, as shown in fig. 7, the embodiment of the application further provides a surgical robot punching guiding method, which comprises the following steps:
s200, acquiring first image data of a focus area;
s400, establishing a three-dimensional model of a focus area according to the first image data;
S600, acquiring second image data of a working environment of a scanning device, wherein the scanning device is used for scanning and generating first image data of a focus area;
S800, determining the position of a first hole site under a coordinate system of the vision equipment according to the three-dimensional model of the focus area and the second image data, wherein the first hole site is the hole site of the lens holding arm.
In one embodiment, determining the position of the first hole site in the vision equipment coordinate system according to the three-dimensional model of the lesion area and the second image data includes:
The method comprises the steps of obtaining a first pose relation between scanning equipment and a focus area, obtaining a second pose relation between the scanning equipment and vision equipment, obtaining position information of a three-dimensional model of the focus area under a vision equipment coordinate system according to the first pose relation and the second pose relation, and determining the position of a first hole site under the vision equipment coordinate system according to the position information of the three-dimensional model of the focus area under the vision equipment coordinate system.
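The chaining of the two pose relations can be sketched with 4x4 homogeneous matrices; the frame-naming convention below is an assumption, not the patent's notation:

```python
import numpy as np

def lesion_in_vision_frame(T_scan_lesion, T_vision_scan):
    """Chain the two pose relations from the text: scanner->lesion (the first
    pose relationship) and vision->scanner (derived from the second), to
    express the lesion model in the vision device frame."""
    return T_vision_scan @ T_scan_lesion

# Lesion 2 units ahead of the scanner; scanner 1 unit to the right of the camera
T_scan_lesion = np.eye(4); T_scan_lesion[:3, 3] = [0.0, 0.0, 2.0]
T_vision_scan = np.eye(4); T_vision_scan[:3, 3] = [1.0, 0.0, 0.0]
T = lesion_in_vision_frame(T_scan_lesion, T_vision_scan)
print(T[:3, 3])  # -> [1. 0. 2.]
```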
In one embodiment, the surgical robot punch guidance method further comprises: obtaining a preset distance between the first hole site and a second hole site, where the second hole site is the hole site of the mechanical holding arm; determining the end pose of the mechanical holding arm according to the preset distance and the lesion area; performing an inverse kinematics solution for the mechanical holding arm according to its end pose; and, if the inverse kinematics has a solution, determining the contact point between the body surface and the line connecting the end pose of the mechanical holding arm with the lesion area as the position of the second hole site.
The surgical robot punch guidance method in the above embodiment is implemented in a manner similar to that described for the surgical robot system. Therefore, for specific limitations of the one or more surgical robot punch guidance methods provided above, reference may be made to the limitations of the surgical robot system above, which are not repeated here.
Based on the same inventive concept, as shown in fig. 8, an embodiment of the present application further provides a surgical robot punching guiding apparatus, where the whole apparatus specifically includes a scanning assembly 820, a vision assembly 840, and a control assembly 860;
the scanning module 820 generates first image data of the lesion area and sends the first image data to the control module 860. The control module 860 generates the position of the first hole site in the vision module 840 using the surgical robot punch guidance method as described above.
In one embodiment, as shown in fig. 8 and 9, the scanning component 820 includes an ultrasonic probe 822, a visual target is disposed on the ultrasonic probe 822, the visual target emits a cursor signal to irradiate a focal region, and the visual component 840 receives the cursor signal on the focal region and sends position information of the cursor signal to the control component.
The visual target is a structure capable of emitting a cursor signal, and specifically, the optical signal may be an infrared cursor or a light emitted by a light emitting diode. In the present embodiment, the scanning assembly 820 is described as an example of an ultrasound apparatus, and in the present embodiment, the focal region includes a focal position in the abdominal cavity and a body surface of a patient corresponding to the focal position. The cursor signal emitted by the visual target irradiates on the body surface corresponding to the focus position of the scanning object. Specifically, when the patient is scanned by the ultrasonic device, the probe of the ultrasonic device abuts against the body surface of the patient, and at this time, the cursor signal emitted from the visual target on the probe just irradiates the body surface. Since the depth camera is capable of detecting three-dimensional information of the environment, the depth camera is capable of receiving the cursor signal and transmitting the position information of the received cursor signal to the processing component.
In this embodiment, by setting a visual target capable of sending out a cursor signal on the ultrasonic probe 822, the cursor signal is received by the visual component 840, and the position information of the cursor signal can be sent to the processing component, so as to obtain the relative pose relationship between the scanning component 820 and the visual component 840. The pose relationship between the ultrasound probe 822 and the focal region can be obtained by the ultrasound probe 822 and the focal region detected by the ultrasound probe 822.
In one embodiment, the control assembly is further configured to control the scanning assembly 820 to emit a cursor signal onto the focal region of the scanned object.
The cursor signal refers to a visible light identification signal, and may be, for example, a red bright spot or a visible light with a certain pattern. The irradiation of the cursor signal to the focal region of the scan object means that the cursor signal emitted by the scan component 820 is irradiated to the body surface corresponding to the focal region of the scan object. For example, when scanning the abdomen of a patient, a red color or a cursor having a specific pattern emitted from an ultrasonic target on the ultrasonic probe 822 can be irradiated onto a body surface corresponding to the abdomen of the patient.
The vision component 840 is controlled to continuously track the cursor signal over the focal region such that the cursor signal is within the field of view of the vision component 840. Controlling the vision assembly 840 to continuously track the cursor signal on the focal region means that the cursor signal is always within the field of view of the vision assembly 840 by controlling the field of view of the vision assembly 840. For example, the field of view of the vision assembly 840 may be adjusted by controlling the vision assembly 840 to move or rotate such that the cursor signal is always within the field of view of the vision assembly 840.
Only when the cursor signal is always within the field of view of the vision component 840, that is, within the field of view of the depth camera, can it be ensured that the vision component 840 can always detect the specific position of the ultrasonic probe 822 on the body surface, so that the processing component can always obtain the pose relationship between the ultrasonic probe 822 and the vision component 840. The pose relationship between the vision component 840 and the focal region is then derived from the pose relationship between the ultrasound probe 822 and the focal region. Therefore, in this embodiment, the processing component controls the vision component 840 to continuously track the cursor signal on the focal region so that the cursor signal stays within the field of view of the vision component 840, allowing the processing component to always obtain the pose relationship between the vision component 840 and the focal region, thereby ensuring that the punch guidance proceeds smoothly and guaranteeing the operational reliability of the surgical robot system.
In one embodiment, as shown in fig. 9, the vision component 840 includes an image pickup element 842, an angle adjusting element 844, and a mixed reality element 846. The image pickup element 842 receives the cursor signal on the focal region and sends the position information of the cursor signal to the processing component. The angle adjusting element 844 is connected to the image pickup element 842 and communicatively connected to the processing component; the processing component controls the angle adjusting element 844 to adjust the image tracking angle of the image pickup element 842. The mixed reality element 846 is communicatively connected to the processing component and displays the punch position, in the coordinate system of the vision component 840, pushed by the processing component.
It should be noted that the image pickup element 842 may be the depth camera, the angle adjusting element 844 may be the degree-of-freedom cradle head, and the mixed reality element 846 may present the punch position information, in a mixed reality manner, in front of the eyes of the wearer of the MR helmet. The camera element 842, i.e. the depth camera, tracks the cursor signal emitted by the visual target on the ultrasound probe 822 and sends the detected cursor position information to the processing component. Because the image pickup element 842 is connected to the angle adjusting element 844, it changes position as the angle adjusting element 844 rotates, thereby adjusting the angle and position of the field of view. Because the angle adjusting element 844 is communicatively connected to the processing component, the processing component can control the angle of the angle adjusting element 844 to adjust the image tracking angle of the image pickup element 842.
Illustratively, the degree-of-freedom cradle head has two degrees of freedom, a pitch degree of freedom and a yaw degree of freedom, respectively. When the doctor wears the MR helmet, the doctor adjusts the head pose such that the cursor signal emitted from the visual target in the ultrasonic probe 822 is located within the depth camera field of view of the MR helmet, which is the initialization process of the vision component 840. After initialization is completed, the degree-of-freedom cradle head adjusting function of the depth camera is started, the depth camera can detect a cursor signal emitted by a visual target, and after the processing component acquires an image of the cursor signal sent by the depth camera, the processing component can identify the position and the gesture of the cursor signal and control the depth camera to track the cursor signal in real time, so that the processing component can always obtain the pose relation between the visual component 840 and the ultrasonic target, and the reliability of punching guide work is guaranteed.
In one embodiment, the processing component is further configured to control the vision component 840 to track a cursor signal on the focal region, and adjust the image tracking angle of the vision component 840 if the cursor signal is not within the field of view of the vision component 840.
The image tracking angle refers to the angle of the field of view of the vision assembly 840. For example, when the vision component 840 is an MR helmet, a depth camera on the MR helmet is used to detect the cursor signal, and when the cursor signal is outside the lens of the depth camera, the cursor signal can be located within the field of view of the vision component 840 by adjusting the angle of the depth camera, or adjusting the angle of the MR helmet.
If the image tracking angle exceeds the preset angle limit, an out-of-view prompt message is pushed.
The preset angle limit value refers to a preset maximum value or minimum value for adjusting the image tracking angle, and when the value is exceeded, the image tracking angle cannot be continuously adjusted. The out-of-view indication message is an indication signal sent to the user so that the user can adjust the image tracking angle conveniently. For example, when the degree of freedom pan-tilt of the depth camera moves to the limit of the travel, the degree of freedom pan-tilt cannot continue to move, and the doctor needs to be reminded to adjust the angle of the MR helmet, so that the cursor signal is located in the lens of the depth camera.
It can be appreciated that the vision component 840 tracks the cursor signal in real time, and sends the acquired image information to the processing component, the processing component identifies whether the image information has the cursor signal, and when the image information has the cursor signal and the cursor signal is in the center area of the field of view, the processing component only needs to control the vision component 840 to track the cursor signal in real time. When the processing component recognizes that there is a cursor signal in the image information, but the cursor signal is not in the central region of the field of view, the processing component calculates the deviation between the current cursor signal and the boundary of the central region, and controls the angle adjusting element 844 to move to drive the image pickup element 842, so as to adjust the image tracking angle of the image pickup element 842, so that the cursor signal is in the central region of the field of view. When the processing component controls the angle adjusting element 844 to move, the angle adjusting element 844 reaches the limit value and cannot move, the image tracking angle is indicated to exceed the limit value of the preset angle, and at the moment, the image tracking angle cannot be continuously adjusted. The processing component will push a prompt message to the vision component 840 that is out of the field of view to alert the doctor to adjust the orientation of the head-worn vision component 840, thereby adjusting the field of view of the vision component 840 such that the cursor signal is located within the center region of the field of view. 
Specifically, the adjustment direction may be determined from where the cursor signal crosses the boundary of the central region, and the adjustment distance of the angle adjusting element 844 may be determined from the distance of the cursor signal from the center of the field of view.
Illustratively, when the cursor signal is at the left boundary of the central region, the angle adjusting element needs to move to the left to bring the cursor signal into the central region of the field of view. If the angle adjusting element cannot move further left, the image tracking angle has exceeded the preset angle limit; the processing component then pushes a prompt message to the vision component, i.e. the MR helmet, so that the doctor adjusts the angle of the head and the cursor signal returns to the central region of the field of view. In this way, the vision component can be effectively guaranteed to track the cursor signal at all times, ensuring that the punch guidance proceeds smoothly.
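One tracking-and-limit step of the kind described can be sketched as follows; the single pan axis, the gain, and the limit values are all illustrative simplifications of the two-axis cradle head:

```python
def track_cursor(cursor_xy, center, half_size, pan, pan_limit, gain=0.1):
    """One tracking step: if the cursor leaves the central region of the
    camera image, move the pan joint toward it; if the joint would exceed
    its travel limit, keep the pose and return a prompt instead."""
    dx = cursor_xy[0] - center[0]
    if abs(dx) <= half_size:
        return pan, None                      # cursor already in the central region
    new_pan = pan + gain * dx                 # proportional correction
    if abs(new_pan) > pan_limit:
        return pan, "out of field of view: please reorient the MR helmet"
    return new_pan, None

print(track_cursor((120, 0), (100, 0), 10, 0.0, 5.0))  # -> (2.0, None)
print(track_cursor((200, 0), (100, 0), 10, 0.0, 5.0))
# -> (0.0, 'out of field of view: please reorient the MR helmet')
```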
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, implements the steps of the above method.
In an embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, implements the steps of the above method.
The foregoing examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (17)

1.一种手术机器人系统,包括持镜臂,其特征在于,包括:1. A surgical robot system, comprising a scope holding arm, characterized in that it comprises: 扫描单元,用于获取病灶区域的第一影像数据;A scanning unit, used for acquiring first image data of a lesion area; 建模单元,用于根据所述第一影像数据,建立病灶区域的三维模型;A modeling unit, used for establishing a three-dimensional model of the lesion area according to the first image data; 视觉单元,用于获取扫描设备工作环境的第二影像数据,所述扫描设备为扫描生成所述病灶区域的第一影像数据的扫描设备;A visual unit, used for acquiring second image data of a working environment of a scanning device, wherein the scanning device is a scanning device that scans and generates the first image data of the lesion area; 处理单元,用于根据所述病灶区域的三维模型和所述第二影像数据,确定视觉设备坐标系下第一孔位的位置,所述第一孔位为所述持镜臂的孔位。A processing unit is used to determine the position of a first hole position in the visual device coordinate system according to the three-dimensional model of the lesion area and the second image data, wherein the first hole position is the hole position of the mirror holding arm. 2.根据权利要求1所述的手术机器人系统,其特征在于,所述处理单元根据所述病灶区域的三维模型和所述第二影像数据,确定视觉设备坐标系下第一孔位的位置包括:2. 
The surgical robot system according to claim 1, wherein the processing unit determines the position of the first hole in the visual device coordinate system according to the three-dimensional model of the lesion area and the second image data, comprising: 获取所述扫描设备与所述病灶区域的第一位姿关系;Acquiring a first pose relationship between the scanning device and the lesion area; 获取所述扫描设备与所述视觉设备的第二位姿关系;Acquire a second posture relationship between the scanning device and the visual device; 根据所述第一位姿关系和所述第二位姿关系,得到所述病灶区域的三维模型在视觉设备坐标系下的位置信息;According to the first posture relationship and the second posture relationship, obtaining position information of the three-dimensional model of the lesion area in the visual device coordinate system; 根据所述病灶区域的三维模型在视觉设备坐标系下的位置信息,确定视觉设备坐标系下第一孔位的位置。According to the position information of the three-dimensional model of the lesion area in the visual device coordinate system, the position of the first hole in the visual device coordinate system is determined. 3.根据权利要求2所述的手术机器人系统,其特征在于,所述处理单元还用于获取所述病灶区域的预打孔位置;3. The surgical robot system according to claim 2, characterized in that the processing unit is further used to obtain the pre-drilling position of the lesion area; 所述根据所述病灶区域的三维模型在视觉设备坐标系下的位置信息确定视觉设备坐标系下第一孔位的位置包括:Determining the position of the first hole in the visual device coordinate system according to the position information of the three-dimensional model of the lesion area in the visual device coordinate system includes: 根据所述病灶区域的三维模型在视觉设备坐标系下的位置,对所述预打孔位置进行修正,得到视觉设备坐标系下第一孔位的位置。According to the position of the three-dimensional model of the lesion area in the visual device coordinate system, the pre-drilling position is corrected to obtain the position of the first hole in the visual device coordinate system. 4.根据权利要求1所述的手术机器人系统,其特征在于,所述手术机器人系统还包括持械臂,所述处理单元还用于根据所述第一孔位的位置获取第二孔位的位置,所述第二孔位的位置为所述持械臂的孔位。4. 
4. The surgical robot system according to claim 1, wherein the system further comprises an instrument-holding arm, and the processing unit is further configured to obtain the position of a second hole from the position of the first hole, the second hole being the hole of the instrument-holding arm.

5. The surgical robot system according to claim 4, wherein obtaining the position of the second hole from the position of the first hole comprises:
acquiring a preset distance between the first hole and the second hole; and
obtaining the position of the second hole from the preset distance and the position of the first hole.

6. The surgical robot system according to claim 5, wherein obtaining the position of the second hole from the preset distance and the position of the first hole comprises:
determining an end pose of the instrument-holding arm from the preset distance, the position of the first hole, and the lesion area;
solving the inverse kinematics of the instrument-holding arm for that end pose; and
if the inverse kinematics has a solution, taking the point where the line from the end pose of the instrument-holding arm to the lesion area meets the body surface as the position of the second hole.
7. The surgical robot system according to claim 6, wherein determining the end pose of the instrument-holding arm from the preset distance, the position of the first hole, and the lesion area comprises:
determining the axis of the instrument-arm cannula from the preset distance, the position of the first hole, and the lesion area;
based on the cannula axis, selecting the mid-stroke position of the joint for the cannula's degree of freedom of rotation about its own axis; and
determining the end pose of the instrument-holding arm from that mid-stroke joint position.

8. The surgical robot system according to claim 6, wherein there are a plurality of preset distances, and obtaining the position of the second hole from the preset distance and the position of the first hole comprises:
determining a plurality of candidate second-hole positions from the plurality of preset distances and the position of the first hole;
computing, for each candidate second-hole position, the distance between the scope-holding arm and the instrument-holding arm; and
taking as the second hole the candidate at which the distance between the scope-holding arm and the instrument-holding arm is largest.
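The accept/reject step in claim 6 — solve the instrument arm's inverse kinematics for the candidate end pose and keep the candidate only if a solution exists — can be sketched with a planar two-link arm, for which IK has a closed form. The link lengths and candidate points here are invented for illustration and are not parameters of the patented arm:

```python
import math

def two_link_ik(x, y, l1=0.4, l2=0.3):
    """Closed-form IK of a planar two-link arm; returns (q1, q2) or None if unreachable."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target outside the reachable annulus: no IK solution
    q2 = math.acos(c2)  # elbow-down branch
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

def second_hole_feasible(candidate_xy):
    """Claim 6's test: a candidate second hole survives only if the IK has a solution."""
    return two_link_ik(*candidate_xy) is not None

print(second_hole_feasible((0.5, 0.2)))   # True: within reach of l1 + l2 = 0.7
print(second_hole_feasible((1.5, 0.0)))   # False: beyond the workspace, rejected
```

A real six- or seven-axis arm would use a numerical or analytic solver instead of this closed form, but the feasibility gate — "has a solution / has none" — is the same.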
9. The surgical robot system according to claim 8, wherein computing the distance between the scope-holding arm and the instrument-holding arm at the plurality of candidate second-hole positions comprises:
solving the inverse kinematics of the instrument-holding arm for its end pose to obtain the target pose of the instrument-holding arm; and
computing, from the target pose of the instrument-holding arm and the pose of the scope-holding arm, the inter-arm distance at each candidate second-hole position.

10. A surgical robot drilling guidance method, characterized in that the method comprises:
acquiring first image data of a lesion area;
building a three-dimensional model of the lesion area from the first image data;
acquiring second image data of the working environment of a scanning device, the scanning device being the device that scans and generates the first image data of the lesion area; and
determining, from the three-dimensional model of the lesion area and the second image data, the position of a first hole in the vision-device coordinate system, the first hole being the hole of a scope-holding arm.
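The selection rule of claims 8 and 9 — evaluate one candidate second hole per preset distance and keep the one that maximizes the clearance between the scope-holding arm and the instrument-holding arm — is a plain argmax over candidates. A NumPy sketch; the sampled link points and the stand-in pose function are illustrative, not the patent's actual kinematics:

```python
import numpy as np

def arm_clearance(scope_pts, instr_pts):
    """Minimum pairwise distance between two arms, each sampled as 3-D link points."""
    diffs = scope_pts[:, None, :] - instr_pts[None, :, :]
    return np.linalg.norm(diffs, axis=2).min()

def pick_second_hole(candidates, scope_pts, instr_pose_for):
    """Keep the candidate second hole with the largest scope/instrument clearance."""
    clearances = [arm_clearance(scope_pts, instr_pose_for(h)) for h in candidates]
    return candidates[int(np.argmax(clearances))]

# Illustrative data: scope arm sampled along the z-axis, candidates offset along x.
scope = np.array([[0.0, 0.0, z] for z in np.linspace(0.0, 0.5, 6)])
instr_pose_for = lambda hole: scope + np.array([hole[0], 0.0, 0.0])  # stand-in for IK
candidates = [(0.05, 0.0, 0.0), (0.12, 0.0, 0.0), (0.08, 0.0, 0.0)]
print(pick_second_hole(candidates, scope, instr_pose_for))  # -> (0.12, 0.0, 0.0)
```

In the claimed system, `instr_pose_for` would be the target pose returned by the inverse-kinematics solve of claim 9 rather than this rigid translation.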
11. The surgical robot drilling guidance method according to claim 10, wherein determining the position of the first hole in the vision-device coordinate system from the three-dimensional model of the lesion area and the second image data comprises:
acquiring a first pose relationship between the scanning device and the lesion area;
acquiring a second pose relationship between the scanning device and the vision device;
obtaining, from the first pose relationship and the second pose relationship, position information of the three-dimensional model of the lesion area in the vision-device coordinate system; and
determining, from that position information, the position of the first hole in the vision-device coordinate system.

12. The surgical robot drilling guidance method according to claim 10, further comprising:
acquiring a preset distance between the first hole and a second hole, the second hole being the hole of an instrument-holding arm;
determining an end pose of the instrument-holding arm from the preset distance and the lesion area;
solving the inverse kinematics of the instrument-holding arm for that end pose; and
if the inverse kinematics has a solution, taking the point where the line from the end pose of the instrument-holding arm to the lesion area meets the body surface as the position of the second hole.
13. A surgical robot drilling guidance device, characterized by comprising a scanning component, a vision component, and a control component, wherein:
the scanning component generates first image data of a lesion area and sends the first image data to the control component; and
the control component generates the position of a first hole in the vision component using the surgical robot drilling guidance method according to any one of claims 10 to 12.

14. The surgical robot drilling guidance device according to claim 13, wherein the scanning component comprises an ultrasound probe provided with a visual target; the visual target emits a cursor signal that illuminates the lesion area; and the vision component receives the cursor signal on the lesion area and sends the position information of the cursor signal to the control component.

15. The surgical robot drilling guidance device according to claim 14, wherein the vision component comprises:
an imaging element, which captures the cursor signal on the lesion area and sends the position information of the cursor signal to the control component;
an angle-adjustment element, which carries the imaging element and is connected to the control component, the control component controlling the angle-adjustment element to adjust the image-tracking angle of the imaging element; and
a mixed-reality element, which displays the hole positions, in the vision-component coordinate system, pushed by the control component.
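The tracking behavior of claim 15 — the control component driving the angle-adjustment element so the imaging element keeps the cursor signal in view — reduces to a small closed loop that recenters the detected cursor in the image. A toy proportional-control sketch; the gain, image size, and sign conventions are invented for illustration and are not specified by the patent:

```python
def track_cursor(cursor_px, image_center=(320, 240), gain=0.01):
    """One control step: return (pan, tilt) corrections, in radians per step,
    that steer the imaging element so the cursor moves toward the image center."""
    dx = cursor_px[0] - image_center[0]
    dy = cursor_px[1] - image_center[1]
    return -gain * dx, -gain * dy

# Cursor detected off-center: compute the angle correction for this step.
pan, tilt = track_cursor((400, 180))
print(round(pan, 6), round(tilt, 6))  # -> -0.8 0.6
```

Run each frame, the loop keeps the illuminated lesion area in the camera's field of view while the mixed-reality element overlays the computed hole positions.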
16. A computer-readable storage medium having a computer program stored thereon, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 10 to 12 are implemented.

17. A computer program product comprising a computer program, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 10 to 12 are implemented.
CN202310926427.3A 2023-07-25 2023-07-25 Surgical robot system, surgical robot drilling guidance method and device Pending CN119367059A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310926427.3A CN119367059A (en) 2023-07-25 2023-07-25 Surgical robot system, surgical robot drilling guidance method and device

Publications (1)

Publication Number Publication Date
CN119367059A true CN119367059A (en) 2025-01-28

Family

ID=94331152

Country Status (1)

Country Link
CN (1) CN119367059A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination