Disclosure of Invention
In view of the technical problem that surgical robot systems in the prior art cannot achieve accurate perforation guidance, it is necessary to provide a surgical robot system, a surgical robot perforation guiding method, a device, a storage medium, and a computer program product capable of accurate perforation guidance.
In a first aspect, the present application provides a surgical robot system comprising an endoscope-holding arm, and:
a scanning unit, configured to acquire first image data of a lesion area;
a modeling unit, configured to establish a three-dimensional model of the lesion area according to the first image data;
a vision unit, configured to acquire second image data of the working environment of a scanning device, the scanning device being configured to generate the first image data of the lesion area by scanning; and
a processing unit, configured to determine the position of a first hole site in a vision device coordinate system according to the three-dimensional model of the lesion area and the second image data, the first hole site being the hole site of the endoscope-holding arm.
In one embodiment, the determining, by the processing unit, of the position of the first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data includes:
acquiring a first pose relationship between the scanning device and the lesion area;
acquiring a second pose relationship between the scanning device and the vision device;
obtaining position information of the three-dimensional model of the lesion area in the vision device coordinate system according to the first pose relationship and the second pose relationship; and
determining the position of the first hole site in the vision device coordinate system according to the position information of the three-dimensional model of the lesion area in the vision device coordinate system.
In one embodiment, the processing unit is further configured to obtain a pre-perforation position of the lesion area, and the determining of the position of the first hole site in the vision device coordinate system according to the position information of the three-dimensional model of the lesion area in the vision device coordinate system includes:
correcting the pre-perforation position according to the position of the three-dimensional model of the lesion area in the vision device coordinate system, to obtain the position of the first hole site in the vision device coordinate system.
In one embodiment, the surgical robot system further includes an instrument-holding arm, and the processing unit is further configured to obtain the position of a second hole site according to the position of the first hole site, the second hole site being the hole site of the instrument-holding arm.
In one embodiment, the obtaining of the position of the second hole site according to the position of the first hole site includes:
acquiring a preset distance between the first hole site and the second hole site; and
obtaining the position of the second hole site according to the preset distance and the position of the first hole site.
In one embodiment, the obtaining of the position of the second hole site according to the preset distance and the position of the first hole site includes:
determining the end pose of the instrument-holding arm according to the preset distance, the position of the first hole site, and the lesion area;
performing an inverse kinematics solution for the instrument-holding arm according to the end pose of the instrument-holding arm; and
if the inverse kinematics has a solution, determining the point at which the line connecting the end pose of the instrument-holding arm and the lesion area intersects the body surface as the position of the second hole site.
In one embodiment, the determining of the end pose of the instrument-holding arm according to the preset distance, the position of the first hole site, and the lesion area includes:
determining the axis of the instrument-holding arm sleeve according to the preset distance, the position of the first hole site, and the lesion area;
selecting, for the rotational degree of freedom of the sleeve about the sleeve axis, the mid-stroke position of the joint; and
determining the end pose of the instrument-holding arm according to the mid-stroke position of the joint.
In one embodiment, there are a plurality of preset distances, and the obtaining of the position of the second hole site according to the preset distances and the position of the first hole site includes:
determining the positions of a plurality of initial second hole sites according to the plurality of preset distances and the position of the first hole site;
calculating the arm spacing between the endoscope-holding arm and the instrument-holding arm at each of the plurality of initial second hole sites; and
determining the hole site with the largest arm spacing between the endoscope-holding arm and the instrument-holding arm as the second hole site.
In one embodiment, the calculating of the arm spacing between the endoscope-holding arm and the instrument-holding arm at the plurality of initial second hole sites includes:
performing an inverse kinematics solution for the instrument-holding arm according to its end pose, to obtain the target pose of the instrument-holding arm; and
calculating the arm spacing between the endoscope-holding arm and the instrument-holding arm at the plurality of initial second hole sites according to the target pose of the instrument-holding arm and the pose of the endoscope-holding arm.
In a second aspect, the present application further provides a surgical robot perforation guiding method, the method comprising:
acquiring first image data of a lesion area;
establishing a three-dimensional model of the lesion area according to the first image data;
acquiring second image data of the working environment of a scanning device, the scanning device being configured to generate the first image data of the lesion area by scanning; and
determining the position of a first hole site in a vision device coordinate system according to the three-dimensional model of the lesion area and the second image data, the first hole site being the hole site of an endoscope-holding arm.
In one embodiment, the determining of the position of the first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data includes:
acquiring a first pose relationship between the scanning device and the lesion area;
acquiring a second pose relationship between the scanning device and the vision device;
obtaining position information of the three-dimensional model of the lesion area in the vision device coordinate system according to the first pose relationship and the second pose relationship; and
determining the position of the first hole site in the vision device coordinate system according to the position information of the three-dimensional model of the lesion area in the vision device coordinate system.
In one embodiment, the surgical robot perforation guiding method further includes:
acquiring a preset distance between the first hole site and a second hole site, the second hole site being the hole site of an instrument-holding arm;
determining the end pose of the instrument-holding arm according to the preset distance and the lesion area;
performing an inverse kinematics solution for the instrument-holding arm according to the end pose of the instrument-holding arm; and
if the inverse kinematics has a solution, determining the point at which the line connecting the end pose of the instrument-holding arm and the lesion area intersects the body surface as the position of the second hole site.
In a third aspect, the present application further provides a surgical robot perforation guiding device, comprising a scanning assembly, a vision assembly, and a control assembly;
the scanning assembly generates first image data of a lesion area and sends the first image data to the control assembly; and
the control assembly generates the position of a first hole site in the vision assembly coordinate system using the surgical robot perforation guiding method described above.
In one embodiment, the scanning assembly comprises an ultrasound probe, a visual target being arranged on the ultrasound probe, the visual target emitting a light-spot signal that irradiates the lesion area; and
the vision assembly receives the light-spot signal on the lesion area and sends the position information of the light-spot signal to the control assembly.
In one embodiment, the vision assembly comprises:
a camera element, configured to collect the light-spot signal on the lesion area and send the position information of the light-spot signal to the control assembly;
an angle adjusting element, configured to carry the camera element and connected with the control assembly, the control assembly controlling the angle adjusting element to adjust the image tracking angle of the camera element; and
a mixed reality element, configured to display the hole site in the vision assembly coordinate system pushed by the control assembly.
In a fourth aspect, the present application further provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the following steps:
acquiring first image data of a lesion area;
establishing a three-dimensional model of the lesion area according to the first image data;
acquiring second image data of the working environment of a scanning device, the scanning device being configured to generate the first image data of the lesion area by scanning; and
determining the position of a first hole site in a vision device coordinate system according to the three-dimensional model of the lesion area and the second image data, the first hole site being the hole site of an endoscope-holding arm.
In a fifth aspect, the present application further provides a computer program product comprising a computer program which, when executed by a processor, implements the following steps:
acquiring first image data of a lesion area;
establishing a three-dimensional model of the lesion area according to the first image data;
acquiring second image data of the working environment of a scanning device, the scanning device being configured to generate the first image data of the lesion area by scanning; and
determining the position of a first hole site in a vision device coordinate system according to the three-dimensional model of the lesion area and the second image data, the first hole site being the hole site of an endoscope-holding arm.
According to the surgical robot system, the surgical robot perforation guiding method, the device, the storage medium, and the computer program product described above, first image data of a lesion area are acquired, a three-dimensional model of the lesion area is established according to the first image data, second image data of the working environment of the scanning device are acquired, and the position of the hole site of the endoscope-holding arm in the vision device coordinate system is determined according to the three-dimensional model of the lesion area and the second image data. Throughout this process, the doctor can intuitively observe the perforation position of the endoscope-holding arm and the lesion area through the vision device, so that the surgical robot achieves accurate perforation guidance.
Detailed Description
In order that the above objects, features, and advantages of the invention may be readily understood, specific embodiments of the invention are described in detail below with reference to the accompanying drawings. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. The invention may, however, be embodied in many forms other than those described herein, and those skilled in the art may make similar modifications without departing from the spirit of the invention; the invention is therefore not limited to the specific embodiments disclosed below.
In one embodiment, as shown in fig. 1, a surgical robot system comprising an endoscope-holding arm is provided. This embodiment is described by taking as an example the use of the surgical robot system to assist a doctor in perforation guidance. In this embodiment, the surgical robot system further comprises a scanning unit, a modeling unit, a vision unit, and a processing unit, which may be integrated in one or more master consoles. In practical application, the surgical robot system further comprises a scanning device and a vision device, and the one or more master consoles integrating the above units are connected with the scanning device and the vision device respectively. The patient lies on the operating table; the scanning device collects medical images of the patient and sends them to the master console; the master console processes the images to obtain the hole site of the endoscope-holding arm in the vision device coordinate system and pushes these data to the vision device; and the doctor, wearing the vision device, sees the hole site of the endoscope-holding arm displayed on it. In this way, the surgical robot system as a whole achieves accurate perforation guidance.
Specifically, as shown in fig. 2, in one embodiment, a surgical robotic system includes:
The scanning unit 200 is configured to acquire first image data of a lesion area.
The scanning device scans and images a scan subject to obtain medical image data, and the scanning unit 200 analyzes the medical image data to obtain the first image data of the lesion area. Specifically, the scan subject is the subject to be perforated by the present surgical robot, i.e., the surgical patient, whose medical images are acquired by the scanning device. The lesion area refers to the area of the subject's body where a lesion has occurred; for example, the position of a lesion in the kidney or gall bladder in the subject's abdominal cavity may be scanned. Further, the scanning unit 200 may first crop, from the medical image data sent by the scanning device, an image of a preset scan region, the preset scan region being the region corresponding to the current operation. Specifically, the preset scan region may be a scan region pre-marked on the subject, such as the abdomen, chest, or back of the subject, for example image data of the patient's abdomen.
The modeling unit 400 is configured to establish a three-dimensional model of the lesion area according to the first image data.
Taking the first image data as base data, a virtual three-dimensional model is constructed by virtual three-dimensional modeling, and the corresponding lesion area in the patient's body is identified on the virtual three-dimensional model. In this way, the calculation and planning of perforation positions relative to the lesion area in the virtual three-dimensional model is facilitated in subsequent processing.
The vision unit 600 is configured to acquire second image data of the working environment of the scanning device, the scanning device being the device that scans to generate the first image data of the lesion area.
The scanning device working environment refers to the scene in which the scanning device scans the patient to generate the medical image data. Specifically, as mentioned above, while the scanning device scans the patient to generate medical images, image data of the scanning device during the scanning operation are acquired at the same time, so that the relative pose relationship with the scanning device can be accurately located later. This in turn facilitates accurately determining the relative pose relationship between the vision device and the scanning device, so that accurate hole sites of the endoscope-holding arm can be displayed in the vision device.
The processing unit 800 is configured to determine the position of a first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data, the first hole site being the hole site of the endoscope-holding arm.
The vision device can assist the doctor in visually observing the surrounding environment while presenting a specific perforation position for the doctor to view; for example, the vision device may be a mixed reality helmet (MR helmet for short) worn by the doctor. Specifically, the hole site of the endoscope-holding arm in the surgical robot system can be determined based on the three-dimensional model of the lesion area; at this point, the position of the hole site is expressed in the scanning device coordinate system. To map this hole site accurately into the vision device coordinate system, the relative pose relationship between the scanning device and the vision device is further determined, i.e., the point mapping relationship between the scanning device coordinate system and the vision device coordinate system is made explicit. On the basis of this pose relationship, the hole-site position determined in the three-dimensional model of the lesion area is mapped into the vision device coordinate system, thereby determining the hole site of the endoscope-holding arm in the vision device coordinate system.
Further, when the hole site of the endoscope-holding arm is determined based on the three-dimensional model of the lesion area, the hole position is calculated in advance in the virtual three-dimensional model based on the lesion position in the patient's body, in combination with the configuration of the surgical robot system. For example, the lesion may be located on an organ in the abdominal cavity, and the perforation position then corresponds to the body surface of the abdomen. After the hole site of the endoscope-holding arm in the vision device coordinate system is determined, the data can be pushed to the vision device, so that after the doctor wears the vision device, the accurate hole site of the endoscope-holding arm is shown in its display interface; that is, the doctor can intuitively observe the perforation position, and accurate perforation guidance is achieved.
The above surgical robot system acquires first image data of a lesion area, establishes a three-dimensional model of the lesion area according to the first image data, acquires second image data of the working environment of the scanning device, and determines the position of the hole site of the endoscope-holding arm in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data. Throughout this process, the doctor can intuitively observe the perforation position of the endoscope-holding arm and the lesion area through the vision device, so that the surgical robot achieves accurate perforation guidance.
In one embodiment, the determining, by the processing unit 800, of the position of the first hole site in the vision device coordinate system according to the three-dimensional model of the lesion area and the second image data includes: obtaining a first pose relationship between the scanning device and the lesion area; obtaining a second pose relationship between the scanning device and the vision device; obtaining position information of the three-dimensional model of the lesion area in the vision device coordinate system according to the first pose relationship and the second pose relationship; and determining the position of the first hole site in the vision device coordinate system according to that position information.
The scanning device is a device that scans the patient and generates an image of the preset scan region, and may be any one of an ultrasound imaging device, an X-ray device, and a magnetic resonance imaging device (MRI device for short). In this embodiment, the scanning device is an ultrasound imaging device. The first pose relationship between the scanning device and the lesion area refers to the relative position and attitude between the scanning device and the lesion area of the scan subject; for example, it may be the relative position and attitude between the ultrasound probe of the ultrasound imaging device and the patient's lesion area.
The second pose relationship between the scanning device and the vision device refers to the relative position and attitude between the vision device and the structure of the scanning device that scans the pre-marked scan region on the patient's body; for example, the relationship between the position and attitude of the ultrasound probe of the ultrasound imaging device and the MR helmet can be established. By combining the first pose relationship and the second pose relationship, the three-dimensional model of the lesion area can be mapped into the vision device coordinate system, and the hole site of the endoscope-holding arm determined in the three-dimensional model of the lesion area can then be mapped into the vision device coordinate system, so as to obtain the position of the endoscope-holding arm hole site in the vision device coordinate system.
For example, as will be understood with reference to fig. 3, the processing unit 800 may obtain the first pose relationship from the relative pose between the ultrasound probe of the ultrasound imaging device and the patient's lesion area, i.e., the relative pose relationship between the probe and the lesion area; and may obtain the second pose relationship from the relative pose between the probe of the ultrasound imaging device and the MR helmet, i.e., the pose relationship between the probe and the MR helmet. The processing unit 800 may then map the three-dimensional model of the lesion area into the vision device coordinate system according to the first pose relationship and the second pose relationship, and determine the position of the endoscope-holding arm hole site in the vision device coordinate system based on the hole site determined in the three-dimensional model of the lesion area.
In this embodiment, the pose relationship between the lesion area and the MR helmet is obtained by chaining the pose relationships among the lesion area, the scanning device, and the vision device, so as to convert the lesion-area position information into position information in the vision device coordinate system. A pose includes both position information and attitude information, and thus a pose relationship includes both a relationship between positions and a relationship between attitudes.
It should be noted that, in this embodiment, the vision device may include a depth camera and a multi-degree-of-freedom pan-tilt, where the depth camera can detect three-dimensional information of the environment, and the pan-tilt can adjust the attitude of the depth camera; for example, the angle of the depth camera can be adjusted by rotating the pan-tilt, thereby adjusting the position of the depth camera's field of view. Since the depth camera can only detect three-dimensional information of the surrounding environment and cannot detect intra-cavity information of the patient, the position information of the probe of the scanning device is obtained by the depth camera, the position information of the lesion area is obtained by the probe, and the pose relationship between the depth camera and the lesion area is then calculated with the probe of the scanning device as a reference.
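The pose-relationship chaining described above can be sketched with homogeneous transforms. The following is an illustrative sketch only, not the system's actual implementation; the frame names and the numerical poses are assumptions introduced for the example.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# First pose relationship: lesion area expressed in the probe (scanning device) frame.
T_probe_lesion = pose_to_matrix(np.eye(3), np.array([0.0, 0.05, 0.12]))

# Second pose relationship: probe expressed in the vision device (MR helmet) frame.
T_vision_probe = pose_to_matrix(np.eye(3), np.array([0.30, -0.10, 0.60]))

# Chaining the two relationships maps the lesion model into the vision device frame.
T_vision_lesion = T_vision_probe @ T_probe_lesion

# A candidate hole-site point defined in the lesion-model frame maps the same way.
p_lesion = np.array([0.01, 0.02, 0.0, 1.0])   # homogeneous point, metres
p_vision = T_vision_lesion @ p_lesion
```

In practice each transform would carry a real rotation estimated from the depth camera and the probe's visual target; composing them in the order vision ← probe ← lesion is the "conversion of the pose relationship" the embodiment refers to.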
In one embodiment, the processing unit 800 is further configured to obtain a pre-perforation position of the lesion area, and the determining of the position of the first hole site in the vision device coordinate system according to the position information of the three-dimensional model of the lesion area in the vision device coordinate system includes correcting the pre-perforation position according to the position of the three-dimensional model of the lesion area in the vision device coordinate system, to obtain the position of the first hole site in the vision device coordinate system.
The pre-perforation position of the lesion area refers to the perforation position of the endoscope-holding arm predetermined, before the operation, on the basis of the three-dimensional model of the lesion area. In practical application, the perforation position is preliminarily determined by a pre-operative examination; during the actual operation, in order to further improve surgical accuracy, the predetermined perforation position is corrected, and after the correction in the three-dimensional model of the lesion area is completed, the position of the first hole site in the vision device coordinate system is obtained according to the position of the three-dimensional model of the lesion area in the vision device coordinate system.
In one embodiment, the surgical robot system further comprises an instrument-holding arm, and the processing unit is further configured to obtain the position of a second hole site according to the position of the first hole site, the second hole site being the hole site of the instrument-holding arm.
As shown in fig. 4, the surgical robot system includes an endoscope-holding arm 420 and an instrument-holding arm 410; the endoscope-holding arm 420 is fixedly connected with an endoscope 422 and drives the endoscope 422 to move, and the instrument-holding arm 410 is fixedly connected with a surgical instrument 412 and drives the surgical instrument 412 to move. After the hole site of the endoscope-holding arm 420 is determined, the hole site of the instrument-holding arm 410 needs to be further determined based on it. Specifically, since a relative distance is generally maintained between the endoscope-holding arm 420 and the instrument-holding arm 410, the position of the instrument-holding arm 410 hole site can be determined from that relative distance once the endoscope-holding arm 420 hole site is obtained.
In one embodiment, the obtaining of the position of the second hole site according to the position of the first hole site includes obtaining a preset distance between the first hole site and the second hole site, and obtaining the position of the second hole site according to the preset distance and the position of the first hole site.
The preset distance is a predetermined distance, which may be derived from prior experience. Specifically, the minimum and maximum perforation distances between the instrument-holding arm and the endoscope-holding arm can be obtained from prior experience, and a suitable preset distance within this range is then determined based on the motion trajectories and dimensional parameters of the instrument-holding arm and the endoscope-holding arm, the patient's dimensional parameters, and the like. More specifically, in a certain application scenario, the minimum perforation distance between the instrument-holding arm and the endoscope-holding arm may be 6 cm, and the maximum may be 10 cm.
In one embodiment, obtaining the position of the second hole site according to the preset distance and the position of the first hole site comprises determining the end pose of the instrument-holding arm according to the preset distance, the position of the first hole site, and the lesion area; performing an inverse kinematics solution for the instrument-holding arm according to its end pose; and, if the inverse kinematics has a solution, determining the point at which the line connecting the end pose of the instrument-holding arm and the lesion area intersects the body surface as the position of the second hole site.
Once the lesion area, and thus the lesion point, is determined, the surgical instrument held by the instrument-holding arm must point at the lesion point, and the distance between the instrument-holding arm and the endoscope-holding arm is fixed by the preset distance, so the end pose of the instrument-holding arm can be determined from the position of the first hole site. On the basis of this end pose, an inverse kinematics solution is performed for the instrument-holding arm. Specifically, the inverse kinematics solution here refers to a solution that satisfies, during the motion of the instrument-holding arm, the maximum constraint on the spacing between the arms' telecentric mechanisms and the preset distance between the instrument-holding arm and the endoscope-holding arm. If the inverse kinematics has a solution, a suitable target joint angle of the instrument-holding arm can be calculated, from which the spacing of the arm's telecentric mechanism can be obtained. With a solution available, the line connecting the end pose of the instrument-holding arm and the lesion area is drawn, and the point at which this line contacts the patient's body surface is the hole site of the instrument-holding arm.
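As an illustration of the final step, a minimal sketch of intersecting the connecting line with the body surface follows. It assumes the body surface is locally approximated by a plane and replaces the arm's actual inverse kinematics with a stub feasibility check; both are assumptions for the example, not the system's real patient model or kinematics.

```python
import numpy as np

def second_hole_site(arm_tip, lesion_point, surface_point, surface_normal):
    """Intersect the line from the instrument-arm end position to the lesion
    point with the body surface (approximated here as a plane) to obtain the
    second hole site. Returns None if the line is parallel to the surface."""
    d = lesion_point - arm_tip                       # direction of the connecting line
    denom = np.dot(surface_normal, d)
    if abs(denom) < 1e-9:
        return None
    s = np.dot(surface_normal, surface_point - arm_tip) / denom
    return arm_tip + s * d

def ik_has_solution(arm_tip):
    """Stub for the inverse-kinematics feasibility check of the
    instrument-holding arm; a real system would solve the arm's IK here."""
    return True

arm_tip = np.array([0.0, 0.0, 0.4])                  # assumed end position, metres
lesion = np.array([0.0, 0.0, -0.1])                  # lesion point inside the body
port = None
if ik_has_solution(arm_tip):
    port = second_hole_site(arm_tip, lesion,
                            surface_point=np.array([0.0, 0.0, 0.0]),
                            surface_normal=np.array([0.0, 0.0, 1.0]))
```

With the assumed geometry the connecting line pierces the plane z = 0 at the origin, which plays the role of the contact point with the body surface.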
In one embodiment, determining the end pose of the instrument-holding arm according to the preset distance, the position of the first hole site, and the lesion area includes determining the axis of the instrument-holding arm sleeve according to the preset distance, the position of the first hole site, and the lesion area; selecting, for the rotational degree of freedom of the sleeve about the sleeve axis, the mid-stroke position of the joint; and determining the end pose of the instrument-holding arm according to the mid-stroke position of the joint.
With continued reference to fig. 5, if the pre-operative positioning step can ensure that the axis of the instrument-holding arm sleeve (the trocar sleeve) points at the lesion point, then the end instrument also points at the lesion point, and the process of adjusting the pose of the instrument-holding arm's telecentric mechanism can be simplified. Under this concept, in this embodiment, after the preset distance between the instrument-holding arm and the endoscope-holding arm, the hole site of the endoscope-holding arm, and the axis of the instrument-holding arm sleeve are determined, the mid-stroke position of the joint is selected for the degree of freedom of rotation about the sleeve axis, and the end pose of the instrument-holding arm can thereby be determined.
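The mid-stroke selection for the remaining rotational degree of freedom is simply the midpoint of the joint's travel range; the joint limits below are assumed values for illustration only.

```python
def mid_stroke(q_min, q_max):
    """Mid-stroke position of a joint: the midpoint of its travel range.
    Choosing the rotation about the sleeve axis at mid-stroke leaves the
    largest margin to both joint limits."""
    return 0.5 * (q_min + q_max)

# Illustrative limits (radians) for the sleeve-axis rotation joint.
q_sleeve = mid_stroke(-1.5, 1.5)
```

Fixing this last degree of freedom at mid-stroke, together with the sleeve axis already pointing at the lesion point, fully determines the end pose of the instrument-holding arm.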
In one embodiment, the number of the preset distances is a plurality, the position of the second hole site is obtained according to the preset distances and the positions of the first hole site, the position of the plurality of initial second hole sites is determined according to the preset distances and the positions of the first hole site, the arm distance between the lens holding arm and the mechanical holding arm in the plurality of initial second hole sites is calculated, and the hole site with the largest arm distance between the lens holding arm and the mechanical holding arm is determined to be the second hole site.
The preset distance between the mechanical holding arm and the lens holding arm can be set to several values according to the actual situation. When there are several preset distances, the positions of a plurality of candidate second hole sites can be determined from the preset distances and the position of the first hole site, yielding the positions of a plurality of initial second hole sites. From these, the hole site with the largest arm spacing between the lens holding arm and the mechanical holding arm must be selected as the final second hole site. Specifically, the maximum arm spacing between the lens holding arm and the mechanical holding arm means that the possibility of collision between the two arms is smallest, so the hole site thus determined is the optimal second hole site. Furthermore, the preset distances can be generated iteratively at equal steps: over k iterations of a cyclic search, with Δp as the step increment of the preset distance, the hole site corresponding to the maximum arm spacing between the lens holding arm and the mechanical holding arm is selected as the second hole site.
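The equal-step search just described can be sketched as a simple loop. The placement rule (offsetting the candidate along one axis by the preset distance) and the spacing function are illustrative assumptions, not the application's actual geometry:

```python
# Hypothetical sketch of the k-iteration search: candidate preset distances
# d0, d0 + dp, d0 + 2*dp, ... each generate a candidate second hole site,
# and the candidate with the largest arm spacing wins.

def best_second_hole(first_hole, d0, dp, k, arm_spacing_of):
    best, best_spacing = None, float("-inf")
    for i in range(k):
        d = d0 + i * dp                        # preset distance for this pass
        candidate = (first_hole[0] + d, first_hole[1])  # assumed placement
        s = arm_spacing_of(candidate)          # spacing between the two arms
        if s > best_spacing:
            best_spacing, best = s, candidate
    return best, best_spacing
```

Here `arm_spacing_of` stands in for the inverse-kinematics-based spacing computation described in the following paragraphs.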
In one embodiment, calculating the arm spacing between the lens holding arm and the mechanical holding arm for the plurality of initial second hole sites includes: carrying out an inverse kinematics solution on the mechanical holding arm according to the end pose of the mechanical holding arm to obtain the target pose of the mechanical holding arm; and calculating the arm spacing between the lens holding arm and the mechanical holding arm for the plurality of initial second hole sites according to the target pose of the mechanical holding arm and the pose of the lens holding arm.
An inverse kinematics solution is carried out on the mechanical holding arm based on its end pose, and the target joint angle of the mechanical holding arm is calculated through inverse kinematics to obtain the target pose of the mechanical holding arm. To then obtain the arm spacing between the lens holding arm and the mechanical holding arm, the arm spacing at each of the plurality of initial second hole sites is calculated from the target pose of the mechanical holding arm, the pose of the lens holding arm and the different initial second hole sites.
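One simple proxy for the arm-spacing computation (an assumption for illustration, not the application's exact metric) is the distance from one arm's telecentric mechanism, treated as a point, to the other arm's link, treated as a planar segment:

```python
import math

# Illustrative spacing proxy: distance from a point (one arm's telecentric
# mechanism) to a 2-D segment (the other arm's link between joints a and b).

def point_segment_distance(p, a, b):
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    if denom == 0:                  # degenerate segment: a and b coincide
        t = 0.0
    else:                           # clamp the projection of p onto [a, b]
        t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)
```

A full implementation would compute the minimum distance between the two arms' link chains in three dimensions, but the clamp-and-project idea is the same.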
To further explain in detail the technical principle of the surgical robot system and its working process, the determination of the preset punching position is described below.
In one embodiment, the preset perforation position is obtained by: determining an operation area according to the position information of the focus area; determining a candidate perforation area according to the position and size of the operation area and the structural parameters of the mechanical arms in the surgical robot; screening a feasible perforation area from the candidate perforation area according to the first image information; and determining the preset perforation position in the feasible perforation area according to the structural parameters of the mechanical arms in the surgical robot.
The focal region position information is the specific position of the focal point in the patient; for example, the focal point may be the gallbladder or stomach in the patient's abdominal cavity. From the specific location of the focal point, the general area of the procedure, i.e. the approximate location and size of the surgical area, can be determined. The candidate perforation area is then determined from that approximate location and size, combined with the length of the mechanical arms in the surgical robot and the spacing between them. The areas where punching is possible, i.e. the feasible perforation areas, are screened out of the candidate perforation area, and the preset punch position is determined in the feasible area based on the arm lengths and the spacing between the arms. The mechanical arms comprise a lens holding arm for holding the endoscope and mechanical holding arms for holding the surgical instruments. Typically, the number of lens holding arms is one, and the number of mechanical holding arms is three. Through the processing flow of this embodiment, the accuracy of the preset punching position can be improved, thereby reducing the operational risk.
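A hedged sketch of this screening step: a body-surface grid point is kept as a candidate if the surgical area centre lies within the arm's reach, and the kept points are thinned so any two of them respect a minimum inter-arm spacing. The grid, reach and gap values are illustrative assumptions:

```python
import math

# Sketch of candidate-area screening from arm length and arm spacing.
# grid            : candidate points on the body surface (2-D for brevity)
# surgical_center : centre of the surgical area
# arm_length      : reach of a mechanical arm (assumed scalar)
# min_arm_gap     : minimum spacing to keep between chosen candidates

def candidate_punch_area(grid, surgical_center, arm_length, min_arm_gap):
    reachable = [p for p in grid if math.dist(p, surgical_center) <= arm_length]
    picked = []
    for p in reachable:
        if all(math.dist(p, q) >= min_arm_gap for q in picked):
            picked.append(p)
    return picked
```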
In one embodiment, selecting a feasible puncturing area from the candidate puncturing area according to the first image information includes:
Identifying a risk perforation location based on the first image information, wherein the risk perforation location includes a perforation location through a bone, a blood vessel, or a nerve; and eliminating the risk punching positions from the candidate punching areas, and screening out feasible punching areas.
Since structures such as blood vessels, nerves and bones exist in the human body, punching at positions containing them carries surgical risk, and such positions must therefore be excluded. In this embodiment, the positions containing blood vessels, nerves or bones are identified from the first image information and removed, yielding the feasible perforation area. The risk of perforation is thus reduced, and the reliability and safety of the operation are ensured.
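The exclusion step reduces to a filter once each candidate point carries a tissue label derived from the first image data (the label names here are assumptions):

```python
# Minimal sketch: remove candidate punch points whose tissue label marks
# bone, blood vessel or nerve, leaving the feasible perforation area.

RISK_LABELS = {"bone", "vessel", "nerve"}

def feasible_punch_area(candidates, labels):
    """Keep only candidates whose label is not a risk structure."""
    return [p for p, lab in zip(candidates, labels) if lab not in RISK_LABELS]
```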
In one embodiment, determining the preset punch position of the lesion area in the viable punch area based on the structural parameters of the mechanical arms comprises:
obtaining the length parameter of the lens holding arm based on the structural parameters of the mechanical arms; screening, within the feasible punching area, a feasible punching area for the lens holding arm according to the length parameter of the lens holding arm and a preset depth-of-field constraint; selecting a punching point of the lens holding arm and punching points of the mechanical holding arms within the feasible punching area of the lens holding arm; and collecting the punching point of the lens holding arm and the punching points of the mechanical holding arms to obtain the preset punching position of the focus area.
Since the accuracy requirement of the punching position of the lens holding arm differs from that of the mechanical holding arm, the punching position of the lens holding arm can be determined first. In the present application, when the endoscope is placed in the abdominal cavity, objects within the depth of field can be imaged clearly. The punching precision of the lens holding arm is thus ensured, which in turn ensures that a clear view of the lesion position can be obtained through the endoscope during the operation, so that the operation proceeds smoothly. Further, the point closest to the lesion area within the feasible perforation area of the lens holding arm may be selected as the perforation point of the lens holding arm (the endoscope sleeve). By selecting the point closest to the focus area as the punching point, the lesion can be well observed after the endoscope is placed into the abdominal cavity.
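The selection rule above can be sketched under the assumption that the depth of field is given as a near/far distance band around the focal point: keep only feasible points whose distance to the focal point falls inside the band, then take the one closest to the lesion.

```python
import math

# Hedged sketch: pick the lens-holding-arm punch point as the feasible
# point closest to the focal point, subject to the (assumed) depth-of-field
# band [dof_near, dof_far] on the distance to the focal point.

def lens_arm_punch_point(feasible, focal, dof_near, dof_far):
    in_dof = [p for p in feasible if dof_near <= math.dist(p, focal) <= dof_far]
    if not in_dof:
        return None                 # no point satisfies the constraint
    return min(in_dof, key=lambda p: math.dist(p, focal))
```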
In one embodiment, as shown in fig. 6, the present application further provides another method for determining a punching position of a manipulator, which specifically includes the following steps:
First image data of the focus area is acquired, and the operation area is determined according to the position information of the focus area. A candidate punching area is determined according to the position and size of the operation area and the structural parameters of the mechanical arms in the surgical robot. Risk perforation locations are identified from the first image information, where a risk perforation location is one passing through bone, a blood vessel or a nerve. These risk positions are removed from the candidate punching area, yielding the feasible punching area. The length parameter of the lens holding arm is obtained from the structural parameters of the mechanical arms in the surgical robot, and the feasible punching area of the lens holding arm is screened according to this length parameter and the preset depth-of-field constraint. The punching point of the lens holding arm and the punching points of the mechanical holding arms are selected within the feasible punching area of the lens holding arm and collected to obtain the preset punching position. According to the converted position information of the focus area and the preset punching position within it, the punching position in the coordinate system of the vision equipment is obtained and pushed to the vision equipment.
Based on the same inventive concept, as shown in fig. 7, the embodiment of the application further provides a surgical robot punching guiding method, which comprises the following steps:
S200, acquiring first image data of a focus area;
S400, establishing a three-dimensional model of the focus area according to the first image data;
S600, acquiring second image data of a working environment of a scanning device, wherein the scanning device is used for scanning and generating first image data of a focus area;
S800, determining the position of a first hole site under a coordinate system of the vision equipment according to the three-dimensional model of the focus area and the second image data, wherein the first hole site is the hole site of the lens holding arm.
In one embodiment, determining the position of the first hole site in the vision equipment coordinate system according to the three-dimensional model of the lesion area and the second image data includes:
The method comprises: obtaining a first pose relationship between the scanning equipment and the focus area; obtaining a second pose relationship between the scanning equipment and the vision equipment; obtaining the position information of the three-dimensional model of the focus area in the vision equipment coordinate system according to the first pose relationship and the second pose relationship; and determining the position of the first hole site in the vision equipment coordinate system according to that position information.
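The chaining of the two pose relationships amounts to composing homogeneous transforms: treating the first pose relationship as a scan-from-lesion transform and the second as a vision-from-scan transform, their product maps lesion-model points into the vision equipment coordinate system. The sketch below uses pure translations for brevity; real poses would also carry rotations:

```python
# Plain-Python sketch of coordinate chaining with 4x4 homogeneous
# transforms (translations only, as an illustration).

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(T, p):
    """Apply a homogeneous transform to a 3-D point."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# vision <- scan composed with scan <- lesion gives vision <- lesion:
T_vision_lesion = mat_mul(translation(1, 2, 3), translation(10, 0, 0))
```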
In one embodiment, the surgical robot punching guiding method further comprises: obtaining a preset distance between the first hole site and a second hole site, wherein the second hole site is the hole site of the mechanical holding arm; determining the end pose of the mechanical holding arm according to the preset distance and the focus area; carrying out an inverse kinematics solution on the mechanical holding arm according to its end pose; and, if an inverse kinematics solution exists, determining as the position of the second hole site the contact point between the body surface and the line connecting the end of the mechanical holding arm with the focus area.
The solution provided by the surgical robot perforation guidance method in the above embodiment is implemented in a similar way to that described for the surgical robot system, so for the specific limitations of the one or more perforation guidance methods provided above, reference may be made to the limitations of the surgical robot system, which are not repeated here.
Based on the same inventive concept, as shown in fig. 8, an embodiment of the present application further provides a surgical robot punching guiding apparatus, where the whole apparatus specifically includes a scanning assembly 820, a vision assembly 840, and a control assembly 860;
the scanning assembly 820 generates first image data of the lesion area and sends it to the control assembly 860. The control assembly 860 generates the position of the first hole site in the coordinate system of the vision assembly 840 using the surgical robot punch guidance method described above.
In one embodiment, as shown in fig. 8 and 9, the scanning component 820 includes an ultrasonic probe 822, a visual target is disposed on the ultrasonic probe 822, the visual target emits a cursor signal to irradiate a focal region, and the visual component 840 receives the cursor signal on the focal region and sends position information of the cursor signal to the control component.
The visual target is a structure capable of emitting a cursor signal; specifically, the optical signal may be an infrared cursor or light emitted by a light-emitting diode. In the present embodiment, the scanning assembly 820 is described taking an ultrasound apparatus as an example, and the focal region includes the focal position in the abdominal cavity and the patient's body surface corresponding to that position. The cursor signal emitted by the visual target irradiates the body surface corresponding to the focal position of the scanned object. Specifically, when the patient is scanned by the ultrasonic device, the probe of the ultrasonic device abuts against the patient's body surface, and the cursor signal emitted from the visual target on the probe then irradiates the body surface directly. Since a depth camera can detect three-dimensional information of the environment, it can receive the cursor signal and transmit the position information of the received cursor signal to the processing component.
In this embodiment, by setting a visual target capable of sending out a cursor signal on the ultrasonic probe 822, the cursor signal is received by the visual component 840, and the position information of the cursor signal can be sent to the processing component, so as to obtain the relative pose relationship between the scanning component 820 and the visual component 840. The pose relationship between the ultrasound probe 822 and the focal region can be obtained by the ultrasound probe 822 and the focal region detected by the ultrasound probe 822.
In one embodiment, the control assembly is further configured to control the scanning assembly 820 to emit a cursor signal onto the focal region of the scanned object.
The cursor signal refers to a visible-light identification signal; for example, it may be a red bright spot or visible light with a certain pattern. Irradiating the cursor signal onto the focal region of the scanned object means that the cursor signal emitted by the scanning assembly 820 is irradiated onto the body surface corresponding to the focal region. For example, when scanning a patient's abdomen, a red spot or a cursor with a specific pattern emitted from the visual target on the ultrasonic probe 822 can be irradiated onto the body surface corresponding to the patient's abdomen.
The vision component 840 is controlled to continuously track the cursor signal over the focal region such that the cursor signal is within the field of view of the vision component 840. Controlling the vision assembly 840 to continuously track the cursor signal on the focal region means that the cursor signal is always within the field of view of the vision assembly 840 by controlling the field of view of the vision assembly 840. For example, the field of view of the vision assembly 840 may be adjusted by controlling the vision assembly 840 to move or rotate such that the cursor signal is always within the field of view of the vision assembly 840.
Only when the cursor signal is always within the field of view of the vision component 840, that is, within the field of view of the depth camera, can the vision component 840 reliably detect the specific position of the ultrasonic probe 822 on the body surface, so that the processing component can always obtain the pose relationship between the ultrasonic probe 822 and the vision component 840. The pose relationship between the vision component 840 and the focal region is then derived from the pose relationship between the ultrasound probe 822 and the focal region. Therefore, in the present embodiment, the processing component controls the vision component 840 to continuously track the cursor signal on the focal region so that the cursor signal stays within its field of view; the processing component can thus always obtain the pose relationship between the vision component 840 and the focal region, ensuring smooth punch guidance and the operational reliability of the surgical robot system.
In one embodiment, as shown in fig. 9, the vision component 840 includes an image pickup element 842, an angle adjusting element 844 and a mixed reality element 846. The image pickup element 842 receives the cursor signal on the focal region and sends its position information to the processing component. The angle adjusting element 844 is connected to the image pickup element 842 and is in communication with the processing component, which controls the angle adjusting element 844 to adjust the image tracking angle of the image pickup element 842. The mixed reality element 846 is in communication with the processing component and displays the punching position, in the coordinate system of the vision component 840, pushed by the processing component.
It should be noted that the image pickup element 842 may be the aforementioned depth camera, the angle adjusting element 844 may be the degree-of-freedom cradle head, and the mixed reality element 846 may present the punching position information, in a mixed reality manner, before the eyes of a wearer of the MR helmet. The image pickup element 842, i.e. the depth camera, tracks the cursor signal emitted by the visual target on the ultrasound probe 822 and sends the detected cursor position information to the processing component. Because the image pickup element 842 is connected to the angle adjusting element 844, it changes position as the angle adjusting element 844 rotates, thereby adjusting the angle and position of the field of view. Since the angle adjusting element 844 is communicatively coupled with the processing component, its angle can be controlled by the processing component to adjust the image tracking angle of the image pickup element 842.
Illustratively, the degree-of-freedom cradle head has two degrees of freedom, a pitch degree of freedom and a yaw degree of freedom, respectively. When the doctor wears the MR helmet, the doctor adjusts the head pose such that the cursor signal emitted from the visual target in the ultrasonic probe 822 is located within the depth camera field of view of the MR helmet, which is the initialization process of the vision component 840. After initialization is completed, the degree-of-freedom cradle head adjusting function of the depth camera is started, the depth camera can detect a cursor signal emitted by a visual target, and after the processing component acquires an image of the cursor signal sent by the depth camera, the processing component can identify the position and the gesture of the cursor signal and control the depth camera to track the cursor signal in real time, so that the processing component can always obtain the pose relation between the visual component 840 and the ultrasonic target, and the reliability of punching guide work is guaranteed.
In one embodiment, the processing component is further configured to control the vision component 840 to track a cursor signal on the focal region, and adjust the image tracking angle of the vision component 840 if the cursor signal is not within the field of view of the vision component 840.
The image tracking angle refers to the angle of the field of view of the vision assembly 840. For example, when the vision component 840 is an MR helmet, a depth camera on the MR helmet is used to detect the cursor signal, and when the cursor signal is outside the lens of the depth camera, the cursor signal can be located within the field of view of the vision component 840 by adjusting the angle of the depth camera, or adjusting the angle of the MR helmet.
A prompt message indicating that the field of view has been exceeded is pushed if the image tracking angle exceeds the preset angle limit value.
The preset angle limit value refers to a preset maximum value or minimum value for adjusting the image tracking angle, and when the value is exceeded, the image tracking angle cannot be continuously adjusted. The out-of-view indication message is an indication signal sent to the user so that the user can adjust the image tracking angle conveniently. For example, when the degree of freedom pan-tilt of the depth camera moves to the limit of the travel, the degree of freedom pan-tilt cannot continue to move, and the doctor needs to be reminded to adjust the angle of the MR helmet, so that the cursor signal is located in the lens of the depth camera.
It can be appreciated that the vision component 840 tracks the cursor signal in real time and sends the acquired image information to the processing component, which identifies whether a cursor signal is present. When a cursor signal is present and lies in the center area of the field of view, the processing component only needs to control the vision component 840 to keep tracking it. When the processing component recognizes a cursor signal in the image information that is not in the central region of the field of view, it calculates the deviation between the current cursor signal and the boundary of the central region and controls the angle adjusting element 844 to move, driving the image pickup element 842 and thereby adjusting its image tracking angle until the cursor signal lies in the central region. If, while the processing component commands the angle adjusting element 844 to move, the element reaches its travel limit and cannot move further, the image tracking angle has exceeded the preset angle limit value and can no longer be adjusted. The processing component then pushes an out-of-view prompt message to the vision component 840 to remind the doctor to adjust the orientation of the head-worn vision component 840, thereby adjusting its field of view so that the cursor signal returns to the center region.
Specifically, the adjustment direction may be determined from where the cursor signal crosses the boundary of the center region, and the adjustment distance of the angle adjusting element 844 may be determined from the distance of the cursor signal from the center of the field of view.
Illustratively, when the cursor signal lies at the left boundary of the center region, the angle adjusting element must be moved to the left to bring the cursor signal within the center region of the field of view. If the angle adjusting element cannot move leftwards, the image tracking angle exceeds the preset angle limit value; the processing component then pushes a prompt message to the vision component, i.e. the MR helmet, so that the doctor adjusts the angle of his or her head and the cursor signal returns to the central area of the field of view. In this way the vision component can be effectively guaranteed to track the cursor signal at all times, ensuring that the punch guidance work proceeds smoothly.
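One tracking step of this logic can be sketched for a single (pan) axis. The gain, box size and travel limits below are illustrative assumptions, not values from the application; the function returns the new gimbal angle and whether the out-of-view prompt should fire:

```python
# Hedged sketch of one cursor-tracking step on a single pan axis.
# cursor_x / center_x : horizontal pixel positions in the camera image
# half_box            : half-width of the central "no adjustment" box
# pan / pan_limits    : current gimbal angle and its (lo, hi) travel limits

def track_cursor(cursor_x, center_x, half_box, pan, pan_limits, gain=0.01):
    dx = cursor_x - center_x
    if abs(dx) <= half_box:
        return pan, False                       # cursor already centred
    new_pan = pan + gain * dx                   # move toward the cursor
    lo, hi = pan_limits
    if new_pan < lo or new_pan > hi:
        return max(lo, min(hi, new_pan)), True  # clamped: prompt the doctor
    return new_pan, False
```

A full controller would run this for both the pitch and yaw degrees of freedom of the cradle head.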
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, implements the steps of the above method.
In an embodiment, a computer program product is provided comprising a computer program which, when executed by a processor, implements the steps of the above method.
The foregoing examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.