WO2024157113A1 - Surgical robotic system and method for assisted access port placement
- Publication number: WO2024157113A1
- Application: PCT/IB2024/050408
- Authority: WIPO (PCT)
Classifications
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B34/25—User interfaces for surgical systems
- A61B34/30—Surgical robots
- A61B34/37—Leader-follower robots
- A61B2017/00119—Electrical control of surgical instruments with audible or visual output alarm; indicating an abnormal situation
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
- A61B2034/107—Visualisation of planned trajectories or target regions
- A61B2034/2048—Tracking techniques using an accelerometer or inertia sensor
- A61B2034/2059—Tracking techniques using mechanical position encoders
- A61B2034/252—User interfaces for surgical systems indicating steps of a surgical procedure
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body, augmented reality, i.e. correlating a live optical image with another image
- A61B2090/502—Supports for surgical instruments: headgear, e.g. helmet, spectacles
Definitions
- FIG. 1 is a schematic illustration of a surgical robotic system including a control tower, a console, and one or more surgical robotic arms each disposed on a movable cart according to an embodiment of the present disclosure
- FIG. 2 is a perspective view of a surgical robotic arm of the surgical robotic system of FIG. 1 according to an embodiment of the present disclosure
- FIG. 3 is a perspective view of a movable cart having a setup arm with the surgical robotic arm of the surgical robotic system of FIG. 1 according to an embodiment of the present disclosure
- FIG. 4 is a schematic diagram of a computer architecture of the surgical robotic system of FIG. 1 according to an embodiment of the present disclosure
- FIG. 5 is a plan schematic view of movable carts of FIG. 1 positioned about a surgical table according to an embodiment of the present disclosure
- FIG. 6 is a flowchart of a method for assisted port placement according to an embodiment of the present disclosure
- FIG. 7 is a computed tomography image according to an embodiment of the present disclosure.
- FIG. 8 shows a white light (i.e., RGB) still image and a counterpart depth map image of the surgical robotic system of FIG. 1 and a patient according to an embodiment of the present disclosure
- FIG. 9 is a schematic flow chart of pose estimation of the patient according to an embodiment of the present disclosure.
- FIG. 10 is a computer-generated 3D model of a patient including a skeleton model according to an embodiment of the present disclosure
- FIG. 11 is an image of a graphical user interface (GUI) including the 3D model of a patient and suggested access port locations according to an embodiment of the present disclosure
- FIG. 12 is the image of the GUI including port locations for the instrument and the camera according to an embodiment of the present disclosure.
- FIG. 13 is the image of the GUI including preview windows of the camera at different orientations according to an embodiment of the present disclosure.
- The present disclosure provides a surgical robotic system, which includes a surgeon console, a control tower, and one or more movable carts having a surgical robotic arm coupled to a setup arm.
- the surgeon console receives user input through one or more interface devices.
- the input is processed by the control tower as movement commands for moving the surgical robotic arm and an instrument and/or camera coupled thereto.
- the surgeon console enables teleoperation of the surgical arms and attached instruments/camera.
- the surgical robotic arm includes a controller, which is configured to process the movement commands to control one or more actuators of the robotic arm, which would, in turn, move the robotic arm and the instrument in response to the movement commands.
- a surgical robotic system 10 includes a control tower 20, which is connected to all of the components of the surgical robotic system 10 including a surgeon console 30 and one or more movable carts 60.
- Each of the movable carts 60 includes a robotic arm 40 having a surgical instrument 50 coupled thereto.
- the robotic arms 40 also couple to the movable carts 60.
- the robotic system 10 may include any number of movable carts 60 and/or robotic arms 40.
- the surgical instrument 50 is configured for use during minimally invasive surgical procedures.
- the surgical instrument 50 may be configured for open surgical procedures.
- the surgical instrument 50 may be an electrosurgical forceps configured to seal tissue by compressing tissue between jaw members and applying electrosurgical current thereto.
- the surgical instrument 50 may be a surgical stapler including a pair of jaws configured to grasp and clamp tissue while deploying a plurality of tissue fasteners, e.g., staples, and cutting stapled tissue.
- the surgical instrument 50 may be a surgical clip applier including a pair of jaws configured to apply a surgical clip onto tissue.
- One of the robotic arms 40 may include an endoscopic camera 51 configured to capture video of the surgical site.
- the endoscopic camera 51 may be a stereoscopic endoscope configured to capture two side-by-side (i.e., left and right) images of the surgical site to produce a video stream of the surgical scene.
- the endoscopic camera 51 is coupled to a video processing device 56, which may be disposed within the control tower 20.
- the video processing device 56 may be any computing device as described below configured to receive the video feed from the endoscopic camera 51 and output the processed video stream.
- the surgeon console 30 includes a first display 32, which displays a video feed of the surgical site provided by camera 51 disposed on the robotic arm 40, and a second display 34, which displays a user interface for controlling the surgical robotic system 10.
- the first display 32 and second display 34 may be touchscreens allowing for displaying various graphical user inputs.
- the surgeon console 30 also includes a plurality of user interface devices, such as foot pedals 36 and a pair of handle controllers 38a and 38b which are used by a user to remotely control robotic arms 40.
- the surgeon console further includes an armrest 33 used to support clinician’s arms while operating the handle controllers 38a and 38b.
- The control tower 20 includes a display 23, which may be a touchscreen, and outputs graphical user interfaces (GUIs).
- the control tower 20 also acts as an interface between the surgeon console 30 and one or more robotic arms 40.
- the control tower 20 is configured to control the robotic arms 40, such as to move the robotic arms 40 and the corresponding surgical instrument 50, based on a set of programmable instructions and/or input commands from the surgeon console 30, in such a way that robotic arms 40 and the surgical instrument 50 execute a desired movement sequence in response to input from the foot pedals 36 and the handle controllers 38a and 38b.
- The foot pedals 36 may be used to enable and lock the hand controllers 38a and 38b, reposition the camera, and control electrosurgical activation/deactivation.
- the foot pedals 36 may be used to perform a clutching action on the hand controllers 38a and 38b. Clutching is initiated by pressing one of the foot pedals 36, which disconnects (i.e., prevents movement inputs) the hand controllers 38a and/or 38b from the robotic arm 40 and corresponding instrument 50 or camera 51 attached thereto. This allows the user to reposition the hand controllers 38a and 38b without moving the robotic arm(s) 40 and the instrument 50 and/or camera 51. This is useful when reaching control boundaries of the surgical space.
- Each of the control tower 20, the surgeon console 30, and the robotic arm 40 includes a respective computer 21, 31, 41.
- the computers 21, 31, 41 are interconnected to each other using any suitable communication network based on wired or wireless communication protocols.
- Suitable protocols include, but are not limited to, transmission control protocol/internet protocol (TCP/IP), user datagram protocol/internet protocol (UDP/IP), and/or datagram congestion control protocol (DCCP).
- Wireless communication may be achieved via one or more wireless configurations, e.g., radio frequency, optical, Wi-Fi, Bluetooth (an open wireless protocol for exchanging data over short distances using short-wavelength radio waves from fixed and mobile devices, creating personal area networks (PANs)), and ZigBee® (a specification for a suite of high-level communication protocols using small, low-power digital radios based on the IEEE 802.15.4 standard for wireless personal area networks (WPANs)).
- the computers 21, 31, 41 may include any suitable processor (not shown) operably connected to a memory (not shown), which may include one or more of volatile, non-volatile, magnetic, optical, or electrical media, such as read-only memory (ROM), random access memory (RAM), electrically erasable programmable ROM (EEPROM), non-volatile RAM (NVRAM), or flash memory.
- the processor may be any suitable processor (e.g., control circuit) adapted to perform the operations, calculations, and/or set of instructions described in the present disclosure including, but not limited to, a hardware processor, a field programmable gate array (FPGA), a digital signal processor (DSP), a central processing unit (CPU), a microprocessor, and combinations thereof.
- each of the robotic arms 40 may include a plurality of links 42a, 42b, 42c, which are interconnected at joints 44a, 44b, 44c, respectively.
- the joint 44a is configured to secure the robotic arm 40 to the movable cart 60 and defines a first longitudinal axis.
- the movable cart 60 includes a lift 67 and a setup arm 61, which provides a base for mounting of the robotic arm 40.
- the lift 67 allows for vertical movement of the setup arm 61.
- the movable cart 60 also includes a display 69 for displaying information pertaining to the robotic arm 40.
- the robotic arm 40 may include any type and/or number of joints.
- the setup arm 61 includes a first link 62a, a second link 62b, and a third link 62c, which provide for lateral maneuverability of the robotic arm 40.
- the links 62a, 62b, 62c are interconnected at joints 63a and 63b, each of which may include an actuator (not shown) for rotating the links 62a and 62b relative to each other and the link 62c.
- the links 62a, 62b, 62c are movable in their corresponding lateral planes that are parallel to each other, thereby allowing for extension of the robotic arm 40 relative to the patient (e.g., surgical table).
- the robotic arm 40 may be coupled to the surgical table (not shown).
- the setup arm 61 includes controls 65 for adjusting movement of the links 62a, 62b, 62c as well as the lift 67.
- the setup arm 61 may include any type and/or number of joints.
- the third link 62c may include a rotatable base 64 having two degrees of freedom.
- the rotatable base 64 includes a first actuator 64a and a second actuator 64b.
- the first actuator 64a is rotatable about a first stationary arm axis which is perpendicular to a plane defined by the third link 62c and the second actuator 64b is rotatable about a second stationary arm axis which is transverse to the first stationary arm axis.
- the first and second actuators 64a and 64b allow for full three-dimensional orientation of the robotic arm 40.
- the actuator 48b of the joint 44b is coupled to the joint 44c via the belt 45a, and the joint 44c is in turn coupled to the joint 46b via the belt 45b.
- Joint 44c may include a transfer case coupling the belts 45a and 45b, such that the actuator 48b is configured to rotate each of the links 42b, 42c and a holder 46 relative to each other. More specifically, links 42b, 42c, and the holder 46 are passively coupled to the actuator 48b which enforces rotation about a pivot point “P” which lies at an intersection of the first axis defined by the link 42a and the second axis defined by the holder 46. In other words, the pivot point “P” is a remote center of motion (RCM) for the robotic arm 40.
- The actuator 48b controls the angle θ between the first and second axes, allowing for orientation of the surgical instrument 50. Due to the interlinking of the links 42a, 42b, 42c, and the holder 46 via the belts 45a and 45b, the angles between the links 42a, 42b, 42c, and the holder 46 are also adjusted in order to achieve the desired angle θ. In embodiments, some or all of the joints 44a, 44b, 44c may include an actuator to obviate the need for mechanical linkages.
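A minimal numeric sketch of this constraint, offered as an illustration only: rotating the holder axis by θ about the pivot point "P" keeps the instrument passing through the insertion point. The fixed rotation axis and the 10 cm insertion depth below are placeholder assumptions, not values from the disclosure.

```python
import numpy as np

def instrument_axis_through_rcm(pivot, base_axis, theta):
    """Rotate the instrument axis by theta about the remote center of motion.

    pivot: 3-vector, the pivot point "P" (the insertion point).
    base_axis: unit 3-vector along the first longitudinal axis (link 42a).
    theta: commanded angle between the first and second axes (radians).
    The rotation axis is fixed to the base frame's y-axis for illustration;
    a real arm derives it from the joint configuration.
    """
    c, s = np.cos(theta), np.sin(theta)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])
    holder_axis = rot_y @ np.asarray(base_axis, dtype=float)
    # Every point along the holder axis stays on a line through the RCM.
    tool_tip = np.asarray(pivot, dtype=float) + 0.10 * holder_axis
    return holder_axis, tool_tip

axis, tip = instrument_axis_through_rcm(
    pivot=np.zeros(3), base_axis=np.array([0.0, 0.0, -1.0]),
    theta=np.deg2rad(30))
```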
- the joints 44a and 44b include actuators 48a and 48b configured to drive the joints 44a, 44b, 44c relative to each other through a series of belts 45a and 45b or other mechanical linkages, such as a drive rod, a cable, or a lever, and the like.
- the actuator 48a is configured to rotate the robotic arm 40 about a longitudinal axis defined by the link 42a.
- the holder 46 defines a second longitudinal axis and is configured to receive an instrument drive unit (IDU) 52 (FIG. 1).
- the IDU 52 is configured to couple to an actuation mechanism of the surgical instrument 50 and the camera 51 and is configured to move (e.g., rotate) and actuate the instrument 50 and/or the camera 51.
- The IDU 52 transfers actuation forces from its actuators to the surgical instrument 50 to actuate components of an end effector of the surgical instrument 50.
- the holder 46 includes a sliding mechanism 46a, which is configured to move the IDU 52 along the second longitudinal axis defined by the holder 46.
- the holder 46 also includes a joint 46b, which rotates the holder 46 relative to the link 42c.
- the instrument 50 may be inserted through an endoscopic access port 55 (FIG. 3) held by the holder 46.
- the holder 46 also includes a port latch 46c for securing the access port 55 to the holder 46 (FIG. 2).
- the IDU 52 is attached to the holder 46, followed by a sterile interface module (SIM) 43 being attached to a distal portion of the IDU 52.
- the SIM 43 is configured to secure a sterile drape (not shown) to the IDU 52.
- the instrument 50 is then attached to the SIM 43.
- the instrument 50 is then inserted through the access port 55 by moving the IDU 52 along the holder 46.
- the SIM 43 includes a plurality of drive shafts configured to transmit rotation of individual motors of the IDU 52 to the instrument 50 thereby actuating the instrument 50.
- the SIM 43 provides a sterile barrier between the instrument 50 and the other components of robotic arm 40, including the IDU 52.
- the robotic arm 40 also includes a plurality of manual override buttons 53 (FIG. 1) disposed on the IDU 52 and the setup arm 61, which may be used in a manual mode. The user may press one or more of the buttons 53 to move the component associated with the button 53.
- each of the computers 21, 31, 41 of the surgical robotic system 10 may include a plurality of controllers, which may be embodied in hardware and/or software.
- the computer 21 of the control tower 20 includes a controller 21a and safety observer 21b.
- the controller 21a receives data from the computer 31 of the surgeon console 30 about the current position and/or orientation of the handle controllers 38a and 38b and the state of the foot pedals 36 and other buttons.
- the controller 21a processes these input positions to determine desired drive commands for each joint of the robotic arm 40 and/or the IDU 52 and communicates these to the computer 41 of the robotic arm 40.
- the controller 21a also receives the actual joint angles measured by encoders of the actuators 48a and 48b and uses this information to determine force feedback commands that are transmitted back to the computer 31 of the surgeon console 30 to provide haptic feedback through the handle controllers 38a and 38b.
- the safety observer 21b performs validity checks on the data going into and out of the controller 21a and notifies a system fault handler if errors in the data transmission are detected to place the computer 21 and/or the surgical robotic system 10 into a safe state.
- the computer 41 includes a plurality of controllers, namely, a main cart controller 41a, a setup arm controller 41b, a robotic arm controller 41c, and an instrument drive unit (IDU) controller 41d.
- the main cart controller 41a receives and processes joint commands from the controller 21a of the computer 21 and communicates them to the setup arm controller 41b, the robotic arm controller 41c, and the IDU controller 41d.
- the main cart controller 41a also manages instrument exchanges and the overall state of the movable cart 60, the robotic arm 40, and the IDU 52.
- the main cart controller 41a also communicates actual joint angles back to the controller 21a.
- Each of joints 63a and 63b and the rotatable base 64 of the setup arm 61 are passive joints (i.e., no actuators are present therein) allowing for manual adjustment thereof by a user.
- the joints 63a and 63b and the rotatable base 64 include brakes that are disengaged by the user to configure the setup arm 61.
- The setup arm controller 41b monitors slippage of each of the joints 63a and 63b and the rotatable base 64 of the setup arm 61 when the brakes are engaged; when the brakes are disengaged, these joints can be freely moved by the operator but do not impact controls of other joints.
- the robotic arm controller 41c controls each joint 44a and 44b of the robotic arm 40 and calculates desired motor torques required for gravity compensation, friction compensation, and closed loop position control of the robotic arm 40.
- the robotic arm controller 41c calculates a movement command based on the calculated torque.
- the calculated motor commands are then communicated to one or more of the actuators 48a and 48b in the robotic arm 40.
- the actual joint positions are then transmitted by the actuators 48a and 48b back to the robotic arm controller 41c.
- the IDU controller 41d receives desired joint angles for the surgical instrument 50, such as wrist and jaw angles, and computes desired currents for the motors in the IDU 52.
- the IDU controller 41d calculates actual angles based on the motor positions and transmits the actual angles back to the main cart controller 41a.
- the robotic arm 40 is controlled in response to a pose of the handle controller controlling the robotic arm 40, e.g., the handle controller 38a, which is transformed into a desired pose of the robotic arm 40 through a hand eye transform function executed by the controller 21a.
- the hand eye function, as well as other functions described herein, is embodied in software executable by the controller 21a or any other suitable controller described herein.
- the pose of one of the handle controllers 38a may be embodied as a coordinate position and roll-pitch-yaw (RPY) orientation relative to a coordinate reference frame, which is fixed to the surgeon console 30.
- the desired pose of the instrument 50 is relative to a fixed frame on the robotic arm 40.
- the pose of the handle controller 38a is then scaled by a scaling function executed by the controller 21a.
- the coordinate position may be scaled down and the orientation may be scaled up by the scaling function.
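A sketch of such a scaling function follows; the scale factors are illustrative assumptions, since the disclosure gives no numeric values. Translations are scaled down for fine motion while orientation changes may be scaled up.

```python
import numpy as np

def scale_handle_pose(position, rpy, pos_scale=0.25, rot_scale=1.5):
    """Scale a handle-controller pose into a desired instrument pose.

    position: 3-vector in the console coordinate frame.
    rpy: roll-pitch-yaw orientation deltas (radians).
    pos_scale < 1 shrinks hand translations; rot_scale >= 1 amplifies
    orientation. Both values are placeholders, not the system's.
    """
    scaled_pos = pos_scale * np.asarray(position, dtype=float)
    scaled_rpy = rot_scale * np.asarray(rpy, dtype=float)
    return scaled_pos, scaled_rpy
```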
- the controller 21a may also execute a clutching function, which disengages the handle controller 38a from the robotic arm 40.
- the controller 21a stops transmitting movement commands from the handle controller 38a to the robotic arm 40 if certain movement limits or other thresholds are exceeded and in essence acts like a virtual clutch mechanism, e.g., limits mechanical input from effecting mechanical output.
- the desired pose of the robotic arm 40 is based on the pose of the handle controller 38a and is then passed through an inverse kinematics function executed by the controller 21a.
- the inverse kinematics function calculates angles for the joints 44a, 44b, 44c of the robotic arm 40 that achieve the scaled and adjusted pose input by the handle controller 38a.
- the calculated angles are then passed to the robotic arm controller 41c, which includes a joint axis controller having a proportional-derivative (PD) controller, the friction estimator module, the gravity compensator module, and a two-sided saturation block, which is configured to limit the commanded torque of the motors of the joints 44a, 44b, 44c.
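One joint axis of this controller might look like the following sketch, assuming feed-forward gravity and friction terms supplied by a dynamic model computed elsewhere; the gains and limits are placeholders.

```python
import numpy as np

def joint_torque_command(q_des, q, qd, kp, kd,
                         tau_gravity, tau_friction, tau_max):
    """One control step of a PD joint-axis controller with saturation.

    q_des, q: desired and measured joint angles; qd: measured velocity.
    tau_gravity, tau_friction: feed-forward compensation torques,
    assumed to come from the arm's dynamic model (not derived here).
    tau_max: two-sided limit on the commanded motor torque.
    """
    tau = kp * (q_des - q) - kd * qd        # PD position/velocity feedback
    tau += tau_gravity + tau_friction       # gravity and friction compensation
    return np.clip(tau, -tau_max, tau_max)  # two-sided saturation block
```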
- the surgical robotic system 10 is set up around a surgical table 90.
- the system 10 includes movable carts 60a-d, which may be numbered “1” through “4.”
- each of the carts 60a-d is positioned around the surgical table 90.
- Position and orientation of the carts 60a-d depends on a plurality of factors, such as placement of a plurality of access ports 55a-d, which in turn, depends on the surgery being performed.
- the access ports 55a-d are inserted into the patient, and carts 60a-d are positioned to insert instruments 50 and the endoscopic camera 51 into corresponding ports 55a-d.
- each of the robotic arms 40a-d is attached to one of the access ports 55a-d that is inserted into the patient by attaching the latch 46c (FIG. 2) to the access port 55 (FIG. 3).
- the IDU 52 is attached to the holder 46, followed by the SIM 43 being attached to a distal portion of the IDU 52.
- the instrument 50 is attached to the SIM 43.
- the instrument 50 is then calibrated by the robotic arm 40 and is inserted through the access port 55 by moving the IDU 52 along the holder 46.
- a method for assisted port placement may be implemented as software instructions executable by a processor (e.g., controller 21a) and in particular as a software application having a graphical user interface (GUI) for planning a surgical procedure to be performed by the robotic system 10.
- the software receives as input various 3D image and position data pertaining to the patient’s anatomy and generates a 3D virtual model of the patient with suggested placements for the camera 51 and the instruments 50.
- the software application also allows a user to manipulate computer models of the camera 51 and the instrument 50 in the model of the patient to view the endoscopic view of the camera 51.
- preoperative internal imaging is obtained of the patient, which may be performed using any suitable imaging modality such as computed tomography (CT), magnetic resonance imaging (MRI), or any other imaging modality capable of obtaining 3D images.
- the preoperative images are then used to construct an internal tissue and organ model 200 as shown in FIG. 7.
- the internal 3D model 200 may be constructed using any suitable image processing computer.
- an external image 202 is obtained of the patient’s body and a depth map 204 is generated from the image 202 as shown in FIG. 8.
- the image 202 may be obtained using an external vision system 70 (FIG. 5), which may be a passive stereoscopic camera that provides depth imaging and generates the depth map 204.
- the external vision system 70 may be an active stereoscopic infrared (IR) camera having an IR module configured to project an array of invisible IR dots onto the patient.
- the external vision system 70 is configured to detect the dots and analyze the pattern to create the depth map 204, which may then be used to create an external 3D model 208 of the patient.
- the external vision system 70 may be a time-of-flight camera having an IR laser module that paints the scene with short bursts of IR laser light and generates a dense depth map based on the time it takes for each laser emission to be received back at the detector.
- the external vision system 70 may be the multi-camera subsystem of an augmented reality headset worn by the clinical staff.
- the external vision system 70 may also be used as a user input system to monitor user hand gestures that are then processed to determine desired user input, e.g., to manipulate a GUI 300 of FIGS. 11-13.
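Whichever sensor is used, the depth map can be back-projected into a 3D point cloud for the external model. A minimal sketch, assuming a pinhole camera with intrinsics (fx, fy, cx, cy) known from calibration:

```python
import numpy as np

def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in metres) into camera-frame 3D points.

    depth: (H, W) array; pixels with depth == 0 are treated as invalid.
    fx, fy, cx, cy: pinhole intrinsics, assumed known from calibration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid pixels
```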
- a pose 206 of the patient is estimated by the controller 21a using machine learning image processing algorithms as shown in FIG. 9.
- machine learning may include a convolutional neural network (CNN) and/or a vision transformer (ViT) backbone to extract features along with a lightweight decoder for pose estimation.
- the CNN or ViT may be trained on previous data, for example, synthetic and/or real images of patients in various poses.
- the controller 21a predicts a skeleton for the patient.
- the prediction may be used to generate an external 3D model 208 of the patient as shown in FIG. 10.
- the external 3D model 208 includes a virtual skeleton model 210 with a plurality of keypoints 212 corresponding to the joints of the patient’s anatomy.
- the external 3D model 208, the skeleton model 210, and the keypoints 212 are based on the external image 202, the depth map 204, and the pose 206.
- the keypoints 212 may be generated using various machine learning algorithms trained on datasets including synthetic and/or real images of patients in various poses.
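A common realization of this step, sketched here under the assumption that the trained network outputs one heatmap per keypoint (the network itself is not shown): the 2D peak of each heatmap is lifted to 3D using the aligned depth map.

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps, depth, fx, fy, cx, cy):
    """Extract skeletal keypoints from per-joint heatmaps and lift to 3D.

    heatmaps: (K, H, W) array, one channel per keypoint, assumed to be
    the output of a CNN/ViT pose-estimation backbone.
    depth: (H, W) depth map aligned with the heatmaps.
    fx, fy, cx, cy: camera intrinsics for back-projection.
    """
    keypoints_3d = []
    for hm in heatmaps:
        v, u = np.unravel_index(np.argmax(hm), hm.shape)  # heatmap peak
        z = depth[v, u]
        keypoints_3d.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.array(keypoints_3d)
```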
- the external 3D model 208 is further refined to the patient-specific skeleton.
- the patient-specific keypoint refinement algorithm may rely on the pre-operative imaging (CT/MRI) scans.
- The pre-operative imaging data may be segmented at different density levels, in order: a first level of processing may generate the low-density soft-tissue external anatomical regions of the patient from the pre-operative imaging data; a second level of processing may generate the high-density internal bony joints of the patient from the pre-operative imaging data.
- the pre-operative imaging data (CT/MRI) is thus used to adjust the position of the keypoints 212.
- the 3D model may then be fitted around the refined skeleton model 210.
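The two-level density segmentation can be sketched as simple Hounsfield-unit thresholding; the thresholds below are typical textbook values, assumed for illustration rather than taken from the disclosure.

```python
import numpy as np

def segment_by_density(ct_volume_hu):
    """Split a CT volume (Hounsfield units) into two density levels.

    Returns a low-density soft-tissue mask (roughly -100..300 HU) and a
    high-density bone mask (>= 300 HU), usable for refining the skeleton
    keypoints against the patient's bony anatomy.
    """
    soft_tissue = (ct_volume_hu > -100) & (ct_volume_hu < 300)
    bone = ct_volume_hu >= 300
    return soft_tissue, bone
```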
- the external 3D model 208 may also be based on patient body habitus input into the software application.
- the controller 21a may receive a user input for modifying the simulated patient habitus and modify the simulated patient habitus based on the user input.
- the user input may include sliders or other inputs to adjust patient habitus dimensions, body positions, and/or leg/arm positions. Furthermore, the initial state of the sliders or other inputs may be automatically adjusted based on the refined skeleton model 210.
- The controller 21a is configured to determine optimal access port (i.e., access port 55) locations 302 based on the procedure being performed, e.g., the organ being operated on. For example, partial nephrectomy involves different port placement than radical prostatectomy. Port locations 302 are also determined based on the specific anatomy of the patient, which, in turn, is based on the patient's internal 3D model 200 and external 3D model 208. The port locations are determined by an optimization algorithm based on the initial fixed port placement guides for the specific procedure, the patient-specific external and internal models, the suggested instruments and their kinematics, and the robotic arm model. The optimization algorithm may start with the initial static port placement guides as a starting point.
- the optimization algorithm may include a reinforcement learning algorithm in which the environment is in the form of the patient anatomy to be operated on, the algorithm generates a set of port locations as actions, and the reward is measured in the form of collision-free optimal access to patient anatomy.
- the optimization algorithm may present the user with multiple equally optimal port location plans letting the user select one.
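The overall structure of such an optimization can be sketched as scoring candidate port sets against spacing and reachability constraints and keeping the best-scoring plans. The constants and the scoring function below are illustrative stand-ins for the kinematic and collision models named above.

```python
import numpy as np
from itertools import combinations

def score_port_plan(ports, target, min_spacing=0.08, max_reach=0.25):
    """Score one candidate set of port locations (higher is better).

    ports: (N, 3) candidate insertion points on the body surface.
    target: 3-vector, centroid of the organ of interest.
    A plan is rejected if any two ports are closer than min_spacing
    (arm-collision risk) or a port cannot reach the target. All
    constants here are placeholders, not values from the disclosure.
    """
    for a, b in combinations(ports, 2):
        if np.linalg.norm(a - b) < min_spacing:
            return -np.inf
    reach = np.linalg.norm(ports - target, axis=1)
    if np.any(reach > max_reach):
        return -np.inf
    return -reach.sum()  # prefer plans with ports closer to the target

def best_plan(candidate_plans, target):
    """Pick the highest-scoring plan from a set of candidates."""
    return max(candidate_plans, key=lambda p: score_port_plan(p, target))
```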
- the optimal port locations 302 are shown in a GUI 300 of the software application.
- the GUI 300 displays the external 3D model 208 combined with the internal 3D model 200 including the port locations 302.
- the external 3D model 208 may be partially transparent to display the internal 3D model 200 including patient’s organs, which may be color coded for easy identification.
- the GUI 300 is used to generate, view, and modify the port locations 302 based on the patient’s internal 3D model 200 and external 3D model 208.
- The GUI displays the 3D geodesic distance between different ports, computed on the patient anatomy in real time.
- The distance between ports is displayed to the clinical staff to ensure that ports are placed at least a minimum distance away from each other in order to minimize the likelihood of arm collisions. Additionally, the GUI may display the distance from each of the port locations to the organs of interest based on the internal 3D model 200.
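The geodesic distance can be approximated by a shortest path over the edge graph of the external surface mesh, as in this sketch; an exact geodesic solver (e.g., the heat method) could be substituted.

```python
import heapq
import numpy as np

def geodesic_distance(vertices, triangles, src, dst):
    """Approximate geodesic distance between two vertices of a mesh.

    Runs Dijkstra over the mesh edge graph, which upper-bounds the true
    surface geodesic. vertices: (V, 3) positions; triangles: (T, 3)
    vertex indices; src, dst: vertex indices of the two port locations.
    """
    adj = {}
    for tri in triangles:
        for i in range(3):
            a, b = int(tri[i]), int(tri[(i + 1) % 3])
            w = float(np.linalg.norm(vertices[a] - vertices[b]))
            adj.setdefault(a, []).append((b, w))
            adj.setdefault(b, []).append((a, w))
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, np.inf):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, np.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return np.inf  # disconnected mesh
```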
- the GUI 300 allows for movement of the port locations 302 along the outside surface of the external 3D model 208.
- the user may select the port locations 302 in need of modification by simulating operation of instruments 50 and/or camera 51 being inserted through the access ports 55a-d at the port locations 302.
- The GUI may display virtual boundaries around the suggested port locations such that moving the ports within the virtual boundaries would still satisfy all of the optimal port placement constraints.
- If a port location is moved outside these boundaries, the GUI may display a warning.
- the simulation process is performed at step 114 and is shown in more detail in FIGS. 12 and 13, in which the GUI 300 provides the user with an initial view of the internal 3D model 200.
- the controller 21a automatically identifies which of the port locations 302 are used by the camera 51 and the instrument 50 and may provide a corresponding icon illustrating the device being inserted at that port location 302, e.g., a camera icon.
- the GUI 300 provides the user an option to select the icon corresponding to the port location 302 and insert a virtual instrument 50 or a camera 51, which may be shown as insertion models 304 and 306 for the instrument 50 and the camera 51, respectively.
- the models 304 and 306 include outside portions 304a and 306a and inside portions 304b and 306b, respectively, which are visually differentiated from each other, e.g., by degree of transparency, color, etc., to illustrate the portions of the instrument 50 and the camera 51 that are inside the patient.
- the insertion model 304 may also include a joint 304c and an end effector model 304d.
- Cones 305a and 305b may also be generated by the controller 21a to illustrate the degrees of freedom of the insertion model 304 and each joint 304c.
- The access ports 55a-d limit the movement of the device inserted therethrough about the center of motion, which corresponds to the point of insertion of the access port through the patient. Since the port locations 302 correspond to those points, the cone 305a, which has its apex at the port location 302, represents the limits of motion of the insertion model 304. Similarly, the cone 305b represents the limits of motion of the end effector model 304d about the joint 304c.
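A candidate insertion direction can be tested against such a cone with basic vector math; the half-angle is assumed to be derived from the port and arm motion limits, which the sketch takes as given.

```python
import numpy as np

def within_dof_cone(axis, half_angle, direction):
    """Return True if a direction lies inside a degree-of-freedom cone.

    axis: unit vector along the cone's central axis (apex at the port).
    half_angle: cone half-angle in radians, assumed to come from the
    port and robotic arm motion limits.
    direction: candidate insertion direction (need not be unit length).
    """
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    cos_angle = np.clip(np.dot(d, np.asarray(axis, dtype=float)), -1.0, 1.0)
    return np.arccos(cos_angle) <= half_angle
```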
- the user may place multiple models 306 representing different insertion trajectories of the camera 51 through the single port location 302.
- Each of the models 306 includes a preview 308 of the endoscopic view of the camera 51.
- the preview 308 includes a viewpoint of the internal 3D model 200 from the perspective of the model 306.
- the user may manipulate the models 306, e.g., rotate, advance, retract, etc., and the previews 308 are updated in real time as the models 306 are manipulated.
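Updating the preview amounts to re-rendering the internal 3D model with a view matrix built from the camera model's pose. A standard look-at construction is sketched below; the rendering itself is assumed to be handled by the GUI's 3D engine.

```python
import numpy as np

def camera_view_matrix(eye, target, up=(0.0, 0.0, 1.0)):
    """Build a look-at view matrix for the virtual endoscope preview.

    eye: tip position of the camera insertion model 306.
    target: point on the internal 3D model along the model's axis.
    up: world up direction; must not be parallel to (target - eye).
    """
    eye = np.asarray(eye, dtype=float)
    f = np.asarray(target, dtype=float) - eye
    f /= np.linalg.norm(f)                       # forward
    r = np.cross(f, np.asarray(up, dtype=float))
    r /= np.linalg.norm(r)                       # right
    u = np.cross(r, f)                           # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye            # translate to camera frame
    return view
```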
- User input may be received through the GUI 300 using a variety of devices, e.g., touchscreen, pointer, keyboards, etc.
- The preview 308 shows the modeled organs, allowing the user to confirm whether the proposed optimal camera location is suitable. Variations in the patient’s anatomy and position deform the organs of the patient. Therefore, the controller 21a is also configured to generate a deformed internal 3D model 200 based on the position and the skeleton model 210 of the patient.
- the controller 21a may use a neural network trained on pre-operative imaging/modelling of the same organs in order to learn changes to the shape of organs due to shifting position and orientation of the patient, using critical structure landmarks (e.g., vessels, arteries, etc.).
- the user may select one of the models 306 that was inserted based on a desired view and discard the others.
- the user may shift the port location 302 to a different location and repeat the preview process to confirm the desired port location 302.
- the preview window 308 may close during movement of the port location 302 and may automatically reappear once the movement is stopped.
- the GUI 300 may also provide playback, i.e., animation, of insertion and internal manipulation of the models 306 to simulate the surgical procedure.
- the playback may also include movement of the end-effector trajectories to evaluate workspace and collisions with the selected port locations between the camera and the instruments.
- Once the user confirms the port locations 302, including that of the camera 51, the controller 21a generates a setup guide for configuring and positioning the patient on the table 90 and the robotic arms 40 around the patient, as shown in FIG. 5 and described above.
- the guide may include a plurality of text, images, and/or video instructions to be implemented by the operating room staff to setup the surgical robotic system 10.
- the guide may include instructions for insertion of access ports 55a-d at the port locations 302, instruments 50 to be used, position and orientation of the movable carts 60a-d relative to the table 90, etc.
- an endoscope is inserted through the camera port for an ‘initial look’ phase, where the endoscope may be a stereo endoscope and the controller 21a may be configured to generate a dense depth map of the surgical site.
- the controller 21a may track the endoscope location via robotic arm kinematics and Visual-Simultaneous Localization And Mapping (Visual-SLAM) to generate the real-time 3D poses of the endoscope.
- the controller 21a may generate an extended stitched intra-operative 3D depth map of the surgical site from a plurality of endoscopic camera poses using a combination of robot arm kinematics, Visual-SLAM, point cloud generation, and sequential point cloud registration.
- the controller 21a may initialize the registration of the pre-operative internal 3D model of the patient with the depth map either via point-to-point semi-automatic registration or via fully automatic registration.
- the semi-automatic registration may require the user to mark a plurality of points on the pre-operative 3D model and the corresponding points on the 3D surface generated from the stereo endoscope.
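Given the marked point pairs, an initial rigid registration can be computed in closed form with the Kabsch/SVD method, sketched here; the deformable refinement discussed next is out of scope for this sketch.

```python
import numpy as np

def rigid_registration(model_pts, surface_pts):
    """Least-squares rigid transform aligning corresponding point pairs.

    model_pts, surface_pts: (N, 3) arrays of corresponding points
    (N >= 3, not all collinear), e.g., points marked by the user on the
    pre-operative 3D model and on the endoscopic 3D surface.
    Returns rotation R and translation t with surface ~= R @ model + t.
    """
    mu_m = model_pts.mean(axis=0)
    mu_s = surface_pts.mean(axis=0)
    h = (model_pts - mu_m).T @ (surface_pts - mu_s)  # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = mu_s - r @ mu_m
    return r, t
```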
- the controller 21a may use a machine learning model in the form of a neural network to register the pre-operative 3D model and the 3D surface generated from the depth map of stereo endoscope images. Such a neural network may be trained to account for the movement of organs resulting from the insufflation of the abdomen.
- the controller 21a may additionally update the registration between the pre-operative internal 3D model of the patient with the intra-operative 3D depth map via fully automatic registration using real-time robotic arm kinematics data and Visual-SLAM.
- the GUI may further display warnings and alarms if the projected trajectory or end points of any of the inserted trocars through any of the additional ports may collide with the internal organs as shown in the non-occupancy volume.
- the GUI may also let the clinical staff move the virtual viewpoint of the stereo endoscope camera using the 3D surgical site model in order to take a close look at the projected trocar trajectories through any of the additional ports.
Abstract
A surgical robotic system includes a controller configured to generate a port location for at least one access port on a 3D model of a patient, generate a camera insertion model through the port location, and generate a setup guide for configuring at least one access port and a robotic arm. The system also includes a display configured to output a graphical user interface including the 3D model of the patient, the port location of the at least one access port, the camera insertion model, and a preview window from a perspective of the camera insertion model.
Description
SURGICAL ROBOTIC SYSTEM AND METHOD FOR ASSISTED ACCESS PORT
PLACEMENT
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 63/440,985, filed January 25, 2023, the entire content of which is incorporated herein by reference.
BACKGROUND
[0002] Surgical robotic systems are used in a variety of surgical procedures, including minimally invasive medical procedures. Some surgical robotic systems include a surgeon console controlling a surgical robotic arm and a surgical instrument having an end effector (e.g., forceps or grasping instrument) coupled to and actuated by the robotic arm. In operation, the robotic arm is moved to a position over a patient and then guides the surgical instrument into a small incision via a surgical port or a natural orifice of a patient to position the end effector at a work site within the patient’s body. Surgical robotic systems may be used with movable carts supporting one or more arms holding instrument(s). Setup time for robotic surgical systems may be lengthy and cumbersome and depends on placement of access ports. Thus, there is a need for systems for virtual placement of access ports, which is then used to determine initial placement of movable carts and robotic arms.
SUMMARY
[0003] According to one embodiment of the present disclosure, a surgical robotic system is disclosed. The system includes a controller configured to generate a port location for at least one access port on a 3D model of a patient, generate a camera insertion model through the port location, and generate a setup guide for configuring at least one access port and a robotic arm. The system also includes a display configured to output a graphical user interface including the 3D model of the patient, the port location of the at least one access port, the camera insertion model, and a preview window from a perspective of the camera insertion model.
[0004] Implementations of the above embodiment may include one or more of the following features. According to one aspect of the above embodiment, the graphical user interface may be configured to receive user input to move the port location and the controller may be further configured to update the preview window based on movement of the port location. The surgical robotic system may further include a camera configured to capture an external image of a
patient. The controller may be further configured to generate an external 3D model of a patient based on the external image, generate an internal 3D model of the patient based on preoperative imaging data, and generate a combined 3D model based on the external 3D model and the internal 3D model. The preview window may include a view of the internal 3D model. The controller may be further configured to generate a skeleton model having a plurality of keypoints for the patient based on the external image and deform the internal 3D model of the patient based on position of the patient and the skeleton model.
[0005] The camera insertion model may include an outside portion and an inside portion and the inside portion and the outside portion may be visually different in the graphical user interface. The graphical user interface may be configured to receive user input to move the camera insertion model. The controller may be further configured to update the preview window based on movement of the camera insertion model. The controller may be further configured to generate a degree of freedom cone at the port location. The controller may be further configured to generate an instrument insertion model having an end effector model. The controller may be further configured to generate a degree of freedom cone extending from a joint of the end effector model.
[0006] According to another embodiment of the present disclosure, a method for determining access port placement is disclosed. The method includes capturing an external image of a patient, generating an external 3D model of a patient based on the external image, generating an internal 3D model of the patient based on preoperative imaging data, and further generating a combined 3D model based on the external 3D model and the internal 3D model. The method also includes determining an optimal port location for at least one access port based on the combined 3D model and generating a camera insertion model through the optimal port location. The method additionally includes generating a preview window of the internal 3D model from a perspective of the camera insertion model and generating a patient-specific customized setup guide based on a generic setup guide corresponding to the surgical procedure for configuring at least one access port and a robotic arm.
[0007] Implementations of the above embodiment may include one or more of the following features. According to one aspect of the above embodiment, the method may further include receiving user input to move the camera insertion model and updating the preview window of the internal 3D model based on movement of the camera insertion model. The method may also include generating the external 3D model of the patient based on a depth map of the patient. The method may additionally include generating a skeleton model including a plurality of keypoints for the patient based on the external image. The external 3D model may be generated
based on the skeleton model. The method may also include deforming the internal 3D model of the patient based on position of the patient and the skeleton model.
[0008] According to a further embodiment of the present disclosure, a surgical robotic system is disclosed. The surgical robotic system includes a first robotic arm having a first instrument drive unit coupled to a camera, a second robotic arm having a second instrument drive unit coupled to an instrument, and a camera configured to capture an external image of a patient. The system also includes a controller configured to generate an external 3D model of a patient based on the external image, generate an internal 3D model of the patient based on preoperative imaging data, and generate a combined 3D model based on the external 3D model and the internal 3D model. The controller is further configured to determine a first optimal port location for a first access port based on the combined 3D model and determine a second optimal port location for a second access port based on the combined 3D model. The controller is further configured to provide visual aids to guide the marking of incision locations through which trocars will be inserted for ports. These visual aids may take the form of a projection overlay on the patient anatomy, of the clinical staff moving a finger on the patient anatomy and seeing the overlay on the GUI, or of the clinical staff wearing an augmented reality headset that highlights the incision locations. The controller is additionally configured to generate a setup guide for configuring the first and second access ports and the first and second robotic arms. The system also includes a display configured to output a graphical user interface that is configured to display the combined 3D model of the patient, the first and second optimal port locations, a camera insertion model through the first optimal port location, an instrument insertion model through the second optimal port location, and a preview window of the internal 3D model from a perspective of the camera insertion model.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Various embodiments of the present disclosure are described herein with reference to the drawings wherein:
[0010] FIG. 1 is a schematic illustration of a surgical robotic system including a control tower, a console, and one or more surgical robotic arms each disposed on a movable cart according to an embodiment of the present disclosure;
[0011] FIG. 2 is a perspective view of a surgical robotic arm of the surgical robotic system of FIG. 1 according to an embodiment of the present disclosure;
[0012] FIG. 3 is a perspective view of a movable cart having a setup arm with the surgical robotic arm of the surgical robotic system of FIG. 1 according to an embodiment of the present disclosure;
[0013] FIG. 4 is a schematic diagram of a computer architecture of the surgical robotic system of FIG. 1 according to an embodiment of the present disclosure;
[0014] FIG. 5 is a plan schematic view of movable carts of FIG. 1 positioned about a surgical table according to an embodiment of the present disclosure;
[0015] FIG. 6 is a flowchart of a method for assisted port placement according to an embodiment of the present disclosure;
[0016] FIG. 7 is a computed tomography image according to an embodiment of the present disclosure;
[0017] FIG. 8 shows a white light (i.e., RGB) still image and a counterpart depth map image of the surgical robotic system of FIG. 1 and a patient according to an embodiment of the present disclosure;
[0018] FIG. 9 is a schematic flow chart of pose estimation of the patient according to an embodiment of the present disclosure;
[0019] FIG. 10 is a computer-generated 3D model of a patient including a skeleton model according to an embodiment of the present disclosure;
[0020] FIG. 11 is an image of a graphical user interface (GUI) including the 3D model of a patient and suggested access port locations according to an embodiment of the present disclosure;
[0021] FIG. 12 is the image of the GUI including port locations for the instrument and the camera according to an embodiment of the present disclosure; and
[0022] FIG. 13 is the image of the GUI including preview windows of the camera at different orientations according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0023] Embodiments of the presently disclosed surgical robotic system are described in detail with reference to the drawings, in which like reference numerals designate identical or corresponding elements in each of the several views.
[0024] As will be described in detail below, the present disclosure is directed to a surgical robotic system, which includes a surgeon console, a control tower, and one or more movable carts having a surgical robotic arm coupled to a setup arm. The surgeon console receives user input through one or more interface devices. The input is processed by the control tower as
movement commands for moving the surgical robotic arm and an instrument and/or camera coupled thereto. Thus, the surgeon console enables teleoperation of the surgical arms and attached instruments/camera. The surgical robotic arm includes a controller, which is configured to process the movement commands to control one or more actuators of the robotic arm, which would, in turn, move the robotic arm and the instrument in response to the movement commands.
[0025] With reference to FIG. 1, a surgical robotic system 10 includes a control tower 20, which is connected to all of the components of the surgical robotic system 10 including a surgeon console 30 and one or more movable carts 60. Each of the movable carts 60 includes a robotic arm 40 having a surgical instrument 50 coupled thereto. The robotic system 10 may include any number of movable carts 60 and/or robotic arms 40.
[0026] The surgical instrument 50 is configured for use during minimally invasive surgical procedures. In embodiments, the surgical instrument 50 may be configured for open surgical procedures. In further embodiments, the surgical instrument 50 may be an electrosurgical forceps configured to seal tissue by compressing tissue between jaw members and applying electrosurgical current thereto. In yet further embodiments, the surgical instrument 50 may be a surgical stapler including a pair of jaws configured to grasp and clamp tissue while deploying a plurality of tissue fasteners, e.g., staples, and cutting stapled tissue. In yet further embodiments, the surgical instrument 50 may be a surgical clip applier including a pair of jaws configured to apply a surgical clip onto tissue.
[0027] One of the robotic arms 40 may include an endoscopic camera 51 configured to capture video of the surgical site. The endoscopic camera 51 may be a stereoscopic endoscope configured to capture two side-by-side (i.e., left and right) images of the surgical site to produce a video stream of the surgical scene. The endoscopic camera 51 is coupled to a video processing device 56, which may be disposed within the control tower 20. The video processing device 56 may be any computing device as described below configured to receive the video feed from the endoscopic camera 51 and output the processed video stream.
[0028] The surgeon console 30 includes a first display 32, which displays a video feed of the surgical site provided by camera 51 disposed on the robotic arm 40, and a second display 34, which displays a user interface for controlling the surgical robotic system 10. The first display 32 and second display 34 may be touchscreens allowing for display of various graphical user interfaces.
[0029] The surgeon console 30 also includes a plurality of user interface devices, such as foot pedals 36 and a pair of handle controllers 38a and 38b which are used by a user to remotely control robotic arms 40. The surgeon console further includes an armrest 33 used to support the clinician’s arms while operating the handle controllers 38a and 38b.
[0030] The control tower 20 includes a display 23, which may be a touchscreen, and which outputs various graphical user interfaces (GUIs). The control tower 20 also acts as an interface between the surgeon console 30 and one or more robotic arms 40. In particular, the control tower 20 is configured to control the robotic arms 40, such as to move the robotic arms 40 and the corresponding surgical instrument 50, based on a set of programmable instructions and/or input commands from the surgeon console 30, in such a way that robotic arms 40 and the surgical instrument 50 execute a desired movement sequence in response to input from the foot pedals 36 and the handle controllers 38a and 38b. The foot pedals 36 may be used to enable and lock the hand controllers 38a and 38b, reposition the camera, and activate/deactivate electrosurgery. In particular, the foot pedals 36 may be used to perform a clutching action on the hand controllers 38a and 38b. Clutching is initiated by pressing one of the foot pedals 36, which disconnects (i.e., prevents movement inputs) the hand controllers 38a and/or 38b from the robotic arm 40 and corresponding instrument 50 or camera 51 attached thereto. This allows the user to reposition the hand controllers 38a and 38b without moving the robotic arm(s) 40 and the instrument 50 and/or camera 51. This is useful when reaching control boundaries of the surgical space.
[0031] Each of the control tower 20, the surgeon console 30, and the robotic arm 40 includes a respective computer 21, 31, 41. The computers 21, 31, 41 are interconnected to each other using any suitable communication network based on wired or wireless communication protocols. The term “network,” whether plural or singular, as used herein, denotes a data network, including, but not limited to, the Internet, Intranet, a wide area network, or a local area network, and without limitation as to the full scope of the definition of communication networks as encompassed by the present disclosure. Suitable protocols include, but are not limited to, transmission control protocol/internet protocol (TCP/IP), user datagram protocol/internet protocol (UDP/IP), and/or datagram congestion control protocol (DCCP). Wireless communication may be achieved via one or more wireless configurations, e.g., radio frequency, optical, Wi-Fi, Bluetooth (an open wireless protocol for exchanging data over short distances, using short length radio waves, from fixed and mobile devices, creating personal area networks (PANs)), and ZigBee® (a specification for a suite of high level communication protocols using small, low-power digital radios based on the IEEE 802.15.4-2003 standard for wireless personal area networks (WPANs)).
[0032] The computers 21, 31, 41 may include any suitable processor (not shown) operably connected to a memory (not shown), which may include one or more of volatile, non-volatile, magnetic, optical, or electrical media, such as read-only memory (ROM), random access memory (RAM), electrically erasable programmable ROM (EEPROM), non-volatile RAM (NVRAM), or flash memory. The processor may be any suitable processor (e.g., control circuit) adapted to perform the operations, calculations, and/or set of instructions described in the present disclosure including, but not limited to, a hardware processor, a field programmable gate array (FPGA), a digital signal processor (DSP), a central processing unit (CPU), a microprocessor, and combinations thereof. Those skilled in the art will appreciate that the processor may be substituted by any logic processor (e.g., control circuit) adapted to execute the algorithms, calculations, and/or set of instructions described herein.
[0033] With reference to FIG. 2, each of the robotic arms 40 may include a plurality of links 42a, 42b, 42c, which are interconnected at joints 44a, 44b, 44c, respectively. Other configurations of links and joints may be utilized as known by those skilled in the art. The joint 44a is configured to secure the robotic arm 40 to the movable cart 60 and defines a first longitudinal axis. With reference to FIG. 3, the movable cart 60 includes a lift 67 and a setup arm 61, which provides a base for mounting of the robotic arm 40. The lift 67 allows for vertical movement of the setup arm 61. The movable cart 60 also includes a display 69 for displaying information pertaining to the robotic arm 40. In embodiments, the robotic arm 40 may include any type and/or number of joints.
[0034] The setup arm 61 includes a first link 62a, a second link 62b, and a third link 62c, which provide for lateral maneuverability of the robotic arm 40. The links 62a, 62b, 62c are interconnected at joints 63a and 63b, each of which may include an actuator (not shown) for rotating the links 62a and 62b relative to each other and the link 62c. In particular, the links 62a, 62b, 62c are movable in their corresponding lateral planes that are parallel to each other, thereby allowing for extension of the robotic arm 40 relative to the patient (e.g., surgical table). In embodiments, the robotic arm 40 may be coupled to the surgical table (not shown). The setup arm 61 includes controls 65 for adjusting movement of the links 62a, 62b, 62c as well as the lift 67. In embodiments, the setup arm 61 may include any type and/or number of joints.
[0035] The third link 62c may include a rotatable base 64 having two degrees of freedom. In particular, the rotatable base 64 includes a first actuator 64a and a second actuator 64b. The first actuator 64a is rotatable about a first stationary arm axis which is perpendicular to a plane
defined by the third link 62c and the second actuator 64b is rotatable about a second stationary arm axis which is transverse to the first stationary arm axis. The first and second actuators 64a and 64b allow for full three-dimensional orientation of the robotic arm 40.
[0036] The actuator 48b of the joint 44b is coupled to the joint 44c via the belt 45a, and the joint 44c is in turn coupled to the joint 46b via the belt 45b. Joint 44c may include a transfer case coupling the belts 45a and 45b, such that the actuator 48b is configured to rotate each of the links 42b, 42c and a holder 46 relative to each other. More specifically, links 42b, 42c, and the holder 46 are passively coupled to the actuator 48b which enforces rotation about a pivot point “P” which lies at an intersection of the first axis defined by the link 42a and the second axis defined by the holder 46. In other words, the pivot point “P” is a remote center of motion (RCM) for the robotic arm 40. Thus, the actuator 48b controls the angle θ between the first and second axes allowing for orientation of the surgical instrument 50. Due to the interlinking of the links 42a, 42b, 42c, and the holder 46 via the belts 45a and 45b, the angles between the links 42a, 42b, 42c, and the holder 46 are also adjusted in order to achieve the desired angle θ. In embodiments, some or all of the joints 44a, 44b, 44c may include an actuator to obviate the need for mechanical linkages.
[0037] The joints 44a and 44b include actuators 48a and 48b, respectively, configured to drive the joints 44a, 44b, 44c relative to each other through a series of belts 45a and 45b or other mechanical linkages such as a drive rod, a cable, or a lever and the like. In particular, the actuator 48a is configured to rotate the robotic arm 40 about a longitudinal axis defined by the link 42a.
[0038] With reference to FIG. 2, the holder 46 defines a second longitudinal axis and is configured to receive an instrument drive unit (IDU) 52 (FIG. 1). The IDU 52 is configured to couple to an actuation mechanism of the surgical instrument 50 and the camera 51 and is configured to move (e.g., rotate) and actuate the instrument 50 and/or the camera 51. The IDU 52 transfers actuation forces from its actuators to the surgical instrument 50 to actuate components of an end effector of the surgical instrument 50. The holder 46 includes a sliding mechanism 46a, which is configured to move the IDU 52 along the second longitudinal axis defined by the holder 46. The holder 46 also includes a joint 46b, which rotates the holder 46 relative to the link 42c. During endoscopic procedures, the instrument 50 may be inserted through an endoscopic access port 55 (FIG. 3) held by the holder 46. The holder 46 also includes a port latch 46c for securing the access port 55 to the holder 46 (FIG. 2).
[0039] The IDU 52 is attached to the holder 46, followed by a sterile interface module (SIM) 43 being attached to a distal portion of the IDU 52. The SIM 43 is configured to secure a sterile drape (not shown) to the IDU 52. The instrument 50 is then attached to the SIM 43. The
instrument 50 is then inserted through the access port 55 by moving the IDU 52 along the holder 46. The SIM 43 includes a plurality of drive shafts configured to transmit rotation of individual motors of the IDU 52 to the instrument 50 thereby actuating the instrument 50. In addition, the SIM 43 provides a sterile barrier between the instrument 50 and the other components of robotic arm 40, including the IDU 52.
[0040] The robotic arm 40 also includes a plurality of manual override buttons 53 (FIG. 1) disposed on the IDU 52 and the setup arm 61, which may be used in a manual mode. The user may press one or more of the buttons 53 to move the component associated with the button 53.
[0041] With reference to FIG. 4, each of the computers 21, 31, 41 of the surgical robotic system 10 may include a plurality of controllers, which may be embodied in hardware and/or software. The computer 21 of the control tower 20 includes a controller 21a and safety observer 21b. The controller 21a receives data from the computer 31 of the surgeon console 30 about the current position and/or orientation of the handle controllers 38a and 38b and the state of the foot pedals 36 and other buttons. The controller 21a processes these input positions to determine desired drive commands for each joint of the robotic arm 40 and/or the IDU 52 and communicates these to the computer 41 of the robotic arm 40. The controller 21a also receives the actual joint angles measured by encoders of the actuators 48a and 48b and uses this information to determine force feedback commands that are transmitted back to the computer 31 of the surgeon console 30 to provide haptic feedback through the handle controllers 38a and 38b. The safety observer 21b performs validity checks on the data going into and out of the controller 21a and notifies a system fault handler if errors in the data transmission are detected to place the computer 21 and/or the surgical robotic system 10 into a safe state.
[0042] The computer 41 includes a plurality of controllers, namely, a main cart controller 41a, a setup arm controller 41b, a robotic arm controller 41c, and an instrument drive unit (IDU) controller 41d. The main cart controller 41a receives and processes joint commands from the controller 21a of the computer 21 and communicates them to the setup arm controller 41b, the robotic arm controller 41c, and the IDU controller 41d. The main cart controller 41a also manages instrument exchanges and the overall state of the movable cart 60, the robotic arm 40, and the IDU 52. The main cart controller 41a also communicates actual joint angles back to the controller 21a.
[0043] Each of the joints 63a and 63b and the rotatable base 64 of the setup arm 61 are passive joints (i.e., no actuators are present therein) allowing for manual adjustment thereof by a user. The joints 63a and 63b and the rotatable base 64 include brakes that are disengaged by the user to configure the setup arm 61. The setup arm controller 41b monitors slippage of each of the joints 63a and 63b and the rotatable base 64 of the setup arm 61 when the brakes are engaged; when the brakes are disengaged, these joints can be freely moved by the operator and do not impact controls of other joints. The robotic arm controller 41c controls each of the joints 44a and 44b of the robotic arm 40 and calculates desired motor torques required for gravity compensation, friction compensation, and closed loop position control of the robotic arm 40. The robotic arm controller 41c calculates a movement command based on the calculated torque. The calculated motor commands are then communicated to one or more of the actuators 48a and 48b in the robotic arm 40. The actual joint positions are then transmitted by the actuators 48a and 48b back to the robotic arm controller 41c.
[0044] The IDU controller 41d receives desired joint angles for the surgical instrument 50, such as wrist and jaw angles, and computes desired currents for the motors in the IDU 52. The IDU controller 41d calculates actual angles based on the motor positions and transmits the actual angles back to the main cart controller 41a.
[0045] The robotic arm 40 is controlled in response to a pose of the handle controller controlling the robotic arm 40, e.g., the handle controller 38a, which is transformed into a desired pose of the robotic arm 40 through a hand-eye transform function executed by the controller 21a. The hand-eye transform function, as well as the other functions described herein, is embodied in software executable by the controller 21a or any other suitable controller described herein. The pose of one of the handle controllers 38a may be embodied as a coordinate position and roll-pitch-yaw (RPY) orientation relative to a coordinate reference frame, which is fixed to the surgeon console 30. The desired pose of the instrument 50 is relative to a fixed frame on the robotic arm 40. The pose of the handle controller 38a is then scaled by a scaling function executed by the controller 21a. In embodiments, the coordinate position may be scaled down and the orientation may be scaled up by the scaling function. In addition, the controller 21a may also execute a clutching function, which disengages the handle controller 38a from the robotic arm 40. In particular, the controller 21a stops transmitting movement commands from the handle controller 38a to the robotic arm 40 if certain movement limits or other thresholds are exceeded, in essence acting like a virtual clutch mechanism, e.g., limiting mechanical input from effecting mechanical output.
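For illustration only, the scaling and clutching functions may be sketched as follows; the scale factors and step threshold are illustrative assumptions, not parameters of the disclosed system.

```python
import numpy as np

def scale_handle_pose(position, rpy, pos_scale=0.4, rot_scale=1.5):
    """Scale a handle-controller pose: coordinate position scaled down,
    roll-pitch-yaw orientation scaled up. Factors are hypothetical."""
    return (np.asarray(position, float) * pos_scale,
            np.asarray(rpy, float) * rot_scale)

def clutch_filter(prev_position, new_position, max_step=0.05):
    """Virtual clutch: drop a movement command whose step exceeds a limit,
    so the handle can be repositioned without moving the arm."""
    step = np.linalg.norm(np.asarray(new_position, float)
                          - np.asarray(prev_position, float))
    return new_position if step <= max_step else prev_position
```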
[0046] The desired pose of the robotic arm 40 is based on the pose of the handle controller 38a and is then passed through an inverse kinematics function executed by the controller 21a. The inverse kinematics function calculates angles for the joints 44a, 44b, 44c of the robotic arm 40 that achieve the scaled and adjusted pose input by the handle controller 38a. The calculated angles are then passed to the robotic arm controller 41c, which includes a joint axis controller
having a proportional-derivative (PD) controller, the friction estimator module, the gravity compensator module, and a two-sided saturation block, which is configured to limit the commanded torque of the motors of the joints 44a, 44b, 44c.
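A minimal sketch of such a joint axis controller, assuming a scalar joint and illustrative gains; the gravity and friction inputs stand in for the compensator and estimator modules named above.

```python
import numpy as np

def pd_joint_torque(q_des, q_act, qd_act, kp=50.0, kd=2.0,
                    tau_limit=10.0, gravity=0.0, friction=0.0):
    """PD joint torque with two-sided saturation (all values illustrative)."""
    tau = kp * (q_des - q_act) - kd * qd_act  # drive toward the IK angle
    tau += gravity + friction                 # compensation module outputs
    return float(np.clip(tau, -tau_limit, tau_limit))  # saturation block
```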
[0047] With reference to FIG. 5, the surgical robotic system 10 is setup around a surgical table 90. The system 10 includes movable carts 60a-d, which may be numbered “1” through “4.” During setup, each of the carts 60a-d is positioned around the surgical table 90. Position and orientation of the carts 60a-d depends on a plurality of factors, such as placement of a plurality of access ports 55a-d, which in turn, depends on the surgery being performed. Once the port placements are determined, the access ports 55a-d are inserted into the patient, and carts 60a-d are positioned to insert instruments 50 and the endoscopic camera 51 into corresponding ports 55a-d. During use, each of the robotic arms 40a-d is attached to one of the access ports 55a-d that is inserted into the patient by attaching the latch 46c (FIG. 2) to the access port 55 (FIG. 3). The IDU 52 is attached to the holder 46, followed by the SIM 43 being attached to a distal portion of the IDU 52. Thereafter, the instrument 50 is attached to the SIM 43. The instrument 50 is then calibrated by the robotic arm 40 and is inserted through the access port 55 by moving the IDU 52 along the holder 46.
[0048] With reference to FIG. 6, a method for assisted port placement may be implemented as software instructions executable by a processor (e.g., controller 21a) and in particular as a software application having a graphical user interface (GUI) for planning a surgical procedure to be performed by the robotic system 10. The software receives as input various 3D image and position data pertaining to the patient’s anatomy and generates a 3D virtual model of the patient with suggested placements for the camera 51 and the instruments 50. The software application also allows a user to manipulate computer models of the camera 51 and the instrument 50 in the model of the patient to view the endoscopic view of the camera 51.
[0049] At step 100, preoperative internal imaging is obtained of the patient, which may be performed using any suitable imaging modality such as computed tomography (CT), magnetic resonance imaging (MRI), or any other imaging modality capable of obtaining 3D images. The preoperative images are then used to construct an internal tissue and organ model 200 as shown in FIG. 7. The internal 3D model 200 may be constructed using any suitable image processing computer.
[0050] At step 102, an external image 202 is obtained of the patient’s body and a depth map 204 is generated from the image 202 as shown in FIG. 8. The image 202 may be obtained using an external vision system 70 (FIG. 5), which may be a passive stereoscopic camera providing depth imaging to generate the depth map 204. In embodiments, the external
vision system 70 may be an active stereoscopic infrared (IR) camera having an IR module configured to project an array of invisible IR dots onto the patient. The external vision system 70 is configured to detect the dots and analyze the pattern to create the depth map 204, which may then be used to create an external 3D model 208 of the patient. In further embodiments, the external vision system 70 may be a time-of-flight camera having an IR laser module that paints the scene with short bursts of IR lasers and generates a dense depth map based on the time it takes for each laser emission to be received back at the detector. In further embodiments, the external vision system 70 may be the multi-camera subsystem of an augmented reality headset worn by the clinical staff. The external vision system 70 may also be used as a user input system to monitor user hand gestures that are then processed to determine desired user input, e.g., to manipulate the GUI 300 of FIGS. 11-13.
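By way of a hedged example, a depth map of this kind may be computed from a rectified stereo pair with OpenCV's semi-global block matcher; the calibration inputs and matcher parameters below are illustrative assumptions.

```python
import cv2
import numpy as np

def stereo_depth_map(left_bgr, right_bgr, focal_px, baseline_m):
    """Depth from a rectified stereo pair: depth = f * B / disparity."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    # SGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan  # mask invalid matches
    return focal_px * baseline_m / disparity
```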
[0051] At step 104, a pose 206 of the patient is estimated by the controller 21a using machine learning image processing algorithms as shown in FIG. 9. In embodiments, machine learning may include a convolutional neural network (CNN) and/or a vision transformer (ViT) backbone to extract features along with a lightweight decoder for pose estimation. The CNN or ViT may be trained on previous data, for example, synthetic and/or real images of patients in various poses.
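As a hedged sketch of such a model, a ResNet-18 backbone with a lightweight heatmap decoder is shown below; the backbone choice and keypoint count are assumptions, since the disclosure leaves the exact architecture and training data open.

```python
import torch
import torchvision

class PoseEstimator(torch.nn.Module):
    """CNN backbone + lightweight decoder producing one heatmap per joint."""
    def __init__(self, num_keypoints=17):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        # Keep the convolutional feature extractor, drop the classifier head.
        self.features = torch.nn.Sequential(*list(backbone.children())[:-2])
        self.decoder = torch.nn.Sequential(
            torch.nn.ConvTranspose2d(512, 128, 4, stride=2, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(128, num_keypoints, 1),
        )

    def forward(self, image):  # image: (B, 3, H, W)
        return self.decoder(self.features(image))  # keypoint heatmaps
```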
[0052] At step 106, the controller 21a predicts a skeleton for the patient. The prediction may be used to generate an external 3D model 208 of the patient as shown in FIG. 10. The external 3D model 208 includes a virtual skeleton model 210 with a plurality of keypoints 212 corresponding to the joints of the patient’s anatomy. The external 3D model 208, the skeleton model 210, and the keypoints 212 are based on the external image 202, the depth map 204, and the pose 206. The keypoints 212 may be generated using various machine learning algorithms trained on datasets including synthetic and/or real images of patients in various poses.
[0053] At step 108, the external 3D model 208 is further refined to the patient-specific skeleton. The patient-specific keypoint refinement algorithm may rely on the pre-operative imaging (CT/MRI) scans. The pre-operative imaging data may be segmented at different density levels in order: a first level of processing may generate the low-density soft-tissue external anatomical regions of the patient from the pre-operative imaging data; a second level of processing may generate the high-density internal bony joints of the patient from the pre-operative imaging data. The pre-operative imaging data (CT/MRI) is thus used to adjust the position of the keypoints 212. The 3D model may then be fitted around the refined skeleton model 210. The external 3D model 208 may also be based on patient body habitus input into the software application. The controller 21a may receive a user input for modifying the
simulated patient habitus and modify the simulated patient habitus based on the user input. The user input may include sliders or other inputs to adjust patient habitus dimensions, body positions, and/or leg/arm positions. Furthermore, the initial state of the sliders or other inputs may be automatically adjusted based on the refined skeleton model 210.
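One possible sketch of such keypoint refinement: pull each image-derived keypoint toward the centroid of nearby high-density (bony) voxels segmented from the CT volume. The density threshold, search radius, blend weight, and (z, y, x) millimeter convention are illustrative assumptions, not the disclosed algorithm.

```python
import numpy as np

def refine_keypoints(keypoints_mm, ct_hu, spacing_mm, bone_hu=300.0,
                     radius_mm=30.0):
    """Blend each keypoint with the centroid of bone voxels within a radius."""
    bone_mm = np.argwhere(ct_hu >= bone_hu) * np.asarray(spacing_mm, float)
    refined = []
    for kp in np.asarray(keypoints_mm, float):
        d = np.linalg.norm(bone_mm - kp, axis=1)
        near = bone_mm[d < radius_mm]
        # Keep the image-based estimate when no bone is nearby.
        refined.append(0.5 * (kp + near.mean(axis=0)) if len(near) else kp)
    return np.asarray(refined)
```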
[0054] At step 110, the controller 21a is configured to determine optimal access port (i.e., access port 55) locations 302 based on the procedure being performed, e.g., the organ being operated on. For example, partial nephrectomy involves different port placement than radical prostatectomy, etc. Port locations 302 are also determined based on the specific anatomy of the patient, which in turn, is based on the patient’s internal 3D model 200 and external 3D model 208. The port locations are determined by an optimization algorithm based on the initial fixed port placement guides for the specific procedure, the patient-specific external and internal models, the suggested instruments and their kinematics, and the robotic arms model. The optimization algorithm may use the initial static port placement guides as a starting point. The optimization algorithm may include a reinforcement learning algorithm in which the environment is in the form of the patient anatomy to be operated on, the algorithm generates a set of port locations as actions, and the reward is measured in the form of collision-free optimal access to the patient anatomy. The optimization algorithm may present the user with multiple equally optimal port location plans, letting the user select one.
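As a simplified stand-in for the reinforcement learning formulation above, a candidate plan may be scored on access distance and inter-port spacing and the best-scoring plan selected; the units, weights, and spacing threshold below are illustrative only.

```python
import numpy as np

def score_port_plan(ports_mm, target_mm, min_spacing=80.0):
    """Reward proxy: shorter access paths score higher; plans violating the
    inter-port spacing constraint (a collision-risk proxy) are rejected."""
    ports = np.asarray(ports_mm, float)
    access = -np.linalg.norm(ports - np.asarray(target_mm, float), axis=1).sum()
    spaced = all(np.linalg.norm(a - b) >= min_spacing
                 for i, a in enumerate(ports) for b in ports[i + 1:])
    return access if spaced else -np.inf

def best_plan(candidate_plans, target_mm):
    """Exhaustive search over candidates, standing in for the learned policy."""
    return max(candidate_plans, key=lambda p: score_port_plan(p, target_mm))
```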
[0055] At step 112, and as shown in FIGS. 11-13, the optimal port locations 302 are shown in a GUI 300 of the software application. The GUI 300 displays the external 3D model 208 combined with the internal 3D model 200 including the port locations 302. The external 3D model 208 may be partially transparent to display the internal 3D model 200 including the patient’s organs, which may be color coded for easy identification. The GUI 300 is used to generate, view, and modify the port locations 302 based on the patient’s internal 3D model 200 and external 3D model 208. Furthermore, the GUI includes the 3D geodesic distance between different ports, computed on the patient anatomy in real time. The distance between ports is displayed to the clinical staff to ensure that ports are placed at least a minimum distance away from each other in order to minimize the likelihood of arm collisions. Additionally, the GUI may display the distance from each of the port locations to the organs of interest based on the internal 3D model 200.
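The 3D geodesic distance between ports may be approximated, for example, by running Dijkstra's algorithm over the edge graph of the external body-surface mesh, as sketched below under assumed mesh conventions; this approximates, rather than exactly computes, the true surface geodesic.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_distances(vertices, faces, source_vertex):
    """Approximate geodesics from one port vertex to all mesh vertices by
    shortest paths along triangle edges (vertices: (V, 3); faces: (F, 3))."""
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]],
                             axis=1)
    n = len(vertices)
    graph = coo_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))
    return dijkstra(graph, directed=False, indices=source_vertex)
```

A port pair whose returned distance falls below the minimum spacing would then trigger the warning described below.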
[0056] The GUI 300 allows for movement of the port locations 302 along the outside surface of the external 3D model 208. The user may select the port locations 302 in need of modification by simulating operation of instruments 50 and/or camera 51 being inserted through the access ports 55a-d at the port locations 302. Furthermore, if the clinical staff moves
the ports such that the distance between any two ports is less than a threshold distance, the GUI may display a warning. Additionally, the GUI may display virtual boundaries around the suggested port locations such that moving the ports within the virtual boundaries would still satisfy all the optimal port placement constraints.
[0057] The simulation process is performed at step 114 and is shown in more detail in FIGS. 12 and 13, in which the GUI 300 provides the user with an initial view of the internal 3D model 200. The controller 21a automatically identifies which of the port locations 302 are used by the camera 51 and the instrument 50 and may provide a corresponding icon illustrating the device being inserted at that port location 302, e.g., a camera icon. The GUI 300 provides the user an option to select the icon corresponding to the port location 302 and insert a virtual instrument 50 or a camera 51, which may be shown as insertion models 304 and 306 for the instrument 50 and the camera 51, respectively. The models 304 and 306 include outside portions 304a and 306a and inside portions 304b and 306b, respectively, which are visually differentiated from each other, e.g., by the degree of transparency, color, etc. to illustrate the portions of the instrument 50 and the camera 51 that are inside the patient.
[0058] In embodiments where the insertion model 304 illustrates a device having an articulating joint, e.g., a wristed instrument, the insertion model 304 may also include a joint 304c and an end effector model 304d. To provide additional simulation details, cones 305a and 305b may also be generated by the controller 21a to illustrate the degree of freedom of the insertion model 304 and each joint 304c. The access ports 55a-d limit the movement of the device inserted therethrough about the center of motion, which corresponds to the insertion point of the access port through the patient. Since the port locations 302 correspond to those points, the cone 305a, which has its apex at the port location 302, represents the limits of motion of the insertion model 304. Similarly, the cone 305b represents the limits of motion of the end effector model 304d about the joint 304c.
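The degree-of-freedom cone admits a simple reachability test: a target is reachable if the ray from the remote center of motion to the target lies within the cone's half-angle. A minimal sketch, with an assumed angular limit:

```python
import numpy as np

def within_motion_cone(rcm, cone_axis, target, half_angle_deg=60.0):
    """True if the RCM-to-target ray lies inside the degree-of-freedom cone
    whose apex is at the port location (the half-angle is an assumed limit)."""
    d = np.asarray(target, float) - np.asarray(rcm, float)
    d /= np.linalg.norm(d)
    a = np.asarray(cone_axis, float)
    a /= np.linalg.norm(a)
    angle = np.degrees(np.arccos(np.clip(d @ a, -1.0, 1.0)))
    return angle <= half_angle_deg
```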
[0059] With reference to FIG. 13, the user may place multiple models 306 representing different insertion trajectories of the camera 51 through the single port location 302. Each of the models 306 includes a preview 308 of the endoscopic view of the camera 51. The preview 308 includes a viewpoint of the internal 3D model 200 from the perspective of the model 306. The user may manipulate the models 306, e.g., rotate, advance, retract, etc., and the previews 308 are updated in real time as the models 306 are manipulated. User input may be received through the GUI 300 using a variety of devices, e.g., touchscreen, pointer, keyboards, etc.
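The preview 308 may be generated by placing a virtual camera at the tip of the model 306 and rebuilding its view matrix each time the model is manipulated; the look-at construction below is a standard sketch, with the world-frame convention assumed.

```python
import numpy as np

def look_at_view(camera_pos, look_target, up=(0.0, 0.0, 1.0)):
    """World-to-camera matrix for rendering the preview window."""
    eye = np.asarray(camera_pos, float)
    f = np.asarray(look_target, float) - eye
    f /= np.linalg.norm(f)                 # forward (view direction)
    r = np.cross(f, up)
    r /= np.linalg.norm(r)                 # right
    u = np.cross(r, f)                     # orthonormal up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye      # translate world into view space
    return view
```

Recomputing this matrix on each interaction frame yields the real-time preview updates described above.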
[0060] The preview 308 shows the modeled organs allowing the user to confirm whether the proposed optimal camera location is suitable. Variations in the patient’s anatomy and position
deform the organs of the patient. Therefore, the controller 21a is also configured to generate a deformed internal 3D model 200 based on the position and the skeleton model 210 of the patient. The controller 21a may use a neural network trained on pre-operative imaging/modelling of the same organs in order to learn changes to the shape of the organs due to shifting position and orientation of the patient, using critical structure landmarks (e.g., vessels, arteries, etc.). The user may select one of the models 306 that was inserted based on a desired view and discard the others. In embodiments, the user may shift the port location 302 to a different location and repeat the preview process to confirm the desired port location 302. The preview window 308 may close during movement of the port location 302 and may automatically reappear once the movement is stopped.
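As a simple stand-in for the trained network, internal-model vertices may be warped by the keypoint displacements with inverse-distance weights; the weighting scheme below is an illustrative assumption, not the learned deformation model.

```python
import numpy as np

def deform_internal_model(vertices, keypoints_ref, keypoints_now, power=2.0):
    """Warp vertices by skeleton keypoint displacements, weighted by
    inverse distance to each reference keypoint."""
    kref = np.asarray(keypoints_ref, float)
    disp = np.asarray(keypoints_now, float) - kref
    warped = []
    for v in np.asarray(vertices, float):
        d = np.linalg.norm(kref - v, axis=1)
        w = 1.0 / np.maximum(d, 1e-6) ** power  # nearer keypoints dominate
        warped.append(v + (w[:, None] * disp).sum(axis=0) / w.sum())
    return np.asarray(warped)
```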
[0061] The GUI 300 may also provide playback, i.e., animation, of insertion and internal manipulation of the models 306 to simulate the surgical procedure. The playback may also include movement along the end-effector trajectories to evaluate the workspace and collisions between the camera and the instruments at the selected port locations.
[0062] With reference to FIG. 6, at step 116, the user confirms the port locations 302, including that of the camera 51, and the controller 21a generates a setup guide for configuring and positioning the patient on the table 90 and the robotic arms 40 around the patient as shown in FIG. 5 and described above. The guide may include a plurality of text, images, and/or video instructions to be implemented by the operating room staff to set up the surgical robotic system 10. The guide may include instructions for insertion of access ports 55a-d at the port locations 302, instruments 50 to be used, position and orientation of the movable carts 60a-d relative to the table 90, etc.
[0063] Once the camera port has been placed, an endoscope is inserted through the camera port for an ‘initial look’ phase, where the endoscope may be a stereo endoscope and the controller 21a may be configured to generate a dense depth map of the surgical site. The controller 21a may track the endoscope location via robotic arm kinematics and Visual-Simultaneous Localization And Mapping (Visual-SLAM) to generate the real-time 3D poses of the endoscope. Furthermore, the controller 21a may generate an extended stitched intra-operative 3D depth map of the surgical site from a plurality of endoscopic camera poses using a combination of robot arm kinematics, Visual-SLAM, point cloud generation, and sequential point cloud registration. The controller 21a may initialize the registration of the pre-operative internal 3D model of the patient with the depth map either via point-to-point semi-automatic registration or via fully automatic registration. The semi-automatic registration may require the user to mark a plurality of points on the pre-operative 3D model and the corresponding points on the 3D surface
generated from the stereo endoscope. The controller 21a may use a machine learning model in the form of a neural network to register the pre-operative 3D model and the 3D surface generated from the depth map of stereo endoscope images. Such a neural network may be trained to account for the movement of organs resulting from the insufflation of the abdomen. The controller 21a may additionally update the registration between the pre-operative internal 3D model of the patient and the intra-operative 3D depth map via fully automatic registration using real-time robotic arm kinematics data and Visual-SLAM.
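A hedged sketch of the sequential registration with Open3D, seeding point-to-point ICP with the kinematic pose estimates; the correspondence distance, voxel size, and input layout are illustrative assumptions.

```python
import open3d as o3d

def stitch_point_clouds(clouds, kinematic_poses, voxel_size=2.0):
    """Sequentially register endoscope point clouds ((N, 3) float64 arrays)
    into one intra-operative map; kinematic_poses are 4x4 camera poses."""
    merged = o3d.geometry.PointCloud()
    for pts, pose in zip(clouds, kinematic_poses):
        pc = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
        pc.transform(pose)  # initialize from robot arm kinematics
        if len(merged.points):
            # Refine the kinematic initialization with point-to-point ICP.
            reg = o3d.pipelines.registration.registration_icp(
                pc, merged, max_correspondence_distance=5.0,
                estimation_method=o3d.pipelines.registration.
                TransformationEstimationPointToPoint())
            pc.transform(reg.transformation)
        merged += pc
    return merged.voxel_down_sample(voxel_size)
```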
[0064] It is standard practice that surgeons insert the remaining one or more trocars through the marked port locations under direct vision via the endoscope camera in order to avoid the trocar piercing any of the internal anatomical organs. The process of generating a real-time, sequentially registered, intra-operative textured 3D model of the surgical site during the initial look phase creates a non-occupancy volume showing the distance between the abdominal cavity wall and the internal organs. The controller 21a may predict the trajectories of trocars inserted through the additional ports and the GUI may display an augmented reality overlay of trocars inserted through the port locations. The GUI may further display warnings and alarms if the projected trajectory or end point of any trocar inserted through any of the additional ports would collide with the internal organs as shown in the non-occupancy volume. The GUI may also let the clinical staff move the virtual viewpoint of the stereo endoscope camera using the 3D surgical site model in order to take a close look at the projected trocar trajectories through any of the additional ports.
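The trajectory check against the non-occupancy volume may be sketched as a ray march through an occupancy voxel grid; the grid convention, units, and step size below are illustrative assumptions.

```python
import numpy as np

def trocar_collides(entry_mm, direction, occupancy, origin_mm, spacing_mm,
                    max_depth_mm=120.0, step_mm=1.0):
    """March along the predicted trocar trajectory and flag the first organ
    voxel hit (occupancy: boolean grid, True = organ)."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    for t in np.arange(0.0, max_depth_mm, step_mm):
        p = np.asarray(entry_mm, float) + t * d
        idx = np.round((p - origin_mm) / spacing_mm).astype(int)
        if np.any(idx < 0) or np.any(idx >= occupancy.shape):
            break  # trajectory left the mapped volume
        if occupancy[tuple(idx)]:
            return True, float(t)  # collision at depth t (mm)
    return False, None
```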
[0065] It will be understood that various modifications may be made to the embodiments disclosed herein. Therefore, the above description should not be construed as limiting, but merely as exemplifications of various embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the claims appended thereto.
Claims
1. A surgical robotic system comprising: a robotic arm holding an endoscopic camera inserted through an access port; a controller configured to: generate a port location for an access port on a 3D model of a patient; generate a camera insertion model through the port location; and generate a patient-specific setup guide for configuring the access port and the robotic arm; and a display configured to output a graphical user interface including: the 3D model of the patient; the port location of the access port and the camera insertion model; a preview window showing an internal 3D model overlaid on camera images from a plurality of perspectives of the camera insertion model; and distance measurements between ports and target anatomy, with real-time feedback while the port locations are moved.
2. The surgical robotic system according to claim 1, wherein the graphical user interface is configured to receive user input to move the port location and the controller is further configured to update the preview window based on movement of the port location.
3. The surgical robotic system according to claim 1, further comprising: an external vision system with one or more external cameras configured to: capture a plurality of external images of a patient; and capture a plurality of external images of the robotic arm; wherein the controller is further configured to: generate a depth map of a surgical site; generate a registration of preoperative imaging data of the patient with the depth map; track location of the endoscopic camera via kinematics of the robotic arm and visual-simultaneous localization and mapping (Visual-SLAM); and update the registration between the preoperative imaging data with the depth map via fully automatic registration using real-time robotic arm kinematics data and Visual-SLAM.
4. The surgical robotic system according to claim 3, wherein the controller is further configured to: generate an external 3D model of a patient based on a plurality of external images from the external vision system; generate an internal 3D model of the patient based on the preoperative imaging data; and generate a combined 3D model based on the external 3D model and the internal 3D model.
5. The surgical robotic system according to claim 4, wherein the preview window includes a view of the internal 3D model along with the internal 3D model overlaid on images provided by the endoscopic camera.
6. The surgical robotic system according to claim 4, wherein the controller is configured to: generate a skeleton model including a plurality of keypoints for the patient based on the plurality of external images from the external vision system; and generate a deformed 3D model of the patient based on the combined 3D model, position of the patient, the skeleton model, and a patient breathing pattern, with optional input from a pulse oximeter apparatus.
7. The surgical robotic system according to claim 1, wherein the camera insertion model includes an outside portion and an inside portion and the inside portion and the outside portion are visually different in the graphical user interface.
8. The surgical robotic system according to claim 1, wherein the graphical user interface is configured to receive user input to move the camera insertion model and the user input includes hand gestures tracked by an external vision system.
9. The surgical robotic system according to claim 8, wherein the controller is further configured to update the preview window based on movement of the camera insertion model.
10. The surgical robotic system according to claim 1, wherein the controller is further configured to generate a degree of freedom conical volume at the port location.
11. The surgical robotic system according to claim 1, wherein the controller is further configured to generate multiple instrument insertion models, each having an end effector model.
12. The surgical robotic system according to claim 11, wherein the controller is further configured to generate a degree of freedom volume extending from a joint of the end effector model, and the graphical user interface is configured to display a preview window showing playback of recorded end-effector trajectories to evaluate the workspace and any potential collisions between robotic arms and ranges of motion of different types of instruments.
13. A method for determining access port placement, the method comprising: capturing a plurality of external images of a patient from an external vision system; generating an external 3D model of a patient based on the plurality of external images; generating an internal 3D model of the patient based on preoperative imaging data; generating a combined 3D model based on the external 3D model and the internal 3D model; determining an optimal port location for at least one access port based on the combined 3D model; generating a camera insertion model through the optimal port location; generating a preview window of the internal 3D model from a perspective of the camera insertion model; and generating a setup guide for configuring the at least one access port and a robotic arm.
14. The method according to claim 13, further comprising receiving user input to move the camera insertion model.
15. The method according to claim 14, further comprising updating the preview window of the internal 3D model based on movement of the camera insertion model.
16. The method according to claim 14, further comprising generating the external 3D model of the patient based on a depth map of the patient.
17. The method according to claim 14, further comprising generating a skeleton model including a plurality of keypoints for the patient based on the plurality of external images.
18. The method according to claim 17, wherein the external 3D model is generated based on the skeleton model.
19. The method according to claim 18, further comprising deforming the internal 3D model of the patient based on position of the patient and the skeleton model.
20. A surgical robotic system comprising: a first access port; a first robotic arm including a first instrument drive unit coupled to a camera inserted through the first access port; one or more additional access ports; one or more additional robotic arms each including an instrument drive unit coupled to an instrument and inserted through an additional access port; a camera configured to capture an external image of a patient, the first robotic arm, and the one or more additional robotic arms; a controller configured to: generate an external 3D model of a patient based on the external image; generate an internal 3D model of the patient based on preoperative imaging data; generate a combined 3D model based on the external 3D model and the internal 3D model; determine a first optimal port location for a first access port for an endoscopic camera based on the combined 3D model; determine one or more additional optimal port locations for one or more additional access ports based on the combined 3D model and initial exploration images; and generate a patient-specific setup guide for configuring the first access port, the one or more additional access ports, the first robotic arm, and the one or more additional robotic arms; and a display configured to output an augmented reality based graphical user interface configured to display:
the combined 3D model of the patient; the first optimal port location and the one or more additional optimal port locations; a camera insertion model through the first optimal port location; multiple instrument insertion models through the one or more additional optimal port locations; and a preview window of the internal 3D model from a perspective of the camera insertion model.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202363440985P | 2023-01-25 | 2023-01-25 | |
US63/440,985 | 2023-01-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024157113A1 true WO2024157113A1 (en) | 2024-08-02 |
Family
ID=89663624
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2024/050408 WO2024157113A1 (en) | 2023-01-25 | 2024-01-16 | Surgical robotic system and method for assisted access port placement |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024157113A1 (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019139931A1 (en) * | 2018-01-10 | 2019-07-18 | Covidien Lp | Guidance for placement of surgical ports |
WO2021034679A1 (en) * | 2019-08-16 | 2021-02-25 | Intuitive Surgical Operations, Inc. | Systems and methods for performance of external body wall data and internal depth data-based performance of operations associated with a computer-assisted surgical system |
WO2021146339A1 (en) * | 2020-01-14 | 2021-07-22 | Activ Surgical, Inc. | Systems and methods for autonomous suturing |
WO2021202433A1 (en) * | 2020-03-30 | 2021-10-07 | Dimension Orthotics, LLC | Apparatus for anatomic three-dimensional scanning and automated three-dimensional cast and splint design |
US20210378768A1 (en) * | 2020-06-05 | 2021-12-09 | Verb Surgical Inc. | Remote surgical mentoring |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24701508; Country of ref document: EP; Kind code of ref document: A1 |