WO2026013668A1 - Surgical robotic systems and methods for using the same - Google Patents

Surgical robotic systems and methods for using the same

Info

Publication number
WO2026013668A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
surgical
processor
user
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IL2025/050580
Other languages
French (fr)
Inventor
Mark C. Dace
Andrew J. WALD
Jaffar Hleihil
Adi Sandelson
Juan P. ANGULO
Saideep NAKKA
Alon FOGEL
Jeffrey Gum
Douglas J. Fox
Christopher R. GOOD
Ronald A. Lehman
Gregory T. POULTER
Yair Barzilay
Chetan Patel
Harel ARZI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mazor Robotics Ltd
Original Assignee
Mazor Robotics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mazor Robotics Ltd filed Critical Mazor Robotics Ltd
Publication of WO2026013668A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 34/25 User interfaces for surgical systems
    • A61B 34/30 Surgical robots
    • A61B 34/70 Manipulators specially adapted for use in surgery
    • A61B 34/74 Manipulators with manual electric input means
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/06 Measuring instruments not otherwise provided for
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37 Surgical systems with images on a monitor during operation
    • A61B 90/39 Markers, e.g. radio-opaque or breast lesions markers
    • A61B 17/00 Surgical instruments, devices or methods
    • A61B 2017/00017 Electrical control of surgical instruments
    • A61B 2017/00203 Electrical control of surgical instruments with speech control or speech recognition
    • A61B 2017/00207 Electrical control of surgical instruments with hand gesture control or hand gesture recognition
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2051 Electromagnetic tracking systems
    • A61B 2034/2055 Optical tracking systems
    • A61B 2034/252 User interfaces for surgical systems indicating steps of a surgical procedure
    • A61B 2034/742 Joysticks
    • A61B 2034/743 Keyboards
    • A61B 2034/744 Mouse
    • A61B 2090/064 Measuring instruments for measuring force, pressure or mechanical tension
    • A61B 2090/374 NMR or MRI
    • A61B 2090/376 Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
    • A61B 2090/3762 Using computed tomography systems [CT]
    • A61B 2090/3764 Using CT with a rotating C-arm having a cone beam emitting source
    • A61B 2090/378 Surgical systems with images on a monitor during operation using ultrasound
    • A61B 2090/3937 Visible markers
    • A61B 2090/3941 Photoluminescent markers

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Gynecology & Obstetrics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)

Abstract

A system according to at least one embodiment of the present disclosure includes: a processor; and a memory storing data thereon that, when executed by the processor, enable the processor to: initiate a robot to perform a first step of a surgical task; receive information associated with the robot performing the surgical task, the information including sensor information from a plurality of sensors; determine, based on the first step of the surgical task, a subset of the sensor information that is relevant to a second step of the surgical task; analyze the subset of the sensor information; generate, based on the analysis, a control signal; and automatically provide the control signal to the robot to perform the second step of the surgical task.

Description

SURGICAL ROBOTIC SYSTEMS AND METHODS FOR USING THE SAME
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 63/669,440, filed 10 July 2024, the entire content of which is incorporated herein by reference.
BACKGROUND
[0002] The present disclosure is generally directed to surgical systems, and relates more particularly to robotic surgical systems.
[0003] Surgical robots may assist a surgeon or other medical provider in carrying out a surgical procedure, or may complete one or more surgical procedures autonomously. Imaging may be used by a medical provider for diagnostic and/or therapeutic purposes. Patient anatomy can change over time, particularly following placement of a medical implant in the patient anatomy.
BRIEF SUMMARY
[0004] Example aspects of the present disclosure include:
[0005] A system according to at least one embodiment of the present disclosure comprises: a processor; and a memory storing data thereon that, when executed by the processor, enable the processor to: initiate a robot to perform a first step of a surgical task; receive information associated with the robot performing the surgical task, the information comprising sensor information from a plurality of sensors; determine, based on the first step of the surgical task, a subset of the sensor information that is relevant to a second step of the surgical task; analyze the subset of the sensor information; generate, based on the analysis, a control signal; and automatically provide the control signal to the robot to perform the second step of the surgical task.
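By way of non-limiting illustration, the following Python sketch shows one way the pipeline recited above (initiate a step, select the subset of sensor information relevant to the next step, analyze it, and emit a control signal) could be organized. All names, step labels, and thresholds (SensorReading, RELEVANT_KINDS, force_limit) are assumptions made for illustration and are not taken from the disclosure.

    # Illustrative sketch only; the structure and names below are assumptions,
    # not the disclosed implementation.
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        sensor_id: str
        kind: str      # e.g., "force", "capacitance", "image"
        value: float

    # Hypothetical mapping from a completed step to the sensor kinds relevant
    # to the next step of the surgical task.
    RELEVANT_KINDS = {
        "drill_pilot_hole": {"force", "capacitance"},
        "insert_screw": {"force"},
    }

    def select_relevant(readings, completed_step):
        """Determine the subset of sensor information relevant to the next step."""
        kinds = RELEVANT_KINDS.get(completed_step, set())
        return [r for r in readings if r.kind in kinds]

    def generate_control_signal(subset, force_limit=25.0):
        """Analyze the selected subset and generate a control signal."""
        if any(r.kind == "force" and r.value > force_limit for r in subset):
            return {"command": "pause"}          # unexpected resistance: stop
        return {"command": "proceed_next_step"}  # continue to the second step

    # Usage: after the first step completes, filter the stream and act on it.
    readings = [SensorReading("f1", "force", 12.3),
                SensorReading("c1", "capacitance", 0.8)]
    signal = generate_control_signal(select_relevant(readings, "drill_pilot_hole"))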
[0006] Any of the aspects herein, wherein the data, when processed by the processor, further enable the processor to: receive at least one of force information, capacitance information, and one or more images of a patient.
[0007] Any of the aspects herein, wherein the data, when processed by the processor, further enable the processor to: receive a user input; and adjust, based on the user input, the control signal.
[0008] Any of the aspects herein, wherein the data, when processed by the processor, further enable the processor to: provide, to a display, a prompt to a user to approve the control signal.
[0009] Any of the aspects herein, wherein the user input comprises at least one of a keyboard entry, a gesture, a voice input, and an indication from a pointer device.
[0010] Any of the aspects herein, wherein the control signal causes the robot to pause performance of the surgical task.
[0011] Any of the aspects herein, wherein the data, when processed by the processor, further enable the processor to: receive a user input; and cause, based on the user input, the robot to resume the performance of the surgical task.
[0012] Any of the aspects herein, wherein the control signal causes the robot to move from a current pose to a previous, known pose.
[0013] Any of the aspects herein, wherein the control signal causes a change to at least one of a drill speed of a surgical tool connected to the robot, a drill direction of the surgical tool, a trajectory of the surgical tool, a position of the robot, and a path along which the surgical tool operates.
[0014] Any of the aspects herein, wherein the control signal causes a change to at least one of a removal speed of a surgical tool connected to the robot, a direction of the surgical tool, a trajectory of the surgical tool, a position of the robot, and a path along which the surgical tool operates.
[0015] A system according to at least one embodiment of the present disclosure comprises: a robot; a processor; and a memory storing data thereon that, when executed by the processor, enable the processor to: receive information associated with the robot performing a first step of a surgical task, the information comprising sensor information from a plurality of sensors; determine, based on the first step of the surgical task, a subset of the sensor information that is relevant to a second step of the surgical task; analyze the subset of the sensor information; generate, based on the analysis, an output; and automatically provide the output to the robot to perform the second step of the surgical task.
[0016] Any of the aspects herein, wherein the data, when processed by the processor, further enable the processor to: receive at least one of force information, capacitance information, and one or more images of a patient.
[0017] Any of the aspects herein, wherein the data, when processed by the processor, further enable the processor to: receive a user input; and adjust, based on the user input, the output.
[0018] Any of the aspects herein, wherein the data, when processed by the processor, further enable the processor to: provide, to a display, a prompt to a user to approve the output.
[0019] Any of the aspects herein, wherein the user input comprises at least one of a keyboard entry, a gesture, a voice input, and an indication from a pointer device.
[0020] Any of the aspects herein, wherein the output causes the robot to pause performance of the surgical task.
[0021] Any of the aspects herein, wherein the data, when processed by the processor, further enable the processor to: receive a user input; and cause, based on the user input, the robot to resume the performance of the surgical task.
[0022] Any of the aspects herein, wherein the output causes the robot to move from a current pose to a previous, known pose.
[0023] Any of the aspects herein, wherein the output causes a change to at least one of a drill speed of a surgical tool connected to the robot, a drill direction of the surgical tool, a trajectory of the surgical tool, a position of the robot, and a path along which the surgical tool operates.
[0024] Any of the aspects herein, wherein the output causes a change to at least one of a removal speed of a surgical tool connected to the robot, a direction of the surgical tool, a trajectory of the surgical tool, a position of the robot, and a path along which the surgical tool operates.
[0025] A method according to at least one embodiment of the present disclosure comprises: initiating a robot to perform a first step in a surgical task; receiving information associated with the robot performing the surgical task, the information comprising sensor information from a plurality of sensors; determining, based on the first step of the surgical task, a subset of the sensor information that is relevant to a second step of the surgical task; analyzing the subset of the sensor information; generating, based on the analysis, one or more outputs; and automatically providing the one or more outputs to the robot to perform the second step of the surgical task.
[0026] Any of the aspects herein, wherein controlling the robot comprises causing the robot to pause performance of the surgical task, and wherein the method further comprises: receiving a user input; and adjusting, based on the user input, the one or more outputs.
[0027] Any aspect in combination with any one or more other aspects.
[0028] Any one or more of the features disclosed herein.
[0029] Any one or more of the features as substantially disclosed herein.
[0030] Any one or more of the features as substantially disclosed herein in combination with any one or more other features as substantially disclosed herein.
[0031] Any one of the aspects/features/embodiments in combination with any one or more other aspects/features/embodiments.
[0032] Use of any one or more of the aspects or features as disclosed herein.
[0033] It is to be appreciated that any feature described herein can be claimed in combination with any other feature(s) as described herein, regardless of whether the features come from the same described embodiment.
[0034] The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description and drawings, and from the claims.
[0035] The phrases “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together. When each one of A, B, and C in the above expressions refers to an element, such as X, Y, and Z, or class of elements, such as X1-Xn, Y1-Ym, and Z1-Zo, the phrase is intended to refer to a single element selected from X, Y, and Z, a combination of elements selected from the same class (e.g., X1 and X2) as well as a combination of elements selected from two or more classes (e.g., Y1 and Zo).
[0036] The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising”, “including”, and “having” can be used interchangeably.
[0037] The preceding is a simplified summary of the disclosure to provide an understanding of some aspects of the disclosure. This summary is neither an extensive nor exhaustive overview of the disclosure and its various aspects, embodiments, and configurations. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure but to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other aspects, embodiments, and configurations of the disclosure are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
[0038] Numerous additional features and advantages of the present disclosure will become apparent to those skilled in the art upon consideration of the embodiment descriptions provided hereinbelow.
BRIEF DESCRIPTION OF THE DRAWINGS
[0039] The accompanying drawings are incorporated into and form a part of the specification to illustrate several examples of the present disclosure. These drawings, together with the description, explain the principles of the disclosure. The drawings simply illustrate preferred and alternative examples of how the disclosure can be made and used and are not to be construed as limiting the disclosure to only the illustrated and described examples. Further features and advantages will become apparent from the following, more detailed, description of the various aspects, embodiments, and configurations of the disclosure, as illustrated by the drawings referenced below.
[0040] Fig. 1 is a block diagram illustrating aspects of a system for performing a surgery or surgical procedure according to at least one embodiment of the present disclosure;
[0041] Fig. 2A is a conceptual diagram of aspects of an imaging device according to at least one embodiment of the present disclosure;
[0042] Fig. 2B is a conceptual diagram of additional aspects of the imaging device according to at least one embodiment of the present disclosure;
[0043] Fig. 3 is a conceptual diagram of aspects of a robot and a navigation system according to at least one embodiment of the present disclosure;
[0044] Fig. 4 is a block diagram of aspects of a computing system according to at least one embodiment of the present disclosure;
[0045] Fig. 5 is a flowchart according to at least one embodiment of the present disclosure;
[0046] Fig. 6 is a flowchart according to at least one embodiment of the present disclosure; and
[0047] Fig. 7 is a flowchart according to at least one embodiment of the present disclosure.
[0048] Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of examples, aspects, and features illustrated.
[0049] In some instances, the apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding of the various implementations, examples, aspects, and features so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0050] It should be understood that various aspects disclosed herein may be combined in different combinations than the combinations specifically presented in the description and accompanying drawings. It should also be understood that, depending on the example or embodiment, certain acts or events of any of the processes or methods described herein may be performed in a different sequence, and/or may be added, merged, or left out altogether (e.g., all described acts or events may not be necessary to carry out the disclosed techniques according to different embodiments of the present disclosure). In addition, while certain aspects of this disclosure are described as being performed by a single module or unit for purposes of clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units or modules associated with, for example, a computing device and/or a medical device.
[0051] Before any embodiments of the disclosure are explained in detail, it is to be understood that the disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The disclosure is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the present disclosure may use examples to illustrate one or more aspects thereof. Unless explicitly stated otherwise, the use or listing of one or more examples (which may be denoted by “for example,” “by way of example,” “e.g.,” “such as,” or similar language) is not intended to and does not limit the scope of the present disclosure.
[0052] The terms proximal and distal are used in this disclosure with their conventional medical meanings, proximal being closer to the operator or user of the system, and further from the region of surgical interest in or on the patient, and distal being closer to the region of surgical interest in or on the patient, and further from the operator or user of the system.
[0053] The use of robotic surgical systems may help surgeons achieve clinical goals and objectives. According to at least one embodiment of the present disclosure, an autonomous bone cutting surgical robotic system is provided that implements tissue cutting devices, navigation positioning, robotic control, and advanced imaging. The robotic system may be beneficial in spine procedures as tissue removal is an important step in most spine workflows.
[0054] According to at least one embodiment of the present disclosure, the surgical robotic system may be controlled by one or more inputs. For example, the surgical robotic system may comprise a fully autonomous mechanism driven by closed-loop feedback acquired intraoperatively. The intraoperative information may comprise sensor information (e.g., force measurements, capacitance measurements, etc.) that may initiate an instantaneous system stop if unexpected conditions, tissue, or structures are encountered. In another example, the surgical robotic system may receive one or more inputs associated with a user (e.g., a surgeon), such as from a pointer device, keyboard entries, gestures (whether on a touch screen or in three-dimensional (3D) space), voice inputs, combinations thereof, and/or the like. In yet another example, the surgical robotic system may require a virtual or physical element (e.g., a button, switch, lever, pedal, etc.) to be actuated before progressing with a surgical task. In one example, a user may use a foot pedal or other physical throttle to control the cutting technology, direction, speed, and/or other aspects of the operation of the surgical robotic system. In some embodiments, the surgical robotic system may comprise a manual mode that enables the user to alter the path, speed, trajectory, position, technology, combinations thereof, etc. of the robotic surgical system (e.g., a “joystick” mode where the user can manipulate a robotic arm using a physical or virtual controller). The user of the robotic surgical system may receive haptic, visual, audio, or other feedback when in the manual mode based on, for example, proximity of a surgical tool to a target or sensitive tissue.
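A minimal sketch of the closed-loop stop described above, assuming hypothetical force and capacitance limits and a robot interface exposing a stop() method; none of these details are specified by the disclosure:

    # Hedged sketch of the closed-loop stop; the thresholds and the
    # robot.stop() interface are illustrative assumptions.
    def monitor(robot, sensor_stream, force_limit_n=30.0, capacitance_limit=1.5):
        """Stop the robot immediately if a reading suggests unexpected
        conditions, tissue, or structures."""
        for sample in sensor_stream:  # e.g., an iterator yielding dicts per tick
            if sample.get("force_n", 0.0) > force_limit_n:
                robot.stop(reason="force limit exceeded")
                return
            if sample.get("capacitance", 0.0) > capacitance_limit:
                robot.stop(reason="unexpected tissue contact suspected")
                return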
[0055] According to at least one embodiment of the present disclosure, the surgical robotic system may be able to restart or resume a surgical task in the surgical plan after an automatic or manual stop has been initiated. In other words, the surgical robotic system may begin from where the surgical robotic system “left off”. In some cases, the robotic surgical system may be rewound to prior known positions (e.g., after a robotic arm moves from a first pose to a second pose, the user may be able to cause the robotic arm to move back to the first pose). The robotic surgical system may enable the user to verify the status of the surgery or surgical procedure, visualize the next step in the surgery or surgical procedure (e.g., via a laser pointer showing a cut path), and provide an input that enables the robotic surgical system to proceed with the next step of the surgery or surgical procedure.
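One way the “rewind to a prior known pose” behavior could be realized is a simple pose history; the structure below is an illustrative assumption, not the disclosed implementation:

    class PoseHistory:
        """Record each commanded pose so the arm can be rewound to a prior,
        known pose (an illustrative structure, not the disclosed one)."""

        def __init__(self):
            self._poses = []

        def record(self, pose):
            self._poses.append(pose)

        def rewind(self):
            """Discard the current pose and return the previous known pose."""
            if len(self._poses) < 2:
                raise RuntimeError("no earlier pose to rewind to")
            self._poses.pop()       # drop the current pose
            return self._poses[-1]  # the previous, known pose

    # Usage with illustrative coordinates:
    history = PoseHistory()
    history.record((0.10, 0.20, 0.30))  # first pose
    history.record((0.12, 0.21, 0.33))  # second pose
    target = history.rewind()           # -> (0.10, 0.20, 0.30)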
[0056] Embodiments of the present disclosure provide technical solutions to one or more of the problems of (1) inaccurate resection of anatomical tissues and (2) lack of user control over surgical robotic systems.
[0057] With reference to Fig. 1, an example surgical robotic system 100 that supports aspects of the present disclosure is shown according to at least one embodiment of the present disclosure. The surgical robotic system 100 depicted in Fig. 1 includes a computing system 102 that may be in communication with an imaging device 104, a navigation system 108, a robot 112, a user device 116, a database 120, and/or other components. Systems according to other implementations of the present disclosure may include more or fewer components than illustrated in Fig. 1. For example, systems and methods described herein may be implemented while omitting and/or including additional instances of one or more of the computing system 102, the imaging device 104, the robot 112, and/or the user device 116. In another example, the system may omit features such as the imaging device 104. The computing system 102 or similar systems may be used, for example, in conjunction with the imaging device 104, the navigation system 108, the robot 112, the user device 116, the database 120, and/or other components to carry out one or more aspects of any of the methods 500, 600, and/or 700 described herein. The computing system 102 or similar systems may also be used for other purposes.
[0058] In some examples, the computing system 102 may take the form of a computer workstation, handheld computing device, server or other network computing device, or external program that includes a user interface 416 for presenting information to and receiving input from a user. The user (e.g., a physician, technician, clinician, member of surgical staff, etc.) may interact with the computing system 102 via the user device 116 to process images of patients (e.g., images from the imaging device 104) for the purposes of performing or assisting with a surgery or surgical procedure. For instance, the user may interact with the computing system 102 to facilitate surgical ablation(s), implant surgical screws, insert Deep Brain Stimulation (DBS) leads, perform surgical drilling and/or cutting procedures, perform soft tissue removal, combinations thereof, and/or the like.
[0059] The computing system 102 may communicate with the imaging device 104, the navigation system 108, the robot 112, the user device 116, and/or the database 120 via a cloud or other network. The cloud or other network may comprise one or more computing devices (not shown), such as one or more non-edge switches, routers, hubs, gateways, security devices such as firewalls, computer terminals, wireless mobile devices (e.g., cellular phones), wireless access points, or other network devices. The cloud or other network may provide computing devices, the computing system 102, the imaging device 104, the navigation system 108, the robot 112, the user device 116, the database 120, and/or other components access to the Internet, and may provide a communication framework that allows the computing devices to communicate with one another. In some cases, the cloud or other network may be a private network that allows the computing system 102, the imaging device 104, the navigation system 108, the robot 112, the user device 116, the database 120, and/or other components to communicate with one another via a wired connection, a wireless connection, or both. In such cases, the communications between the foregoing components may be encrypted.
[0060] The user device 116 may be or comprise a keyboard, mouse, trackball, joystick, monitor, television, screen, touchscreen, and/or any other device for enabling a user to interact with the computing system 102, the imaging device 104, the navigation system 108, the robot 112, the database 120, and/or other components. For example, the user device 116 may enable the user to input information (e.g., instructions) into and/or receive information from the computing system 102, the imaging device 104, the navigation system 108, the robot 112, the database 120, and/or other components. The user may use the user device 116, for example, to select or provide other input regarding any step of any method described herein. Notwithstanding the foregoing, any required input for any step of any method described herein may be generated automatically by the computing system 102 (e.g., by the processor 404 or another component of the computing system 102) or received by the computing system 102 from a source external to the computing system 102. In some cases, the user device 116 may be useful to allow a surgeon or other user to modify instructions to be executed by the processor 404 according to one or more embodiments of the present disclosure, and/or to modify or adjust a setting of other information displayed on the user device 116 or corresponding thereto.
[0061] In some cases, a user of the surgical robotic system 100 (e.g., a clinician) may provide one or more user inputs via the user device 116. The one or more user inputs may control one or more components of the surgical robotic system 100, such as the robot 112. For example, the user may provide a voice input via a microphone or other audio processing device connected to the surgical robotic system 100. The voice input may cause the robot 112 to begin, pause, or continue a surgical task (e.g., the user says “stop” into the microphone, and the robot 112 pauses the surgical task; the user then says “go” into the microphone, and the robot 112 continues performing the surgical task). In some embodiments, the user may provide the one or more user inputs via a keyboard entry or touchscreen, such as when the user presses a button (e.g., a physical button on the keyboard, a virtual button on the touchscreen, etc.) to adjust the drill speed or removal speed of a surgical drill connected to the robot 112. In another example, the user may provide the user input through an indication from a pointer device. The surgical robotic system 100 may include a pointer device (e.g., a laser pointer) that is tracked by the surgical robotic system 100 (e.g., using a laser spot detector and image processing, or similar technology) as the user moves the pointer device within the surgical environment. When the user directs the pointer device to a specific location, the surgical robotic system 100 may cause the robot 112 to respond accordingly, such as by causing the robot 112 to move to the specific location. It is to be understood that, while the foregoing examples are directed to control of the robot 112, alternative components of the surgical robotic system 100 (e.g., the imaging device 104) may be similarly controlled by the one or more user inputs.
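As a hedged illustration of the voice-input example above, a dispatch table could map recognized utterances such as “stop” and “go” to robot actions; the speech-to-text step itself and the pause_task/resume_task methods are hypothetical:

    # Hypothetical dispatch of recognized utterances to robot actions.
    def handle_voice_command(robot, transcript):
        commands = {
            "stop": robot.pause_task,  # pause the surgical task
            "go": robot.resume_task,   # continue the surgical task
        }
        action = commands.get(transcript.strip().lower())
        if action is not None:
            action()
        return action is not None  # True if the utterance was recognized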
[0062] The imaging device 104 may comprise one or more components capable of capturing X-ray, fluoroscopy, Computed Tomography (CT), and/or other images of a patient 110. The captured images may comprise anatomical feature(s) (e.g., bones, veins, nerves, soft tissues, etc.) of the patient 110 and/or other aspects of the anatomy of the patient 110. The imaging device 104 may capture a single, still image of the patient 110, and may additionally or alternatively capture multiple images of the patient 110 continuously to generate a data stream of images that depict movement of the anatomical feature(s) of the patient over time. The imaging device 104 may generate two-dimensional (2D) or three-dimensional (3D) images. In some examples, the imaging device 104 may be used to capture one or more preoperative images, one or more intraoperative images, one or more postoperative images, and/or one or more images taken independently of any surgery or surgical procedure.
[0063] The database 120 may store information that correlates one coordinate system to another (e.g., one or more robotic coordinate systems to a patient coordinate system and/or to a navigation coordinate system). The database 120 may additionally or alternatively store, for example, one or more surgical plans (including, for example, pose information about a target and/or image information about a patient’s anatomy at and/or proximate the surgical site, for use by the robot 112, the navigation system 108, and/or a user of the computing system 102); one or more images useful in connection with a surgery to be completed by or with the assistance of one or more other components of the computing system 102; and/or any other useful information. The database 120 may be configured to provide any such information to the computing system 102, the imaging device 104, the navigation system 108, the robot 112, and/or to any other device. In some embodiments, the database 120 may be or comprise part of a hospital image storage system, such as a picture archiving and communication system (PACS), a health information system (HIS), and/or another system for collecting, storing, managing, and/or transmitting electronic medical records including image data.
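For illustration only, the coordinate-system correlations described above could be stored as 4x4 homogeneous transforms keyed by coordinate-system pairs; the keys and identity placeholders below are assumptions, not the disclosed schema:

    import numpy as np

    # Assumed representation: 4x4 homogeneous transforms correlating one
    # coordinate system to another, keyed by (source, target) pairs.
    transforms = {
        ("robot", "patient"): np.eye(4),       # identity placeholders only
        ("patient", "navigation"): np.eye(4),
    }

    def map_point(point_xyz, source, target):
        """Map a 3D point from one stored coordinate system to another."""
        T = transforms[(source, target)]
        p = np.append(np.asarray(point_xyz, dtype=float), 1.0)  # homogeneous
        return (T @ p)[:3]

    def compose(a, b, c):
        """Chain the A->B and B->C transforms into a single A->C transform."""
        return transforms[(b, c)] @ transforms[(a, b)]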
[0064] The imaging device 104 may be or comprise one or more components capable of performing various imaging modalities. For example, the imaging device 104 may be or comprise an O-arm, C-arm, G-arm, or any other device capable of utilizing X-ray-based imaging (e.g., a fluoroscope, a CT scanner, an X-ray machine, etc.). As another example, the imaging device 104 may be or comprise an ultrasound scanner (which may comprise a physically separate transducer and receiver or a single ultrasound transceiver) capable of generating information that can be processed to produce an ultrasound image. In other examples, the imaging device 104 may be or comprise a magnetic resonance imaging (MRI) scanner, an optical camera, a thermographic (e.g., infrared) camera, or any other imaging device suitable for obtaining images of anatomical feature(s) of the patient 110. In some cases, the components of the imaging device 104 may be contained entirely within a single housing, such as when the imaging device 104 comprises an optical camera or other imaging device whose components can be positioned together. Additionally or alternatively, the imaging device 104 may comprise components housed in separate housings, such as when the imaging device 104 comprises a transmitter/emitter positioned in a first housing and a receiver/detector positioned in a second housing such that the two components are physically separated.
[0065] As previously mentioned, the imaging device 104 may comprise more than one imaging device. For example, the imaging device 104 may comprise a first imaging device that provides first image data (e.g., data that can be manipulated by hardware and/or software components of the computing system 102 to generate an image) and/or a first image, and a second imaging device that provides second image data and/or a second image. In such examples, the first imaging device and the second imaging device may correspond to components that implement different imaging modalities (e.g., the first imaging device implements X-ray fluoroscopy, while the second imaging device implements ultrasonic waves), which may offer the user of the computing system 102 additional flexibility in imaging the patient 110.
[0066] With reference to Figs. 2A-2B, aspects of an example imaging device 104 are shown in accordance with at least one embodiment of the present disclosure. In the example shown in Figs. 2A-2B, the imaging device 104 may be or comprise an O-arm or other imaging apparatus capable of imaging anatomical features or other features of the patient 110. The imaging device 104 depicted in Figs. 2A-2B may comprise an upper portion 208 and a lower portion 212 connected by a pair of sidewalls. In some cases, the imaging device 104 may be secured to the ground surface or floor 216 of an operating room or other surgical environment. In other cases, the imaging device 104 may be releasably securable to the floor 216 or may be a standalone component that is supported by the floor 216.
[0067] The patient 110 may be positioned on a table 204 that is positioned orthogonally to and extends at least partially through the isocenter of the imaging device 104, such that the imaging device 104 can image one or more portions of the patient 110. In some cases, the table 204 may be mounted to the imaging device 104. In other cases, the table 204 may be releasably mounted to the imaging device 104. In cases where the table 204 is mounted to the imaging device 104 (whether detachably mounted or permanently mounted), the table 204 may be mounted to the imaging device 104 such that a pose of the table 204 relative to the imaging device 104 is selectively adjustable. In still other cases, the table 204 may not be attached to the imaging device 104. In such cases, the table 204 may be supported and/or mounted to an operating room wall, for example.
[0068] The table 204 may be any operating table configured to support the patient 110 during a surgical procedure. The table 204 may include any accessories mounted to or otherwise coupled to the table 204 such as, for example, a bed rail, a bed rail adaptor, an arm rest, an extender, or the like. The table 204 may be stationary or may be operable to maneuver the patient 110 (e.g., the table 204 may be able to move). In some cases, the table 204 has two positioning degrees of freedom and one rotational degree of freedom, which allows positioning of the specific anatomy of the patient anywhere in space (within a volume defined by the limits of movement of the table 204). For example, the table 204 can slide forward and backward and from side to side, and can tilt (e.g., around an axis positioned between the head and foot of the table 204 and extending from one side of the table 204 to the other) and/or roll (e.g., around an axis positioned between the two sides of the table 204 and extending from the head of the table 204 to the foot thereof). In other cases, the table 204 can bend at one or more areas (which bending may be possible due to, for example, the use of a flexible surface for the table 204, or by physically separating one portion of the table 204 from another portion of the table 204 and moving the two portions independently). In at least some examples, the table 204 may be manually moved or manipulated by, for example, a surgeon or other user, or the table 204 may comprise one or more motors, actuators, and/or other mechanisms configured to enable movement and/or manipulation of the table 204 by a processor such as the processor 404.
[0069] The imaging device 104 may also comprise a gantry. The gantry may be or comprise a substantially circular, or “O-shaped,” housing that enables imaging of objects placed into an isocenter of the housing. In other words, the gantry may be positioned around the object being imaged (e.g., the patient 110). In some examples, the gantry may be disposed at least partially within the upper portion 208, the sidewalls, and the lower portion 212 of the imaging device 104.
[0070] The imaging device 104 comprises a source 224 and a detector 228. The source 224 may be or comprise a device configured to generate and emit radiation, and the detector 228 may be or comprise a device configured to detect the emitted radiation. In some examples, the source 224 and the detector 228 may be or comprise an imaging source and an imaging detector (e.g., the source 224 and the detector 228 are used to generate data useful for producing images). The source 224 may be positioned in a first position and the detector 228 may be positioned in a second position opposite the source 224. In some examples, the source 224 may comprise an X-ray source such as, for example, a thermionic emission tube, a cold emission X-ray tube, and/or the like. The source 224 may project a radiation beam that passes through the patient 110 and onto the detector 228 located on the opposite side of the patient 110. The detector 228 may be or comprise one or more sensors that receive the radiation beam (e.g., once the radiation beam has passed through the patient 110) and transmit information related to the radiation beam to one or more other components of the computing system 102 for processing, such as to the processor 404. In some examples, the detector 228 may comprise an array. For example, the detector 228 may comprise three 2D flat panel solid-state detectors arranged side-by-side, and angled to approximate the curvature of the imaging device 104. It will be understood, however, that various detectors and detector arrays can be used with the imaging device 104, including any detector configurations used in typical diagnostic fan-beam or cone-beam CT scanners. The source 224 and/or the detector 228 may comprise a collimator 232. The collimator 232 may be configured to confine or shape the radiation beam as the radiation beam is emitted from the source 224 and/or as the radiation beam is received by the detector 228.
[0071] The source 224 and the detector 228 may be attached to the gantry and configured to rotate 360 degrees around the patient 110 in a continuous or step-wise manner so that the radiation beam can be projected through the patient 110 at various angles. In other words, the source 224 and the detector 228 may rotate, spin, or otherwise revolve about an axis that passes through the top and bottom of the patient, with the volume of interest positioned at the isocenter of the imaging device 104. The imaging device 104 comprises a drive mechanism capable of causing the gantry to move such that the source 224 and the detector 228 encircle the patient 110 on the table 204. Additionally or alternatively, the source 224 and the detector 228 may move along a length of the patient 110. For example, the table 204 holding the patient 110 may move in a lateral direction while the source 224 and detector 228 remain in a fixed location, such that the length of the patient can be scanned. At each projection angle in the revolution, the radiation beam passes through and is attenuated by the patient 110. The attenuated radiation is then detected by the detector 228. The detected radiation from each of the projection angles can then be processed, using various reconstruction techniques such as image processing 424, to produce a 2D or 3D reconstruction image of one or more anatomical features of the patient 110. In one example, the processor 404 may use the image processing 424 to generate a 3D cone beam computed tomography (CBCT) reconstruction image.
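As a rough, non-limiting sketch of how projections acquired at multiple angles can be combined into an image, the following implements a highly simplified parallel-beam backprojection; practical CBCT reconstruction (e.g., the FDK algorithm commonly used with cone-beam geometry) involves filtering and cone-beam weighting that are omitted here:

    import numpy as np

    def backproject(sinogram, angles_deg, size):
        """Highly simplified parallel-beam backprojection: each projection is
        smeared back across the image along its acquisition angle. Filtering
        and cone-beam (FDK) weighting used in practice are omitted."""
        recon = np.zeros((size, size))
        center = size // 2
        ys, xs = np.mgrid[0:size, 0:size]
        xs, ys = xs - center, ys - center
        for projection, angle in zip(sinogram, np.deg2rad(angles_deg)):
            # Detector coordinate sampled by each image pixel at this angle.
            t = xs * np.cos(angle) + ys * np.sin(angle)
            idx = np.clip(np.round(t + center).astype(int), 0, size - 1)
            recon += projection[idx]  # accumulate the attenuation measurements
        return recon / len(angles_deg)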
[0072] In some examples, a switch 220 may be provided to a user (e.g., a physician, a member of surgical staff, etc.) that can be used to control one or more components of the imaging device 104, the navigation system 108, and/or the robot 112. For example, the switch 220 may comprise a manual override that automatically stops movement of the imaging device 104 and/or the robot 112 when the user actuates the switch 220. In some cases, the switch 220 may be configured to stop movement of the imaging device 104 and/or the robot 112 when the user actuates the switch 220, and to resume movement of the imaging device 104 and/or the robot 112 when the user stops the actuation of the switch 220.
[0073] In some cases, the user of the surgical robotic system 100 may actuate the switch 220 to control the robot 112 as the robot 112 performs a surgical task or, more generally, to control any other component of the surgical robotic system 100. For instance, the switch 220 may control one or more operating parameters of the robot 112 and/or a surgical instrument (e.g., a surgical drill) attached to the robot 112. In one case, the switch 220 may control the drill speed of a surgical drill attached to the robot 112, and the user may interact with the switch 220 to control the drill speed of the surgical drill (e.g., the harder the user pushes down on the switch 220, the faster the operative portion of the surgical drill spins). In another case, the switch 220 may control the removal speed of a surgical tool attached to the robot 112, such as when the user actuates the switch 220 to change a rate at which the surgical tool is extracted from a surgical site. In some embodiments, the switch 220 may comprise multiple switches, with each switch controlling a different operating parameter of the robot 112. For instance, the switch 220 may comprise a foot pedal that controls the drill speed of a surgical tool connected to the robot 112, and a joystick that controls the robot 112, which in turn enables the user to adjust the direction in which the surgical tool drills. In this case, the user may be able to simultaneously operate the foot pedal and the joystick to control the drill speed and direction of the surgical tool.
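A trivial sketch of the pedal-to-drill-speed behavior described above; the linear mapping and the RPM range are illustrative assumptions rather than disclosed values:

    def drill_speed_rpm(pedal_fraction, min_rpm=0.0, max_rpm=12000.0):
        """Map pedal actuation (0.0 = released, 1.0 = fully depressed) to a
        drill speed; the linear mapping and RPM range are assumptions."""
        pedal_fraction = max(0.0, min(1.0, pedal_fraction))
        return min_rpm + pedal_fraction * (max_rpm - min_rpm)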
[0074] With reference to Fig. 3, aspects of a robot and navigation system are shown in accordance with at least one embodiment of the present disclosure. In the example shown in Fig. 3, the patient 110 and the robot 112 are both positioned on the table 204 such that the robot 112 and/or the robotic arms 114 can interact with the patient 110 with assistance or guidance from the navigation system 108. While the example depicts the robot 112 positioned on the table 204, the robot 112 and/or the robotic arms 114 may alternatively be positioned on a cart or other surface near the patient 110. In some cases, the robot 112 and/or the robotic arms 114 are mounted or connected to other portions of the table 204 such that movement of the robot 112 and/or the robotic arms 114 relative to the table 204 is reduced or minimized.
[0075] The robot 112 may be any surgical robot or surgical robotic system. The robot 112 may be or comprise, for example, the Mazor X™ Stealth Edition robotic guidance system. The robot 112 may be configured to manipulate a surgical tool (whether based on guidance from the navigation system 108 or not) to accomplish or to assist with a surgical task. In some examples, the robot 112 may be configured to hold and/or manipulate an anatomical element during or in connection with a surgical procedure. The robot 112 may comprise one or more robotic arms 114. In some examples, the robotic arm 114 may comprise a first robotic arm and a second robotic arm, though the robot 112 may comprise more than two robotic arms. Each robotic arm 114 may be positionable independently of the other robotic arm 114. The robotic arms 114 may be controlled in a single, shared coordinate space, or in separate coordinate spaces.
[0076] In some examples, one or more of the robotic arms 114 may be used to hold and/or maneuver the imaging device 104. In examples where the imaging device 104 comprises two or more physically separate components (e.g., a transmitter and receiver), one robotic arm 114 may hold one such component, and another robotic arm 114 may hold another such component. The robot 112 may be configured to position the imaging device 104 at one or more precise position(s) and orientation(s), and/or to return the imaging device 104 to the same position(s) and orientation(s) at a later point in time.
[0077] The robot 112, together with the robotic arm 114, may have, for example, one, two, three, four, five, six, seven, or more degrees of freedom. Further, the robotic arm 114 may be positioned or positionable in any pose, plane, and/or focal point. The pose includes a position and an orientation. As a result, an imaging device 104, surgical tool, or other object held by the robot 112 (or, more specifically, by the robotic arm 114) may be precisely positionable in one or more needed and specific positions and orientations. The robotic arm(s) 114 may comprise one or more sensors, markers, and/or the like (e.g., navigation markers 320A-320N) that enable a processor 404 (or a processor of the robot 112) to determine a precise pose in space of the robotic arm (as well as any object or element held by or secured to the robotic arm).
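Since a pose is defined above as a position plus an orientation, one conventional representation (shown as an assumption, not the disclosed data structure) combines a rotation matrix and a translation into a homogeneous transform:

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Pose:
        """A pose as described above: a position plus an orientation."""
        position: np.ndarray  # (3,) translation
        rotation: np.ndarray  # (3, 3) rotation matrix

        def as_matrix(self):
            """Combine orientation and position into a 4x4 homogeneous transform."""
            T = np.eye(4)
            T[:3, :3] = self.rotation
            T[:3, 3] = self.position
            return T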
[0078] In various embodiments, the navigation system 108 may be used to track a position and orientation (e.g., a pose) of the imaging device 104, the robot 112 and/or robotic arm 114, and/or one or more surgical tools (or, more particularly, to track a pose of a navigated tracker attached, directly or indirectly, in fixed relation to one or more of the foregoing). The navigation system 108 may include a display for displaying one or more images from an external source (e.g., the computing system 102, imaging device 104, or other source) or for displaying an image and/or video stream from the one or more cameras or other sensors of the navigation system 108. The navigation system 108 may be configured to provide guidance to a surgeon or other user of the computing system 102 or a component thereof, to the robot 112, and/or the like regarding, for example, a pose of one or more anatomical elements, whether or not a tool is in the proper trajectory, and/or how to move a tool into the proper trajectory to carry out a surgical task according to a preoperative or other surgical plan.
[0079] In some cases and as discussed in further detail below, reference markers (e.g., navigation markers) may be placed on the robot 112 (including, e.g., on the robotic arm 114), the imaging device 104, or any other object in the surgical space. The reference markers may be tracked by the navigation system 108, and the results of the tracking may be used by the robot 112 and/or by an operator of the computing system 102 or any component thereof. In some embodiments, the navigation system 108 can be used to track other components of the system (e.g., imaging device 104) and the system can operate without the use of the robot 112 (e.g., with the surgeon manually manipulating the imaging device 104 and/or one or more surgical tools, based on information and/or instructions generated by the navigation system 108, for example).
[0080] The navigation system 108 may provide navigation for a surgeon and/or a surgical robot during an operation. The navigation system 108 may be any now-known or future-developed navigation system, including, for example, the Medtronic StealthStation™ S8 surgical navigation system or any successor thereof. The navigation system 108 may include one or more cameras (e.g., a navigation camera 324) or other sensor(s) for tracking one or more reference markers, navigated trackers (e.g., navigation markers 320A-320N), or other objects (e.g., optical localizer(s) 308, electromagnetic localizer(s) 316, etc.) within the operating room or other room in which some or all of the navigation system 108 is located. The one or more navigation cameras 324 may be optical cameras, infrared cameras, or other cameras.
[0081] In some cases, the navigation system 108 may comprise a depth sensor (e.g., a sensor positioned within the navigation camera 324) that can identify one or more gestures associated with the physician and/or other individuals. The depth sensor may provide a continuous or live feed of captured gestures to the computing system 102, where the processor 404 may interpret or process the captured gestures (e.g., using image processing 424) for the purposes of controlling the imaging device 104, the robot 112, and/or other components. In some cases, the processor 404 may use gesture recognition software (e.g., OpenPose or other software capable of capturing human hand gestures or other movements) to process the information captured by the depth sensor to determine one or more gestures. The gesture recognition software may in some examples use pose estimation of various portions of the bodies of multiple individuals to determine the relative movements or gestures of the individuals.
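The gesture-control flow described above can be illustrated with the following non-limiting sketch, in which a stub classifier stands in for real gesture-recognition software (e.g., OpenPose-style pose estimation) and the gesture labels, commands, and function names are all assumptions made for illustration:

```python
import queue

# Hypothetical gesture labels and robot commands (assumptions for illustration).
GESTURE_TO_COMMAND = {
    "palm_open": "pause_motion",
    "fist": "resume_motion",
    "swipe_left": "retract_tool",
}

def classify_gesture(depth_frame):
    """Stub standing in for real gesture-recognition software."""
    return None  # i.e., no gesture recognized; replace with a real classifier

def gesture_control_loop(frame_queue: queue.Queue, send_command) -> None:
    """Consume a live depth feed and dispatch recognized gestures as robot commands."""
    while True:
        frame = frame_queue.get()
        if frame is None:  # sentinel: feed closed
            break
        command = GESTURE_TO_COMMAND.get(classify_gesture(frame))
        if command is not None:
            send_command(command)
```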
[0082] In one example, the navigation system 108 implements optical tracking to facilitate navigation. The optical tracking may comprise the navigation system 108 using an optical marker
304 and navigation markers 320A-320N to track objects within the surgical environment. The optical marker 304 may be positioned relative to the patient (e.g., on the table 204) and the navigation markers 320A-320N may be positioned relative to one or more objects (e.g., the robot 112 and/or the robotic arms 114) within the field of view of, and tracked by, the navigation camera 324. The optical marker 304 and/or the navigation markers 320A-320N may be or comprise optical fiducials, reflective surfaces, and/or the like capable of being detected in images generated by the imaging device 104. Additionally or alternatively, the optical marker 304 and/or the navigation markers 320A-320N may be identifiable in real time by the imaging device 104, such as in examples where the imaging device 104 provides a live feed of components within the view of the imaging device 104. Based on the information captured by the imaging device 104, the navigation system 108 may identify the optical markers and use the marker location to determine the pose of the optical markers in an optical coordinate system. The navigation system 108 may then navigate one or more surgical tools (e.g., relative to the optical localizer 308 whose position is known or determined in one or more coordinate systems).
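As a non-limiting illustration of how bright optical fiducials might be located in a camera frame, the sketch below thresholds a normalized grayscale image and returns blob centroids; a production tracker would add sub-pixel refinement, marker-geometry matching, and stereo triangulation, none of which is shown here:

```python
import numpy as np
from scipy import ndimage

def find_fiducial_centroids(frame: np.ndarray, rel_threshold: float = 0.9):
    """Return (row, col) centroids of bright blobs in a grayscale frame.

    Pixels at or above `rel_threshold` of the frame maximum are treated as
    reflective-fiducial candidates; each connected blob yields one centroid.
    """
    bright = frame >= rel_threshold * frame.max()
    labels, n = ndimage.label(bright)  # connected-component labeling
    return ndimage.center_of_mass(bright, labels, range(1, n + 1))
```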
[0083] In another example, the navigation system 108 implements electromagnetic tracking to facilitate navigation. In some cases, the navigation system 108 may implement both electromagnetic and optical tracking (e.g., via co-registration of optical and electromagnetic localizers). The electromagnetic tracking may include the navigation system 108 using an electromagnetic field emitter 312 and an electromagnetic localizer 316 to track objects within the surgical environment. The electromagnetic field emitter 312 generates an electromagnetic field in which one or more components are positioned. The electromagnetic field emitter 312 may generate a constant electromagnetic field, or may alternatively emit a time-variant electromagnetic field. In some examples, the electromagnetic field emitter 312 may comprise a plurality of emitters each configured to generate slightly different electromagnetic fields. The presence of multiple electromagnetic fields may enable multiplexed sensing of the electromagnetic fields to determine the location of objects in the surgical environment.
[0084] In some cases, the electromagnetic field emitter 312 may be positioned proximate to the patient 110 (e.g., positioned underneath the patient, positioned next to the patient, positioned within the table 204, etc.). For example, the patient 110 may lie on a pad, pillow, or other support containing the electromagnetic field emitter 312. In some examples, the known pose of the electromagnetic field emitter 312 may enable the computing system 102 to register the electromagnetic field emitter 312 to one or more other coordinate systems using, for example, registration 436.
[0085] The electromagnetic field generated and emitted by the electromagnetic field emitter 312 may be detected by one or more sensors positioned within the surgical environment such that the navigation system 108 can perform electromagnetic tracking of the sensors. For instance, the electromagnetic localizer 316 may be positioned within the electromagnetic field generated by the electromagnetic field emitter 312. In some cases, the electromagnetic localizer 316 may be positioned near the anatomical elements 302A-302N of the patient 110. It is to be understood that, while a single electromagnetic localizer 316 is depicted in Fig. 3, any number of additional localizers or electromagnetic sensors may be present in the surgical environment. The electromagnetic localizer 316 may comprise one or more electromagnetic sensors or other similar devices capable of measuring aspects of the electromagnetic field (e.g., magnitude of the electromagnetic field, direction of the electromagnetic field, etc.). The sensor measurements may be sent to the computing system 102 (e.g., stored in a memory 408 as sensor information 440) and/or to the database 120. The measurements may be processed by the processor 404 to determine the pose of the electromagnetic localizer 316 relative to an electromagnetic coordinate system. The pose of the electromagnetic localizer 316 may also be known in one or more other coordinate systems (e.g., a patient coordinate system), such that the processor 404 can use registration 436 to register the electromagnetic coordinate system with the patient coordinate system. Based on the registration, the navigation system 108 may use the pose of the electromagnetic localizer 316 and subsequent readings therefrom to perform tracking of one or more anatomical elements, to help navigate one or more surgical tools relative to the patient 110, combinations thereof, and/or the like.
[0086] With reference to Fig. 4, aspects of a computing system 102 are shown in accordance with embodiments of the present disclosure. The computing system 102 is illustrated in Fig. 4 to comprise the processor 404, a memory 408, a communication interface 412, and a user interface 416. In some cases, the computing system 102 may omit and/or include additional features. For example, the user interface 416 may be omitted in cases where the computing system 102 communicates with the user device 116.
[0087] The processor 404 may be configured to execute instructions stored in the memory 408, which instructions may cause the processor 404 to carry out one or more computing steps utilizing or based on data received from the imaging device 104, the navigation system 108, the robot 112, the user device 116, the database 120, and/or any other component. The processor 404 may be or comprise one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessors), graphics processing units (e.g., Nvidia GeForce RTX 2000-series processors, Nvidia GeForce RTX 3000-series processors, AMD Radeon RX 5000-series processors, AMD Radeon RX 6000-series processors, or any other graphics processing units), application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor” as used herein may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. Also, the techniques could be fully implemented in one or more circuits or logic elements.
[0088] The communication interface 412 may be used for receiving image data or other information from an external source (such as the imaging device 104, the robot 112, the navigation system 108, the user device 116, the database 120, and/or any other system or component), and/or for transmitting instructions, images, or other information to an external system or device (e.g., the imaging device 104, the robot 112, the navigation system 108, the database 120, and/or any other system or component). The communication interface 412 may comprise one or more wired interfaces (e.g., a USB port, an Ethernet port, a Firewire port) and/or one or more wireless transceivers or interfaces (configured, for example, to transmit and/or receive information via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, and so forth). In some embodiments, the communication interface 412 may be useful for enabling the computing system 102 to communicate with one or more other processors or computing devices not a part of the computing system 102, whether to reduce the time needed to accomplish a computing-intensive task or for any other reason.
[0089] The user interface 416 may be or comprise a keyboard, mouse, trackball, joystick, monitor, television, screen, touchscreen, and/or any other device for receiving information from a user and/or for providing information to a user. Although the user interface 416 is shown as part of the computing system 102, in some embodiments, the computing system 102 may utilize a user interface 416 that is housed separately from one or more remaining components of the computing system 102. In some examples, the user interface 416 may be located proximate one or more other components of the computing system 102, while in other embodiments, the user interface 416 may be located remotely from one or more other components of the computing system 102. For example, the user interface 416 may be associated with or a part of the user device 116, such as when a surgeon, physician, or other user uses the user device 116 to interact with the computing system 102.
[0090] In one example, the user interface 416 comprises a joystick or other manual input device that enables the user of the surgical robotic system 100 to control one or more components of the surgical robotic system 100 (e.g., the imaging device 104, the robot 112, etc.). The joystick may enable the user to change or control one or more operating features of the robot 112, such as a drill speed of a surgical tool connected to the robot 112, a removal speed of the surgical tool (e.g., the rate at which the surgical tool is extracted from a surgical site), a direction (e.g., a drill direction) of the surgical tool, a trajectory of the surgical tool, a position of the robot 112, and/or a path along which the surgical tool operates. In some cases, the surgical robotic system 100 may provide feedback to the user based on, for example, the operating features of the robot 112 changed by the user using the joystick or other manual input device. For example, haptic feedback (e.g., buzzers within the joystick are actuated), visual feedback (e.g., renderings to a display), and/or audio feedback (e.g., an alarm) may be provided to the user. The feedback may be provided for any reason, such as when the user manually navigates the robot 112 such that the operative end of a surgical tool connected to the robot 112 comes within a threshold distance of a target location, sensitive or vital anatomical features (e.g., within threshold distance of the spinal cord), and/or the like.
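The threshold-distance feedback described above may be illustrated with the following non-limiting sketch, in which the `buzz` and `beep` callbacks and the 5 mm threshold are placeholders rather than values drawn from the disclosure:

```python
import numpy as np

WARN_DISTANCE_MM = 5.0   # illustrative threshold, not a clinical value

def check_proximity_feedback(tool_tip: np.ndarray,
                             critical_point: np.ndarray,
                             buzz, beep) -> bool:
    """Fire haptic and audio feedback when the tool tip nears a critical structure.

    `buzz` and `beep` are callbacks into whatever haptics/audio hardware the
    joystick and console actually expose; they are placeholders here.
    """
    distance = float(np.linalg.norm(tool_tip - critical_point))
    if distance <= WARN_DISTANCE_MM:
        buzz()   # e.g., actuate motors in the joystick
        beep()   # e.g., sound an alarm at the console
        return True
    return False
```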
[0091] The memory 408 may be or comprise RAM, DRAM, SDRAM, other solid-state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions. The memory 408 may comprise a surgical plan 420, image processing 424, segmentation 428, transformation 432, registration 436, sensor information 440, and/or one or more control algorithms 444. The memory 408 may store information or data useful for completing, for example, any step of the methods 500, 600, and/or 700 described herein, or of any other methods. The memory 408 may store, for example, instructions and/or machine learning models that support one or more functions of the robot 112. For instance, the memory 408 may store content (e.g., instructions and/or machine learning models) that, when executed by the processor 404, enable the image processing 424, the segmentation 428, the transformation 432, and/or the registration 436. Such content, if provided as an instruction, may, in some embodiments, be organized into one or more applications, modules, packages, layers, or engines. Alternatively or additionally, the memory 408 may store other types of content or data (e.g., machine learning models, artificial neural networks, deep neural networks, etc.) that can be processed by the processor 404 to carry out the various methods and features described herein. Thus, although various contents of memory 408 may be described as instructions, it should be appreciated that functionality described herein can be achieved through use of instructions, algorithms, and/or machine learning models. The data, algorithms, and/or instructions may cause the processor 404 to manipulate data stored in the memory 408 and/or received from or via the imaging device 104, the robot 112, the database 120, and/or other components.
[0092] The surgical plan 420 may comprise information about one or more surgeries or surgical procedures. For example, the surgical plan 420 may be or comprise information about one or more trajectories (e.g., an implant trajectory for one or more surgical implants), information about the various steps in the surgery or surgical procedure, information about the types of surgical tools to be used in the surgery or surgical procedure, combinations thereof, and/or the like. In some cases, the user may be able to modify the content of the surgical plan 420, such as through user inputs to the user device 116 and/or the user interface 416.
[0093] The image processing 424 enables the processor 404 to process image data of an image (received from, for example, the imaging device 104 or any other imaging device) for the purpose of, for example, identifying information about the patient 110 and/or an object (e.g., a surgical tool, an implant, etc.) depicted in the image. “Image data” as used herein refers to the data generated or captured by the imaging device 104, including in a machine-readable form, a graphical/visual form, and in any other form. The information about the patient 110 and/or the object may comprise, for example, a pose of the patient, a pose of an object (e.g., a surgical tool, an implant, etc.), a boundary of reference marker(s) proximate the patient 110, combinations thereof, and/or the like. In some cases, the image processing 424 may use one or more algorithms to enhance the appearance of the initial image data captured by the imaging device 104 (e.g., artifact removal algorithms, contrast enhancement algorithms, etc.).
[0094] The information obtained from the image processing 424 may enable, for example, determination of the pose of the patient 110, the pose of reference markers (e.g., optical marker 304, navigation markers 320A-320N, etc.) proximate the patient 110, combinations thereof, and/or the like. The information may also enable registration of the patient 110 to a common coordinate frame of the imaging device 104, and/or registration of the elements depicted in the image data to the common coordinate frame of the imaging device 104. The image processing 424 may be used in conjunction with segmentation 428 to identify anatomical features of the patient 110 and/or of one or more objects, as discussed below.
[0095] The segmentation 428 enables the processor 404 to segment image data so as to identify the patient 110 (and/or anatomical features thereof) and/or one or more objects depicted in the image data such as, for example, a surgical tool, an implanted medical device, combinations thereof, and/or the like. The segmentation 428 may enable the processor 404 to identify a boundary of an object or an anatomical feature of the patient 110 using, for example, feature recognition. For example, the segmentation 428 may enable the processor 404 to identify, in the image data, one or more vertebrae of the patient 110 and/or one or more objects (e.g., surgical screws) implanted in the vertebrae of the patient 110. In other examples, the segmentation 428 may enable the processor 404 to identify a boundary of an object (e.g., a boundary of a vertebra, a boundary of a surgical screw, etc.) by determining a difference in or contrast between colors or grayscale values of image pixels.
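As a non-limiting illustration of contrast-based segmentation, the sketch below builds a binary mask from a grayscale intensity band and extracts its boundary pixels; the intensity limits are assumptions, and a production system might instead rely on the machine learning models mentioned earlier:

```python
import numpy as np

def segment_by_intensity(image: np.ndarray, low: float, high: float) -> np.ndarray:
    """Return a binary mask of pixels whose grayscale value falls in [low, high].

    Picking `low`/`high` around typical bone intensities yields a rough
    vertebra mask; metal implants such as screws sit in a higher band.
    """
    return (image >= low) & (image <= high)

def boundary_mask(mask: np.ndarray) -> np.ndarray:
    """Mark boundary pixels: foreground pixels with at least one background neighbor."""
    padded = np.pad(mask, 1, mode="constant")
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior
```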
[0096] The transformation 432 enables the processor 404 to generate transformations that map coordinates in one coordinate system into another coordinate system. In other words, the transformation 432 enables the processor 404 to transform coordinates associated with an object in the surgical environment (e.g., the robot 112, a portion of patient anatomy, etc.) from a first coordinate system (e.g., a patient coordinate system) into a second coordinate system (e.g., a reference frame coordinate system) based on, for example, the registration of the first coordinate system with a third coordinate system (e.g., an imaging device coordinate system) and the registration of the third coordinate system with the second coordinate system.
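The coordinate-system chaining described above can be illustrated with homogeneous 4x4 matrices, as in the following non-limiting sketch (the `T_b_from_a` naming convention is an assumption adopted for readability):

```python
import numpy as np

def transform(points_a: np.ndarray, T_b_from_a: np.ndarray) -> np.ndarray:
    """Map Nx3 points from coordinate system `a` into coordinate system `b`."""
    homogeneous = np.hstack([points_a, np.ones((len(points_a), 1))])
    return (T_b_from_a @ homogeneous.T).T[:, :3]

# Chaining registrations: if the patient system (first) is registered to the
# imaging-device system (third), and the imaging-device system to the
# reference-frame system (second), the direct first-to-second mapping is the
# matrix product of the two registrations.
def chain(T_second_from_third: np.ndarray,
          T_third_from_first: np.ndarray) -> np.ndarray:
    return T_second_from_third @ T_third_from_first
```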
[0097] The registration 436 enables the processor 404 to correlate one coordinate system with another coordinate system. For example, the registration 436 may enable the processor 404 to correlate or map a first coordinate system (e.g., a patient coordinate system) with a second coordinate system (e.g., an imaging device coordinate system) using localization (e.g., optical localization, electromagnetic localization, etc.). The registration 436 may, for example, comprise an algorithm that receives a set of 3D points of optical and/or electromagnetic markers in the second coordinate system and information about the position of one or more localizers in the first coordinate system and the second coordinate system to generate a correlation or map between the two coordinate systems.
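A standard stand-in for such an algorithm is a least-squares rigid fit (the Kabsch method) over paired 3D marker points, sketched below; the disclosed registration 436 is not necessarily implemented this way:

```python
import numpy as np

def rigid_registration(pts_a: np.ndarray, pts_b: np.ndarray) -> np.ndarray:
    """Least-squares rigid transform (Kabsch) mapping Nx3 `pts_a` onto paired `pts_b`.

    Returns a 4x4 homogeneous matrix T such that T @ [p_a, 1] ~= [p_b, 1].
    """
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)       # centroids
    H = (pts_a - ca).T @ (pts_b - cb)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cb - R @ ca
    return T
```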
[0098] The sensor information 440 may be or comprise a collection of measurements generated by one or more sensors positioned within the surgical environment. For example, the sensor information 440 may be or comprise measurements from force sensors, such as from force and/or torque sensors positioned on the robot 112 to measure forces generated as a result of the robot 112 interacting with the patient 110. As another example, the sensor information 440 may comprise information recorded by a depth sensor that captures a user gesture, which gesture may be used by the computing system 102 to control the robot 112 based on the user gesture. As yet another example, the sensor information 440 may comprise capacitance measurements of various tissues of the patient 110. In some examples, the sensor information 440 may be used as an input into the control algorithm 444, which may provide outputs that can be used to control the imaging device 104, the navigation system 108, the robot 112, and/or the like.
[0099] The control algorithm 444 may be or comprise one or more algorithms that generate outputs that can be used by the processor 404 to control the imaging device 104, the navigation system 108, the robot 112, and/or the like. The control algorithm 444 may receive one or more inputs, such as the sensor information 440, one or more images generated by the imaging device 104, combinations thereof, and/or the like. The control algorithm 444 may then process the inputs and generate one or more outputs. The outputs may comprise signals or other data that, when processed by the processor 404, enable the processor 404 to control the robot 112 in accordance with the outputs. In one example, such as when the robot 112 is being used to drill a hole for a pedicle screw, force sensors positioned on the robot 112 may generate measurements that are input into the control algorithm 444. The control algorithm 444 may then determine an overall force generated by the robot 112 on the patient 110 and compare the overall force to a threshold value. When the overall force meets or exceeds the threshold value, the control algorithm 444 may generate an output that, when processed by the processor 404, enables the processor 404 to change one or more operation parameters of the surgical drill to reduce the overall force generated by the robot 112 on the patient 110.
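The force-threshold logic described above can be illustrated with the following non-limiting sketch; the 20 N limit and the proportional feed-rate reduction are assumptions for illustration, not validated clinical parameters:

```python
import numpy as np

FORCE_LIMIT_N = 20.0   # illustrative threshold, not a validated clinical limit

def force_control_step(force_xyz: np.ndarray, current_feed_mm_s: float) -> dict:
    """One pass of a force-limiting control rule for a robot-held drill.

    Combines the force components into an overall magnitude, compares it to a
    threshold, and, when the threshold is met or exceeded, outputs a reduced
    feed rate for the processor to apply; otherwise no change is requested.
    """
    overall = float(np.linalg.norm(force_xyz))
    if overall >= FORCE_LIMIT_N:
        # Scale the feed rate down in proportion to the overshoot.
        return {"feed_mm_s": current_feed_mm_s * FORCE_LIMIT_N / overall}
    return {}   # no operation-parameter change required
```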
[0100] In one or more examples, the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Alternatively or additionally, functions may be implemented using machine learning models, neural networks, artificial neural networks, or combinations thereof (alone or in combination with instructions). Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
[0101] Fig. 5 depicts a method 500 that may be used, for example, to determine an implant trajectory.
[0102] The method 500 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to the processor(s) 404 of the computing system 102 described above. The at least one processor may be part of a robot (such as a robot 112) or part of a navigation system (such as a navigation system 108). A processor other than any processor described herein may also be used to execute the method 500. The at least one processor may perform the method 500 by executing elements stored in a memory such as the memory 408. The elements stored in memory and executed by the processor may cause the processor to execute one or more steps of the method 500. One or more portions of the method 500 may be performed by the processor executing any of the contents of memory, such as image processing 424, segmentation 428, transformation 432, registration 436, and/or one or more control algorithms 444.
[0103] The method 500 comprises receiving an image of a patient’s anatomy (step 504). The image of the patient anatomy may be captured by the imaging device 104 and sent to the computing system 102. The image may in some cases be displayed to the user via the user device 116, the user interface 416, and/or the like. In some embodiments, the image data captured by the imaging device 104 may be processed by the processor 404 using image processing 424 to, for example, remove object artifacts (e.g., metal artifact noise), improve contrast between soft tissue and bones, and/or the like. In some embodiments, the image of the patient anatomy may be saved in the memory 408 and/or the database 120.
[0104] The method 500 also comprises detecting a depiction of at least one anatomical element and/or at least one medical device in the image (step 508). The processor 404 may use segmentation 428 to identify at least one anatomical element (e.g., a vertebra) as well as the at least one medical device (e.g., an optical marker) that appear in the image. The segmentation 428 may use feature detection algorithms to segment the image into a plurality of segments, where one segment contains the depiction of the at least one anatomical element, and a second, different segment contains a depiction of the at least one medical device. In some cases, the medical device may be positioned proximate the anatomical element.
[0105] The method 500 also comprises determining a pose of the at least one anatomical element (step 512). Based on the depiction of the medical device relative to the anatomical element, the processor 404 may determine the pose of the anatomical element. For example, the pose of the medical device may be known or determined by the processor 404. Based on the pose of the medical device, the processor 404 may determine the pose of the anatomical element relative to the medical device, and then map (e.g., using transformation 432) the coordinates associated with the anatomical element to a known coordinate system (e.g., an imaging device coordinate system) based on a registration between the medical device and that known coordinate system. In some embodiments, the processor 404 may transform coordinates associated with the anatomical element into one or more other coordinate systems (e.g., the medical device coordinate system) and/or into a common coordinate system shared by other objects in the surgical environment. In some cases, the processor 404 may capture multiple images and use image-to-image registration to determine the pose of the anatomical element.
[0106] The method 500 also comprises determining a trajectory based on the determined pose of the at least one anatomical element and a surgical plan (step 516). The processor 404 may use the surgical plan 420 and the determined pose of the anatomical element to determine a trajectory. The trajectory may be associated with, for example, the direction along which a surgical screw is to be implanted. In other examples, the trajectory may be associated with the alignment of the robot 112 (e.g., the robotic arms 114 may hold a tool guide along a certain trajectory for a physician to perform a drilling or cutting procedure). In yet another example, the trajectory may be associated with a direction along which a surgical tool is to move when interacting with the patient 110, such as when a robotic arm 114 holds a surgical burr for resecting anatomical tissue. The processor 404 may determine the trajectory by mapping a planned surgical trajectory in the surgical plan 420 to the current pose of the anatomical element based on the difference in pose between the anatomical element depicted in the surgical plan 420 and the determined pose of the anatomical element.
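As a non-limiting illustration of mapping a planned trajectory to the determined pose, the sketch below applies the pose delta (a 4x4 homogeneous transform, as in the earlier sketch) to an entry point and a unit direction:

```python
import numpy as np

def map_trajectory(entry_planned: np.ndarray,
                   direction_planned: np.ndarray,
                   T_current_from_planned: np.ndarray):
    """Carry a planned trajectory (entry point + unit direction) into the
    anatomical element's current pose.

    `T_current_from_planned` is the 4x4 rigid pose delta between the element
    as it appears in the surgical plan and its pose determined
    intraoperatively; because the transform is rigid, directions rotate only.
    """
    R, t = T_current_from_planned[:3, :3], T_current_from_planned[:3, 3]
    entry_now = R @ entry_planned + t        # points transform with R and t
    direction_now = R @ direction_planned    # directions rotate only
    return entry_now, direction_now
```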
[0107] The method 500 also comprises controlling a robotic arm based on the determined trajectory (step 520). After the trajectory is determined, the processor 404 may cause the robot 112 (or more specifically the robotic arms 114) to move based on the determined trajectory. For example, when a robotic arm 114 holds a surgical tool, the processor 404 may cause the robotic arm 114 to move such that the surgical tool aligns with the trajectory. In cases where there are a plurality of robotic arms, the processor 404 may cause the robotic arm(s) not holding the surgical tool to move out of a working volume to mitigate the likelihood of collision between the surgical tool and the robotic arm(s).
[0108] The method 500 also comprises causing the determined trajectory to be displayed on a user interface (step 524). The determined trajectory may be displayed to the user via the user device 116, the user interface 416, combinations thereof, and/or the like. In some cases, the display may include a visual depiction of the trajectory relative to patient anatomy (e.g., relative to one or more vertebrae). For example, during an implant step where a surgical screw (e.g., a pedicle screw) is implanted into a vertebra, the display may depict the vertebra (e.g., based on one or more images captured by the imaging device 104) as well as the trajectory along which the surgical screw is to be implanted.
[0109] The method 500 also comprises receiving a user input regarding the determined trajectory (step 528). Once the determined trajectory is displayed to the user, the computing system 102 may require that the user approve the trajectory prior to starting the implant step. Once the user accepts or approves the trajectory, the processor 404 may control the robotic arm 114 to begin implanting the surgical screw. In some cases, the user may choose to adjust the determined trajectory. When the user adjusts the trajectory, the processor 404 may perform the steps 520 and 524 again to adjust the robotic arm to conform to the adjusted trajectory, and render the new trajectory to the user interface.

[0110] The present disclosure encompasses embodiments of the method 500 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
[0111] Fig. 6 depicts a method 600 that may be used, for example, to control a robotic arm based on sensor information.

[0112] The method 600 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to the processor(s) 404 of the computing system 102 described above. The at least one processor may be part of a robot (such as a robot 112) or part of a navigation system (such as a navigation system 108). A processor other than any processor described herein may also be used to execute the method 600. The at least one processor may perform the method 600 by executing elements stored in a memory such as the memory 408. The elements stored in memory and executed by the processor may cause the processor to execute one or more steps of the method 600. One or more portions of the method 600 may be performed by the processor executing any of the contents of memory, such as image processing 424, segmentation 428, transformation 432, registration 436, and/or one or more control algorithms 444.
[0113] The method 600 comprises receiving sensor information from one or more sensors positioned within a surgical environment (step 604). The sensor information may be similar to or the same as the sensor information 440, which may comprise information generated by force sensors, capacitance sensors, etc. that are positioned on or near the patient 110, the robot 112 (e.g., the robotic arms 114), one or more surgical tools held by the robot 112, combinations thereof, and/or the like. In some examples, the sensor information may be received from the memory 408 and/or from the database 120.
[0114] In some implementations, the step 604 may optionally follow from the step 520 of the method 500, where the robotic arm was controlled based on the determined trajectory. For example, once the robotic arm has been moved such that a surgical tool is aligned with the determined trajectory, the robotic arm 114 may proceed with drilling into the vertebra of the patient 110 using a surgical tool aligned with the determined trajectory. As the surgical tool drills into the patient 110, force sensors on or near the surgical tool (e.g., on the robotic arm 114, within the surgical tool, etc.) may generate force measurements that are stored as sensor information in the memory 408 and/or the database 120.
[0115] The method 600 also comprises providing an input associated with the sensor information into one or more control algorithms (step 608). The sensor information may be provided by the processor 404 as an input into the one or more control algorithms, which may be similar to or the same as the control algorithm(s) 444. In some embodiments, the sensor information may be input directly into the control algorithm(s), while in other embodiments the processor 404 may perform one or more processing steps on the sensor information, such as normalization of data, conversion of force measurement vectors into a common coordinate system, and/or the like, before inputting the result into the control algorithm. Continuing with the vertebra drilling step example, the force measurements generated by the force sensors on the robot may be provided as input into the control algorithm.
[0116] The method 600 also comprises receiving, from the one or more control algorithms, one or more outputs associated with controlling the robotic arm (step 612). The control algorithm 444 may process the input data and generate one or more outputs. For example, when force measurements are provided to the control algorithm, the control algorithm may determine the overall force being applied to the patient 110 by the surgical drill and then compare the overall force to a threshold value (e.g., a value stored in the database 120 and/or the memory 408).
[0117] The method 600 also comprises controlling the robotic arm in accordance with the one or more outputs (step 616). When the force meets or exceeds the threshold value, the control algorithm may output a control signal that, when processed by the processor 404, causes the processor 404 to change the operation of the robotic arm 114 to reduce the overall force applied to the patient 110. For example, the processor 404 may cause the robotic arm 114 to move such that the operative end of the surgical tool advances more slowly. In some cases, the control signal may cause additional or alternative changes to the robotic arms 114 and/or the tools attached thereto. As an example, the control signal may cause the processor 404 to reduce the rotations per minute (rpm) of the surgical drill bit to reduce the force applied by the surgical drill. In other examples, the control signal may cause a change in one or more other operation parameters of the surgical drill. In some cases, the force measurements may be continuously provided as inputs into the control algorithm 444 to verify that the force generated by the surgical drill does not meet or exceed the threshold value.
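The continuous monitoring described above can be illustrated with the following non-limiting control loop, in which the sensor and actuator interfaces and all numeric parameters are placeholders:

```python
def drilling_control_loop(read_force_n, set_feed_rate, set_rpm,
                          force_limit_n=20.0,
                          nominal_feed=1.0, slow_feed=0.25,
                          nominal_rpm=8000, slow_rpm=4000):
    """Continuously throttle a robot-held drill while the measured force is high.

    `read_force_n`, `set_feed_rate`, and `set_rpm` stand in for the real
    sensor and actuator interfaces; the thresholds and rates are illustrative.
    Returns when `read_force_n` yields None (drilling step complete).
    """
    while True:
        force = read_force_n()
        if force is None:                 # sentinel: step finished
            return
        if force >= force_limit_n:        # over limit: advance slowly, cut rpm
            set_feed_rate(slow_feed)
            set_rpm(slow_rpm)
        else:                             # within limit: nominal parameters
            set_feed_rate(nominal_feed)
            set_rpm(nominal_rpm)
```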
[0118] The present disclosure encompasses embodiments of the method 600 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
[0119] Fig. 7 depicts a method 700 that may be used, for example, to control a robot arm performing a surgical task.
[0120] The method 700 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to the processor(s) 404 of the computing system 102 described above. The at least one processor may be part of a robot (such as a robot 112) or part of a navigation system (such as a navigation system 108). A processor other than any processor described herein may also be used to execute the method 700. The at least one processor may perform the method 700 by executing elements stored in a memory such as the memory 408. The elements stored in memory and executed by the processor may cause the processor to execute one or more steps of the method 700. One or more portions of the method 700 may be performed by the processor executing any of the contents of memory, such as image processing 424, segmentation 428, transformation 432, registration 436, and/or one or more control algorithms 444.
[0121] The method 700 comprises initiating a robot to perform a first step in a surgical task (step 704). The robot (e.g., the robot 112) may perform various surgical tasks, such as resecting anatomical tissues, positioning one or more components of the imaging device 104, inserting one or more pedicle screws, combinations thereof, and/or the like. In some cases, the initiation may be based on a user input (e.g., the user inputs a command into the user device 116 for the robot 112 to begin the surgical task), or may automatically occur (e.g., based on the processor 404 controlling the robot 112 based on the surgical plan 420). In some cases, the surgical robotic system 100 may require a user (e.g., a clinician) to verify the steps in the surgical task before the robot begins performance of the first step of the surgical task. For instance, the surgical task may include resecting anatomical tissue with a surgical burr, and the user may be required to verify the pose of the surgical burr in a first step before a second step of resection begins.
[0122] The method 700 also comprises receiving information associated with the robot performing the surgical task, the information comprising sensor information from a plurality of sensors (step 708). As the robot performs the surgical task, information associated with the performance may be captured, measured, or otherwise determined. For example, force sensors positioned on or near the robot may record force information as the surgical burr connected to the robot resects anatomical tissue. In other cases, the information may be based on a user input, such as when the user provides an input to adjust or change one or more operating features of the robot as the robot performs the surgical task. For example, the user may monitor the robot as the surgical burr resects anatomical tissue, and may wish to change the path along which the surgical burr moves. The user may then provide input (e.g., via a display, via a joystick, etc.) to change the path along which the surgical tool moves. In another example, the user may determine that the surgical burr is no longer performing the resection in accordance with the surgical plan, and may wish to return the robot to a previous, known pose. The user may then provide an input to cause the robot to move from the robot’s current pose to the previous, known pose.
[0123] The method 700 also comprises determining, based on the first step of the surgical task, a subset of the sensor information that is relevant to a second step of the surgical task (step 712). The control algorithm may receive the sensor information and select a subset of the sensor information that is relevant to a second step of the surgical task. For example, the surgical task may comprise drilling a hole in a vertebra to insert a surgical screw, and the sensor information may comprise force information from sensors positioned near the operative end of the surgical tool performing the drilling as well as information from the imaging device 104 depicting optical markers disposed on a separate, different robotic arm that will be used to insert the surgical screw. In this case, the control algorithm may determine from the surgical plan that the second step involves drilling through the cortical layer of the vertebra, that the pose information of the optical markers on the separate robotic arm is not relevant to drilling through the cortical layer, and that the force information is relevant to drilling through the cortical layer. In other examples, such as after the hole is drilled, the control algorithm may determine that the force information is not relevant to implanting the surgical screw, and that the optical marker information related to the separate robotic arm is relevant.
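The step-dependent selection of sensor information can be illustrated with the following non-limiting sketch, which hard-codes the drill-then-implant example above; the channel and step names are assumptions, and a real system would derive the mapping from the surgical plan 420 rather than hard-code it:

```python
# Illustrative mapping from surgical-plan step to the sensor channels
# relevant to it, following the drill-then-implant example above.
RELEVANT_CHANNELS = {
    "drill_cortical_layer": {"tool_force"},
    "insert_pedicle_screw": {"arm2_optical_markers"},
}

def select_relevant(sensor_info: dict, step: str) -> dict:
    """Keep only the sensor channels relevant to the upcoming step."""
    wanted = RELEVANT_CHANNELS.get(step, set())
    return {name: data for name, data in sensor_info.items() if name in wanted}

# Example: force readings are forwarded to the drilling step's control
# algorithm, while the second arm's marker poses are filtered out.
subset = select_relevant(
    {"tool_force": [12.1, 13.4], "arm2_optical_markers": ["pose_a", "pose_b"]},
    step="drill_cortical_layer",
)   # -> {"tool_force": [12.1, 13.4]}
```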
[0124] The method 700 also comprises analyzing the subset of the sensor information (step 716). The analyzing may include the control algorithm receiving the subset of the sensor information as an input to generate one or more outputs (e.g., control signal(s)). In some cases, the control algorithm may use one or more elements in the memory 408 to perform the analyzing.
[0125] The method 700 also comprises generating, based on the analysis, a control signal (step 720). As mentioned in the step 716, one or more control algorithms (e.g., control algorithm(s) 444) may receive the subset of sensor information from the step 712 as input(s) and generate the control signal based on the input information. For example, when the input information is associated with a user changing the path along which the surgical burr moves, the control algorithm may generate a control signal that causes a set of movements in the robot to adjust the path. As another example, when the input information is associated with a user instructing the robot to return to a previous, known pose, the control algorithm may generate a control signal that causes the robot to return to the previous, known pose.

[0126] In some cases, the control signal may cause the robot to start, stop, pause, or continue with the surgical task. For example, the control signal may cause the robot to stop drilling when the input information indicates that the robot is within a threshold distance of a target surgical site (e.g., within a threshold distance from a vertebra) and/or within a threshold distance of a sensitive area (e.g., within a threshold distance of the spinal cord of the patient). In some cases, the control signal may cause the robot to pause performance of the surgical task until the user causes the robot to resume performance of the surgical task, such as by providing an input into a user device or interface.
[0127] The method 700 also comprises automatically providing the control signal to the robot to perform the second step of the surgical task (step 724). In some cases, the control signal may cause a change to the path, speed, trajectory, position, combinations thereof, etc. of the robot. For instance, the control signal may cause the pose of the robot to change such as when the robot is used to insert pedicle screws into the vertebra, and the change in pose enables the pedicle screws to be more easily inserted into the patient. As another example, the robot may include a surgical drill used for drilling holes for pedicle screws, and the movement speed of the robot along a path may be decreased to reduce the overall force applied by the robot on the patient.
[0128] In some examples, feedback may be provided based on the control signal. The feedback may be haptic feedback, visual feedback, audio feedback, combinations thereof, and/or the like. The feedback may be provided after the robot has received the control signal and begins performing the second step of the surgical task and/or after the control signal is generated but before the robot is controlled. In some cases, the feedback may be associated with information about the control signal. For example, the feedback may comprise a visual indicator rendered to a display showing a status of the robot, the current path of the robot, and/or the new path the robot will follow once the control signal has been implemented. As another example, the user may use a joystick or other input device to manually control the robot, and the feedback may be provided to the user haptically (e.g., motors in the joystick vibrate) to indicate that an operative end of a surgical drill connected to the robot is within a threshold distance of the spinal cord.
[0129] The present disclosure encompasses embodiments of the method 700 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.

[0130] As noted above, the present disclosure encompasses methods with fewer than all of the steps identified in Figs. 5, 6, and 7 (and the corresponding description of the methods 500, 600, and 700), as well as methods that include additional steps beyond those identified in Figs. 5, 6, and 7 (and the corresponding description of the methods 500, 600, and 700). The present disclosure also encompasses methods that comprise one or more steps from one method described herein, and one or more steps from another method described herein. Any correlation described herein may be or comprise a registration or any other correlation.
[0131] The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing Detailed Description, for example, various features of the disclosure are grouped together in one or more aspects, embodiments, and/or configurations for the purpose of streamlining the disclosure. The features of the aspects, embodiments, and/or configurations of the disclosure may be combined in alternate aspects, embodiments, and/or configurations other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, embodiment, and/or configuration. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
[0132] Moreover, though the foregoing has included description of one or more aspects, embodiments, and/or configurations and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative aspects, embodiments, and/or configurations to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.
[0133] The following examples provide various embodiments disclosed herein.
[0134] Example 1: A system comprising: a processor (404); and a memory (408) storing data thereon that, when executed by the processor (404), enable the processor (404) to: initiate a robot (112) to perform a first step of a surgical task; receive information associated with the robot (112) performing the surgical task, the information comprising sensor information (440) from a plurality of sensors; determine, based on the first step of the surgical task, a subset of the sensor information (440) that is relevant to a second step of the surgical task; analyze the subset of the sensor information (440); generate, based on the analysis, a control signal; and automatically provide the control signal to the robot (112) to perform the second step of the surgical task.
[0135] Example 2: The system of Example 1, wherein the data, when processed by the processor (404), further enable the processor (404) to: receive at least one of force information, capacitance information, and one or more images of a patient (110).
[0136] Example 3: The system of any one of Examples 1-2, wherein the data, when processed by the processor (404), further enable the processor (404) to: receive a user input; and adjust, based on the user input, the control signal.
[0137] Example 4: The system of any one of Examples 1-3, wherein the data, when processed by the processor (404), further enable the processor (404) to: provide, to a display, a prompt to a user to approve the control signal.
[0138] Example 5: The system of Example 3, wherein the user input comprises at least one of a keyboard entry, a gesture, a voice input, and an indication from a pointer device.
[0139] Example 6: The system of any one of Examples 1-5, wherein the control signal causes the robot (112) to pause performance of the surgical task.
[0140] Example 7: The system of Example 6, wherein the data, when processed by the processor (404), further enable the processor (404) to: receive a user input; and cause, based on the user input, the robot (112) to resume the performance of the surgical task.
[0141] Example 8: The system of any one of Examples 1-7, wherein the control signal causes the robot (112) to move from a current pose to a previous, known pose.
[0142] Example 9: The system of any one of Examples 1-8, wherein the control signal causes a change to at least one of a drill speed of a surgical tool connected to the robot (112), a drill direction of the surgical tool, a trajectory of the surgical tool, a position of the robot (112), and a path along which the surgical tool operates.
[0143] Example 10: The system of any one of Examples 1-8, wherein the control signal causes a change to at least one of a removal speed of a surgical tool connected to the robot (112), a direction of the surgical tool, a trajectory of the surgical tool, a position of the robot (112), and a path along which the surgical tool operates.

[0144] Example 11: A system comprising: a robot (112); a processor (404); and a memory (408) storing data thereon that, when executed by the processor (404), enable the processor (404) to: receive information associated with the robot (112) performing a first step of a surgical task, the information comprising sensor information (440) from a plurality of sensors; determine, based on the first step of the surgical task, a subset of the sensor information (440) that is relevant to a second step of the surgical task; analyze the subset of the sensor information (440); generate, based on the analysis, an output; and automatically provide the output to the robot (112) to perform the second step of the surgical task.
[0145] Example 12: The system of Example 11, wherein the data, when processed by the processor (404), further enable the processor (404) to: receive at least one of force information, capacitance information, and one or more images of a patient (110).
[0146] Example 13: The system of any one of Examples 11-12, wherein the data, when processed by the processor (404), further enable the processor (404) to: receive a user input; and adjust, based on the user input, the output.
[0147] Example 14: The system of any one of Examples 11-13, wherein the data, when processed by the processor (404), further enable the processor (404) to: provide, to a display, a prompt to a user to approve the output.
[0148] Example 15: The system of Example 13, wherein the user input comprises at least one of a keyboard entry, a gesture, a voice input, and an indication from a pointer device.
[0149] Example 16: The system of any one of Examples 11-15, wherein the output causes the robot (112) to pause performance of the surgical task.
[0150] Example 17: The system of Example 16, wherein the data, when processed by the processor (404), further enable the processor (404) to: receive a user input; and cause, based on the user input, the robot (112) to resume the performance of the surgical task.
[0151] Example 18: The system of any one of Examples 11-17, wherein the output causes the robot (112) to move from a current pose to a previous, known pose.
[0152] Example 19: The system of any one of Examples 11-18, wherein the output causes a change to at least one of a drill speed of a surgical tool connected to the robot (112), a drill direction of the surgical tool, a trajectory of the surgical tool, a position of the robot (112), and a path along which the surgical tool operates.

[0153] Example 20: The system of any one of Examples 11-18, wherein the output causes a change to at least one of a removal speed of a surgical tool connected to the robot (112), a direction of the surgical tool, a trajectory of the surgical tool, a position of the robot (112), and a path along which the surgical tool operates.
[0154] Example 21: A method comprising: initiating a robot (112) to perform a first step in a surgical task; receiving information associated with the robot (112) performing the surgical task, the information comprising sensor information (440) from a plurality of sensors; determining, based on the first step of the surgical task, a subset of the sensor information that is relevant to a second step of the surgical task; analyzing the subset of the sensor information (440); generating, based on the analysis, one or more outputs; and automatically providing the one or more outputs to the robot (112) to perform the second step of the surgical task.
[0155] Example 22: The method of Example 21, wherein controlling the robot (112) comprises causing the robot (112) to pause performance of the surgical task, and wherein the method further comprises: receiving a user input; and adjusting, based on the user input, the one or more outputs.

Claims

CLAIMS

What is claimed is:
1. A system comprising: a processor (404); and a memory (408) storing data thereon that, when executed by the processor (404), enable the processor (404) to: initiate a robot (112) to perform a first step of a surgical task; receive information associated with the robot (112) performing the surgical task, the information comprising sensor information (440) from a plurality of sensors; determine, based on the first step of the surgical task, a subset of the sensor information (440) that is relevant to a second step of the surgical task; analyze the subset of the sensor information (440); generate, based on the analysis, a control signal; and automatically provide the control signal to the robot (112) to perform the second step of the surgical task.
2. The system of claim 1, wherein the data, when processed by the processor (404), further enable the processor (404) to receive at least one of force information, capacitance information, and one or more images of a patient (110).
3. The system of any one of claims 1-2, wherein the data, when processed by the processor (404), further enable the processor (404) to: receive a user input; and adjust, based on the user input, the control signal.
4. The system of any one of claims 1-3, wherein the data, when processed by the processor (404), further enable the processor (404) to provide, to a display, a prompt to a user to approve the control signal.
5. The system of claim 3, wherein the user input comprises at least one of: a keyboard entry, a gesture, a voice input, and an indication from a pointer device.
6. The system of any one of claims 1-5, wherein the control signal causes the robot (112) to pause performance of the surgical task.
7. The system of claim 6, wherein the data, when processed by the processor (404), further enable the processor (404) to: receive a user input; and cause, based on the user input, the robot (112) to resume the performance of the surgical task.
8. The system of any one of claims 1-7, wherein the control signal causes the robot (112) to move from a current pose to a previous, known pose.
9. The system of any one of claims 1-8, wherein the control signal causes a change to at least one of a drill speed of a surgical tool connected to the robot (112), a drill direction of the surgical tool, a trajectory of the surgical tool, a position of the robot (112), and a path along which the surgical tool operates.
10. The system of any one of claims 1-8, wherein the control signal causes a change to at least one of a removal speed of a surgical tool connected to the robot (112), a direction of the surgical tool, a trajectory of the surgical tool, a position of the robot (112), and a path along which the surgical tool operates.
11. A system comprising: a robot (112); a processor (404); and a memory (408) storing data thereon that, when executed by the processor (404), enable the processor (404) to: receive information associated with the robot (112) performing a first step of a surgical task, the information comprising sensor information (440) from a plurality of sensors; determine, based on the first step of the surgical task, a subset of the sensor information (440) that is relevant to a second step of the surgical task; analyze the subset of the sensor information (440); generate, based on the analysis, an output; and automatically provide the output to the robot (112) to perform the second step of the surgical task.
12. The system of claim 11, wherein the data, when processed by the processor (404), further enable the processor (404) to: receive a user input; and adjust, based on the user input, the output.
13. The system of claim 12, wherein the user input comprises at least one of a keyboard entry, a gesture, a voice input, and an indication from a pointer device, and wherein the data, when processed by the processor (404), further enable the processor (404) to: provide, to a display, a prompt to a user to approve the output.
14. A method comprising:
    initiating a robot (112) to perform a first step of a surgical task;
    receiving information associated with the robot (112) performing the surgical task, the information comprising sensor information (440) from a plurality of sensors;
    determining, based on the first step of the surgical task, a subset of the sensor information (440) that is relevant to a second step of the surgical task;
    analyzing the subset of the sensor information (440);
    generating, based on the analysis, one or more outputs; and
    automatically providing the one or more outputs to the robot (112) to perform the second step of the surgical task.
15. The method of claim 14, wherein providing the one or more outputs to the robot (112) comprises causing the robot (112) to pause performance of the surgical task, and wherein the method further comprises: receiving a user input; and adjusting, based on the user input, the one or more outputs.
PCT/IL2025/050580 2024-07-10 2025-07-07 Surgical robotic systems and methods for using the same Pending WO2026013668A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463669440P 2024-07-10 2024-07-10
US63/669,440 2024-07-10

Publications (1)

Publication Number Publication Date
WO2026013668A1 true WO2026013668A1 (en) 2026-01-15

Family

ID=96659694

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2025/050580 Pending WO2026013668A1 (en) 2024-07-10 2025-07-07 Surgical robotic systems and methods for using the same

Country Status (1)

Country Link
WO (1) WO2026013668A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210346109A1 (en) * 2018-10-03 2021-11-11 Cmr Surgical Limited Monitoring performance during manipulation of user input control device of robotic system
US20230240749A1 (en) * 2022-02-01 2023-08-03 Mazor Robotics Ltd. Systems and methods for controlling surgical tools based on bone density estimation
US20230401649A1 (en) * 2019-02-21 2023-12-14 Theator inc. System for updating a predictable outcome

Similar Documents

Publication Publication Date Title
EP4054468B1 (en) Robotic positioning of a device
JP7399982B2 (en) 3D visualization during surgery
JP2022516642A (en) Systems and methods for alignment of coordinate system and navigation
US20220395342A1 (en) Multi-arm robotic systems and methods for monitoring a target or performing a surgical procedure
US12295797B2 (en) Systems, methods, and devices for providing an augmented display
US20250152262A1 (en) Path planning based on work volume mapping
EP4026511B1 (en) Systems and methods for single image registration update
US12383371B2 (en) Systems, devices, and methods for robotic placement of electrodes for anatomy imaging
US20240382265A1 (en) Hybrid localization for minimally invasive surgery and cervical spinal referencing, and methods for using the same
US12249099B2 (en) Systems, methods, and devices for reconstructing a three-dimensional representation
WO2026013668A1 (en) Surgical robotic systems and methods for using the same
CN118647331A (en) System and apparatus for generating hybrid images
WO2026013669A1 (en) Tissue removal surgical robotic system
WO2026013670A1 (en) Surgical robotic and spatial measurement system
WO2026013671A1 (en) Surgical robotic system to safely remove and reposition surgical elements when an error is detected
US20240358461A1 (en) Multi-arm robotic systems and methods for monitoring a target or performing a surgical procedure
US11847809B2 (en) Systems, devices, and methods for identifying and locating a region of interest
US20240156531A1 (en) Method for creating a surgical plan based on an ultrasound view
WO2024229649A1 (en) Non-invasive patient tracker for surgical procedure
US20240398362A1 (en) Ultra-wide 2d scout images for field of view preview
US20240382169A1 (en) Long image multi-field of view preview
WO2024236440A1 (en) Hybrid localization for minimally invasive surgery and cervical spinal referencing, and methods for using the same
WO2024246897A1 (en) Systems and methods for long scan adjustment and anatomy tracking
WO2025120637A1 (en) Systems and methods for planning and updating trajectories for imaging devices
WO2025229561A1 (en) Systems and methods for detecting and visualizing implanted devices in image volumes