AU2024253887A1 - Methods and systems of registering a three-dimensional bone model
Methods and systems of registering a three-dimensional bone model
- Publication number
- AU2024253887A1
- Authority
- AU
- Australia
- Prior art keywords
- bone
- video images
- surgical
- surgical controller
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B34/20—Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B90/96—Identification means for patients or instruments (e.g. tags) coded with symbols, e.g. text, using barcodes
- G06T7/337—Determination of transform parameters for the alignment of images (image registration) using feature-based methods involving reference images or patches
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
- A61B2034/108—Computer-aided selection or customisation of medical implants or cutting guides
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
- A61B2034/2068—Surgical navigation using pointers, e.g. pointers having reference marks for determining coordinates of body points
- A61B2090/304—Devices for illuminating a surgical field using chemi-luminescent materials
- A61B2090/3916—Markers specially adapted for marking bone tissue
- A61B2090/3933—Liquid markers
- A61B5/1077—Measuring of profiles
- A61B5/1079—Measuring physical dimensions using optical or photographic means
- A61B5/1113—Local tracking of patients, e.g. in a hospital or private home
- A61B5/1114—Tracking parts of the body
- A61B5/1126—Measuring movement of the body using a particular sensing technique
- A61B5/1127—Measuring movement of the body using markers
- A61B5/6878—Sensors specially adapted to be attached to or implanted in bone
- A61B90/30—Devices for illuminating a surgical field
- A61B90/361—Image-producing devices, e.g. surgical cameras
- G06T2207/10068—Endoscopic image
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/20081—Training; learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30008—Bone
- G06T2207/30204—Marker
Abstract
Registering a three-dimensional bone model. Various examples are directed to methods and related systems of identifying surface features of a rigid structure visible in a video stream, and using the surface features to register a three-dimensional model for use in computer-assisted navigation of a surgical procedure. In some examples, the surface features are determined using touchless techniques based on a known or calculated motion of the camera. In other examples, the surface features are gathered using a touch probe that is not itself directly tracked. In still further examples, the three-dimensional model may be registered by use of a patient-specific instrument that couples to the rigid structure in only one orientation.
Description
METHODS AND SYSTEMS OF REGISTERING A THREE-DIMENSIONAL BONE MODEL
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application No. 63/457,908, filed April 7, 2023, titled “METHODS AND SYSTEMS OF REGISTERING A THREE-DIMENSIONAL BONE MODEL.” The provisional application is incorporated by reference herein as if reproduced in full below.
BACKGROUND
[0002] Arthroscopic surgical procedures are minimally invasive surgical procedures in which access to the surgical site within the body is by way of small keyholes or ports through the patient’s skin. The various tissues within the surgical site are visualized by way of an arthroscope placed through a port, and the internal scene is shown on an external display device. The tissue may be repaired or replaced through the same or additional ports. In computer-assisted surgical procedures (e.g., replacement of the anterior cruciate ligament (ACL), reduction of femoroacetabular impingement), the location of various objects within the surgical site may be tracked relative to the bone by way of images captured by an arthroscope and a three-dimensional model of the bone.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] For a detailed description of example embodiments, reference will now be made to the accompanying drawings in which:
[0004] Figure 1 shows an anterior or front elevation view of a right knee, with the patella removed;
[0005] Figure 2 shows a posterior or back elevation view of the right knee;
[0006] Figure 3 shows a view of the femur from below and looking into the intercondylar notch;
[0007] Figure 4 shows a surgical system in accordance with at least some embodiments;
[0008] Figure 5 shows a conceptual drawing of a surgical site with various objects within the surgical site tracked, in accordance with at least some embodiments;
[0009] Figure 6 is an example video display showing portions of a femur and having
visible therein a bone fiducial, in accordance with at least some embodiments;
[0010] Figure 7 shows a method in accordance with at least some embodiments;
[0011] Figure 8 is an example video display showing portions of a femur and a bone fiducial during a registration procedure, in accordance with at least some embodiments;
[0012] Figure 9 shows a method in accordance with at least some embodiments;
[0013] Figure 10 shows a method in accordance with at least some embodiments;
[0014] Figure 11 shows an anterior or front elevation view of a right knee, with the patella removed, and a patient-specific instrument installed, in accordance with at least some embodiments; and
[0015] Figure 12 shows a computer system in accordance with at least some embodiments.
DEFINITIONS
[0016] Various terms are used to refer to particular system components. Different companies may refer to a component by different names - this document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to ….” Also, the term “couple” or “couples” is intended to mean either an indirect or direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections.
[0017] An endoscope having “a single optical path” through an endoscope shall mean that the endoscope is not a stereoscopic endoscope having two distinct optical paths separated by an interocular distance at the light collecting end of the endoscope. The fact that an endoscope has two or more optical members (e.g., glass rods, optical fibers) forming a single optical path shall not obviate the status as a single optical path.
DETAILED DESCRIPTION
[0018] The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, one skilled in the art will understand that the
following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.
[0019] Various examples are directed to methods and systems of registering a three-dimensional model of a rigid structure, such as bone. More particularly, various examples are directed to methods and related systems of identifying surface features of a rigid structure visible in a video stream, and using the surface features to register a three-dimensional model for use in computer-assisted navigation of a surgical procedure. In some examples, the surface features are determined using touchless techniques based on a known or calculated motion of the camera. In other examples, the surface features are gathered using a touch probe that is not itself directly tracked; rather, the pose of the touch probe, and thus the locations of the distal tip of the touch probe touching the bone, may be determined by segmenting the frames of the video stream and pose estimation. In still further examples, the three-dimensional model may be registered by use of a patient-specific instrument that couples to the rigid structure in only one orientation; thus, a fiducial coupled to the patient-specific instrument, or in some cases the patient-specific instrument itself without a fiducial, may be used to register the three-dimensional bone model.
[0020] The various examples were developed in the context of anterior cruciate ligament (ACL) repair, and thus the discussion below is based on that developmental context. In this context, the rigid structure is bone, and the three-dimensional model is a three-dimensional bone model. However, the techniques are applicable to any suitable rigid anatomical structure, such as teeth. Moreover, the various techniques may be applicable to many types of surgical procedures, such as repairs associated with the knee, the hip, the shoulder, the wrist, or the ankle. The techniques may be applicable not only to ligament repair (e.g., medial collateral ligament repair, lateral collateral ligament repair, and posterior cruciate ligament repair), but also to planning and placing anchors to reattach soft tissue (e.g., reattaching the labrum of the hip, the rotator cuff, or the meniscal root), and to surgical procedures to address femoroacetabular impingement. Thus, the description and developmental context shall not be read as a limitation of the applicability of the teachings. In order to orient the reader, the specification first turns to a description of the knee.
[0021] Figure 1 shows an anterior or front elevation view of a right knee, with the patella removed. In particular, visible in Figure 1 is the lower portion of the femur 100, including the outer or lateral condyle 102 and the inner or medial condyle 104. The femur 100 and condyles 102 and 104 are in operational relationship to a tibia 106, including the tibial tuberosity 108 and Gerdy’s tubercle 110. Disposed between the femoral condyles 102 and 104 and the tibia 106 are the lateral meniscus 112 and the medial meniscus 114. Several ligaments are also visible in the view of Figure 1, such as the ACL 116 extending from the lateral side of the femoral notch to the medial side of the tibia 106. Oppositely, the posterior cruciate ligament 118 extends from the medial side of the femoral notch to the tibia 106. Also visible is the fibula 120, and several additional ligaments that are not specifically numbered.
[0022] Figure 2 shows a posterior or back elevation view of the right knee. In particular, visible in Figure 2 is the lower portion of the femur 100, including the lateral condyle 102 and the medial condyle 104. The femur 100 and femoral condyles 102 and 104 again are in operational relationship to the tibia 106, and disposed between the femoral condyles 102 and 104 and the tibia 106 are the lateral meniscus 112 and the medial meniscus 114. Figure 2 further shows the ACL 116 extending from the lateral side of the femoral notch to the medial side of the tibia 106, though the attachment point to the tibia 106 is not visible. The posterior cruciate ligament 118 extends from the medial side of the femoral notch to the tibia 106, though the attachment point to the femur 100 is not visible. Again, several additional ligaments are shown that are not specifically numbered.
[0023] The most frequent ACL injury is a complete tear of the ligament. Treatment involves reconstruction of the ACL by placement of a substitute graft (e.g., an autograft from the patellar tendon, the quadriceps tendon, or the hamstring tendons). The graft is placed into tunnels prepared within the femur 100 and the tibia 106. The current standard of care for ACL repair is to locate the tunnels such that the tunnel entry point for the graft is at the anatomical attachment location of the native ACL. Such tunnel placement at the attachment location of the native ACL attempts to recreate original knee kinematics. In arthroscopic surgery, the location of the tunnel through the tibia 106 is relatively easy to reach, particularly when the knee is bent or in flexion. However, the tunnel through the
femur 100 resides within the intercondylar notch. Depending upon the physical size of the patient and the surgeon’s selected location of the port through the skin, it may be difficult to reach the attachment location of the native ACL to the femur 100.
[0024] Figure 3 shows a view of the femur from below, looking into the intercondylar notch. In particular, visible in Figure 3 are the lateral condyle 102 and the medial condyle 104. Defined between the femoral condyles 102 and 104 is the femoral notch 200. The femoral tunnel may define an inside aperture 202 within the femoral notch 200, the inside aperture 202 closer to the lateral condyle 102 and displaced into the posterior portion of the femoral notch 200. The femoral tunnel extends through the femur 100 and forms an outside aperture on the outside or lateral surface of the femur 100 (the outside aperture not visible in Figure 3). Figure 3 shows an example drill wire 204 that may be used to create an initial tunnel or pilot hole. Once the surgeon verifies that the pilot hole is closely aligned with a planned-tunnel path, the femoral tunnel is created by boring or reaming with another instrument (e.g., a reamer) that may use the drill wire 204 as a guide. In some cases, a socket or counterbore is created on the intercondylar-notch side to accommodate the width of the graft that extends into the bone, and that counterbore may also be created using another instrument (e.g., a reamer) that may use the drill wire 204 as a guide.
[0025] Figure 4 shows a surgical system (not to scale) in accordance with at least some embodiments. In particular, the example surgical system 400 comprises a tower or device cart 402, an example mechanical resection instrument 404, an example plasma-based ablation instrument (hereafter just ablation instrument 406), and an endoscope in the example form of an arthroscope 408 and attached camera head or camera 410. In the example systems, the arthroscope 408 is a rigid device, unlike endoscopes for other procedures, such as upper endoscopies. The device cart 402 may comprise a display device 414, a resection controller 416, and a camera control unit (CCU) together with an endoscopic light source and video controller 418. In example cases the combined CCU and video controller 418 not only provides light to the arthroscope 408 and displays images received from the camera 410, but also implements various additional aspects, such as registering a three-dimensional bone model with the bone visible in the video images, and providing computer-assisted navigation during the surgery. Thus, the
combined CCU and video controller are hereafter referred to as surgical controller 418. In other cases, however, the CCU and video controller may be a separate and distinct system from the controller that handles registration and computer-assisted navigation, yet the separate devices would nevertheless be operationally coupled.
[0026] The example device cart 402 further includes a pump controller 422 (e.g., single or dual peristaltic pump). Fluidic connections of the mechanical resection instrument 404 and ablation instrument 406 to the pump controller 422 are not shown so as not to unduly complicate the figure. Similarly, fluidic connections between the pump controller 422 and the patient are not shown so as not to unduly complicate the figure. In the example system, both the mechanical resection instrument 404 and the ablation instrument 406 are coupled to the resection controller 416, the resection controller 416 being a dual-function controller. In other cases, however, there may be a mechanical resection controller separate and distinct from an ablation controller. The example devices and controllers associated with the device cart 402 are merely examples, and other examples include vacuum pumps, patient-positioning systems, robotic arms holding various instruments, ultrasonic cutting devices and related controllers, patient-positioning controllers, and robotic surgical systems.
[0027] Figure 4 further shows additional instruments that may be present during an arthroscopic surgical procedure. In particular, Figure 4 shows an example touch probe 424, a drill guide or aimer 426, and a bone fiducial 428. The touch probe 424 may be used during the surgical procedure to provide information to the surgical controller 418, such as information to register a three-dimensional bone model to an underlying bone visible in images captured by the arthroscope 408 and camera head 410. The aimer 426 may be used as a guide for placement and drilling with a drill wire to create an initial or pilot tunnel through the bone. The bone fiducial 428 may be affixed or rigidly attached to the bone and serve as an anchor location for the surgical controller 418 to know the orientation of the bone (e.g., after registration of a three-dimensional bone model). Additional tools and instruments will be present, such as the drill wire, various reamers for creating the throughbore and counterbore aspects of a tunnel through the bone, and various tools, such as for suturing and anchoring a graft. These additional tools and instruments are not shown so as not to further complicate the figure. The specification
now turns to a workflow for an example anterior cruciate ligament repair.
[0028] A surgical procedure may begin with a planning phase. The example anterior cruciate ligament repair may start with imaging (e.g., X-ray imaging, computed tomography (CT), magnetic resonance imaging (MRI)) of the knee of the patient, including the relevant anatomy such as the lower portion of the femur, the upper portion of the tibia, and the articular cartilage. The imaging may be preoperative imaging, hours or days before the intraoperative repair, or the imaging may take place within the surgical setting just prior to the intraoperative repair. The discussion that follows assumes MRI imaging, but again many different types of imaging may be used. The image slices from the MRI imaging can be segmented such that a volumetric model or three-dimensional model of the anatomy is created. Any suitable currently available or later-developed segmentation technology may be used to create the three-dimensional model. More specifically to the example of anterior cruciate ligament repair, a three-dimensional bone model of the lower portion of the femur, including the femoral condyles, is created.
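Purely for illustration, and not as part of the described method, one conventional way to turn a segmented image volume into such a three-dimensional bone model is a marching-cubes surface extraction. The sketch below assumes scikit-image and a per-voxel bone mask; the threshold, voxel spacing, and function names are assumptions of the sketch.

```python
# Illustrative sketch only: extracting a triangulated bone surface from a
# segmented MRI/CT volume. Threshold and voxel spacing are assumed values.
import numpy as np
from skimage import measure  # scikit-image

def volume_to_bone_mesh(volume, spacing=(1.0, 1.0, 1.0), level=0.5):
    """volume: 3D array (e.g., a per-voxel bone mask from segmentation).
    spacing: physical voxel size in mm, taken from the image headers.
    Returns mesh vertices (N, 3) in mm and triangle indices (M, 3)."""
    verts, faces, normals, _ = measure.marching_cubes(
        volume, level=level, spacing=spacing)
    return verts, faces

if __name__ == "__main__":
    # Synthetic stand-in for a segmented bone: a solid ellipsoid in a 64^3 volume.
    z, y, x = np.mgrid[:64, :64, :64]
    mask = (((x - 32) / 20.0) ** 2 + ((y - 32) / 12.0) ** 2
            + ((z - 32) / 28.0) ** 2) < 1.0
    verts, faces = volume_to_bone_mesh(mask.astype(float), spacing=(0.5, 0.5, 0.5))
    print(f"bone model: {len(verts)} vertices, {len(faces)} triangles")
```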
[0029] Using the three-dimensional bone model, an operative plan is created that comprises choosing a planned-tunnel path through the femur, including the locations of the apertures in the bone that define the ends of the tunnel. For an example inside-out repair, the aperture within the femoral notch is the entry location for the drilling, and the aperture on the lateral surface of the femur is the exit location. For an outside-in repair, the entry and exit locations for drilling are swapped. Still assuming an inside-out repair, the entry location may be selected to be the same as, or close to, the attachment location of the native anterior cruciate ligament to the femur within the femoral notch. In some cases, selecting the entry location within the femoral notch may involve use of a Bernard & Hertel Quadrant or grid placed on a fluoroscopic image, or placing the Bernard & Hertel Quadrant on a simulated fluoroscopic image created from the three-dimensional bone model. Based on use of the Bernard & Hertel Quadrant, an entry location for the tunnel is selected. For an inside-out repair, selection of the exit location is less restrictive, not only because the portion of the tunnel proximate to the exit location is used for placement of the anchor for the graft, but also because the exit location is approximately centered in the femur (considered anteriorly to posteriorly), and thus issues of bone wall thickness at the exit location are of less concern. In some cases, a three-dimensional bone model
of the proximal end of the tibia is also created, and the surgeon may likewise choose planned-tunnel path(s) through the tibia.
[0030] The results of the planning may include: a three-dimensional bone model of the distal end of the femur; a three-dimensional bone model of the proximal end of the tibia; an entry location and exit location through the femur and thus a planned-tunnel path for the femur; and an entry location and exit location through the tibia and thus a planned-tunnel path through the tibia. Other surgical parameters may also be selected during the planning, such as tunnel throughbore diameters, tunnel counterbore diameters and depths, desired post-repair flexion, and the like, but those additional surgical parameters are omitted so as not to unduly complicate the specification.
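For illustration only, the planning results enumerated above might be bundled into a simple container for hand-off to the surgical controller. The field names, types, and units below are assumptions of this sketch, not terminology from the example method.

```python
# Hedged sketch: one possible container for the planning results described
# above. All names and units are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class TunnelPlan:
    entry_point_mm: np.ndarray    # (3,) aperture location in bone-model coordinates
    exit_point_mm: np.ndarray     # (3,) aperture location in bone-model coordinates
    throughbore_diameter_mm: float
    counterbore_diameter_mm: float
    counterbore_depth_mm: float

@dataclass
class OperativePlan:
    femur_model_points: np.ndarray   # (N, 3) distal-femur model (e.g., point cloud)
    tibia_model_points: np.ndarray   # (M, 3) proximal-tibia model
    femoral_tunnel: TunnelPlan
    tibial_tunnel: TunnelPlan
    desired_post_repair_flexion_deg: float
```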
[0031] The specification now turns to intraoperative aspects. The intraoperative aspects include steps and procedures for setting up the surgical system to perform the various repairs. It is noted, however, that some of the intraoperative aspects (e.g., optical system calibration) may take place before any ports or incisions are made through the patient’s skin, and in fact before the patient is wheeled into the surgical room. Nevertheless, such steps and procedures may be considered intraoperative as they take place in the surgical setting and with the surgical equipment and instruments used to perform the actual repair.
[0032] The example ACL repair is conducted arthroscopically and is computer-assisted in the sense that the surgical controller 418 is used for arthroscopic navigation within the surgical site. More particularly, in example systems the surgical controller 418 provides computer-assisted navigation during the ligament repair by tracking the location of various objects within the surgical site, such as the location of the bone within the three-dimensional coordinate space of the view of the arthroscope, and the location of the various instruments (e.g., a drill wire) within the three-dimensional coordinate space of the view of the arthroscope. The specification turns to a brief description of such tracking techniques.
[0033] Figure 5 shows a conceptual drawing of a surgical site with various objects within the surgical site. In particular, visible in Figure 5 is a distal end of the arthroscope 408, a portion of a bone 500 (e.g., femur), the bone fiducial 428 within the surgical site, and the touch probe 424. Each is addressed in turn.
[0034] The arthroscope 408 illuminates the surgical site with visible light. In the example
of Figure 5, the illumination is illustrated by arrows 508. The illumination provided to the surgical site is reflected by various objects and tissues within the surgical site, and the reflected light that returns to the distal end enters the arthroscope 408, propagates along an optical channel within the arthroscope 408, and is eventually incident upon a capture array within the camera 410 (Figure 4). The images detected by the capture array within the camera 410 are sent electronically to the surgical controller 418 (Figure 4) and displayed on the display device 414 (Figure 4). In one example, the arthroscope 408 is monocular or has a single optical path through the arthroscope for capturing images of the surgical site, notwithstanding that the single optical path may be constructed of two or more optical members (e.g., glass rods, optical fibers). That is to say, in example systems and methods the computer-assisted navigation provided by the arthroscope 408, the camera 410, and the surgical controller 418 is provided with an arthroscope 408 that is not a stereoscopic endoscope having two distinct optical paths separated by an interocular distance at the distal end of the endoscope.
[0035] During a surgical procedure, a surgeon selects an arthroscope with a viewing direction beneficial for the planned surgical procedure. Viewing direction refers to a line residing at the center of the angle subtended by the outside edges or peripheral edges of the view of an endoscope. The viewing direction for some arthroscopes is aligned with the longitudinal central axis of the arthroscope, and such arthroscopes are referred to as “zero degree” arthroscopes (e.g., the angle between the viewing direction and the longitudinal central axis of the arthroscope is zero degrees). The viewing direction of other arthroscopes forms a non-zero angle with the longitudinal central axis of the arthroscope. For example, for a 30° arthroscope the viewing direction forms a 30° angle to the longitudinal central axis of the arthroscope, the angle measured as an obtuse angle beyond the distal end of the arthroscope. In many cases for ACL repair, the surgeon selects a 30° arthroscope or a 45° arthroscope based on the location of the port created through the skin of the patient. In the example of Figure 5, the view angle 510 of the arthroscope 408 forms a non-zero angle to the longitudinal central axis 512 of the arthroscope 408.
[0036] Still referring to Figure 5, within the view of the arthroscope 408 is a portion of the bone 500 (e.g., within the intercondylar notch), along with the example bone
fiducial 428, and the example touch probe 424. The example bone fiducial 428 is a multifaceted element, with each face or facet having a fiducial disposed or created thereon. However, the bone fiducial need not have multiple faces, and in fact may take any shape so long as that shape can be tracked within the video images. The bone fiducial, such as bone fiducial 428, may be attached to the bone 500 in any suitable form, in this example by the screw portion of the bone fiducial 428 (not visible in Figure 5, but visible in Figure 4). The patterns of the fiducials on each facet are designed to provide information regarding the orientation of the bone fiducial 428 in the three-dimensional coordinate space of the view of the arthroscope 408. More particularly, the pattern is selected such that the orientation of the bone fiducial 428 may be determined from images captured by the arthroscope 408 and attached camera (Figure 4).
[0037] The touch probe 424 is also shown as partially visible within the view of the arthroscope 408. The touch probe 424 may be used, as discussed more below, to identify a plurality of surface features on the bone 500 as part of the registration of the bone 500 to the three-dimensional bone model. Alternatively, though not specifically shown, the aimer 426 (Figure 4) may be used as the device to help with the registration process. In some cases the touch probe 424 and/or the aimer 426 may carry their own, unique fiducials, such that their respective poses may be calculated from the one or more fiducials present in the video stream. However, in other cases, and as shown, the medical instrument used to help with registration of the three-dimensional bone model, be it the touch probe 424, the aimer 426, or any other suitable medical device, may omit carrying fiducials. Stated otherwise, in such examples the medical instrument has no fiducial markings. In such cases, the pose of the medical instrument may be determined by a machine learning model, discussed in more detail below.
[0038] The images captured by the arthroscope 408 and attached camera are subject to optical distortion in many forms. For example, the visual field between the distal end of the arthroscope 408 and the bone 500 within the surgical site is filled with fluid, such as bodily fluids and saline used to distend the joint. Many arthroscopes have one or more lenses at the distal end that widen the field of view, and the wider field of view causes a “fish eye” effect in the captured images. Further, the optical elements within the arthroscope (e.g., rod lenses) may have optical aberrations inherent to the manufacturing and/or
assembly process. Further still, the camera may have various optical elements for focusing the images received onto the capture array, and the various optical elements may have aberrations inherent to the manufacturing and/or assembly process. In example systems, prior to use within each surgical procedure, the endoscopic optical system is calibrated to account for the various optical distortions. The calibration creates a characterization function that characterizes the optical distortion, and further analysis of the frames of the video stream may be, prior to further analysis, compensated using the characterization function.
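A minimal sketch of such a calibration, assuming a checkerboard target and the OpenCV calibration routines (the patent requires only that a characterization function be created; the board geometry and library are illustrative choices):

```python
# Minimal calibration sketch: estimate the intrinsic matrix and distortion
# coefficients (the "characterization function") from checkerboard views,
# then compensate frames before further analysis. Board and square size are
# assumptions; a few successful corner detections are assumed to occur.
import cv2
import numpy as np

def calibrate(frames, board=(9, 6), square_mm=3.0):
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_mm
    obj_pts, img_pts, size = [], [], None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    # K is the 3x3 intrinsic matrix; dist holds the distortion coefficients.
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist

def compensate(frame, K, dist):
    # Apply the characterization function: undo lens distortion in one frame.
    return cv2.undistort(frame, K, dist)
```

For the strong wide-angle distortion of some arthroscopes, a fisheye-specific camera model (e.g., OpenCV’s fisheye module) may fit better than the standard pinhole-plus-distortion model; the choice is an implementation detail, not part of the example method.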
[0039] The next example step in the intraoperative procedure is the registration of the bone model created during the planning stage. During the intraoperative repair, the three-dimensional bone model is obtained by or provided to the surgical controller 418. Again using the example of anterior cruciate ligament repair, and specifically computer-assisted navigation for tunnel paths through the femur, the three-dimensional bone model of the lower portion of the femur is obtained by or provided to the surgical controller 418. Thus, the surgical controller 418 receives the three-dimensional bone model, and assuming the arthroscope 408 is inserted into the knee by way of a port through the patient’s skin, the surgical controller 418 also receives video images of a portion of the lower end of the femur. In order to relate the three-dimensional bone model to the images received by way of the arthroscope 408 and camera 410, the surgical controller 418 registers the three-dimensional bone model to the images of the femur received by way of the arthroscope 408 and camera 410.
[0040] In order to perform the registration, and in accordance with example methods, the bone fiducial 428 is attached to the femur. The bone fiducial placement is such that the bone fiducial is within the field of view of the arthroscope 408, but in a location spaced apart from the expected tunnel entry/exit point through the lateral condyle. More particularly, in example cases the bone fiducial 428 is placed within the intercondylar notch superior to the expected location of the tunnel through the lateral condyle.
[0041] Figure 6 is an example frame of a video display showing portions of a femur and a bone fiducial. The display may be shown, for example, on the display device 414 (Figure 4) associated with the device cart 402 (Figure 4), or any other suitable location. In particular, visible in Figure 6 is a femoral notch or intercondylar notch 600, a portion of
the lateral condyle 102, a portion of the medial condyle 104, and an example bone fiducial 428. In the example, each of the outer faces of the bone fiducial 428 has a machine-readable pattern or fiducial thereon, and in some cases each fiducial is unique. In other cases, the upper face of the bone fiducial 428, opposite the face from which the threaded portion extends, may have an attachment feature and thus not carry a fiducial. In still other cases, when the viewing direction of the arthroscope 408 is relatively constant, the bone fiducial may have only a single face with a fiducial. Regardless of the precise arrangement of the bone fiducial 428, once placed the bone fiducial 428 represents a fixed location on the outer surface of the bone in the view of the arthroscope 408, even as the position of the arthroscope 408 is moved and changed relative to the bone fiducial 428. Initially, the location of the bone fiducial 428 with respect to the three-dimensional bone model is not known to the surgical controller 418, hence the need for the registration of the three-dimensional bone model.
[0042] In order to relate or register the bone visible in the video images to the three-dimensional bone model, the surgical controller 418 (Figure 4) is provided with or determines a plurality of surface features of an outer surface of the bone. Identifying the surface features may take several forms, including a touch-based registration using the touch probe 424 without a carried fiducial, a touchless registration technique in which the surface features are identified after resolving the motion of the arthroscope 408 and camera relative to the bone fiducial 428, and a third technique that uses a patient-specific instrument. Each will be addressed in turn.
[0043] In the example touch-based registration, the surgeon may touch a plurality of locations using the touch probe 424 (Figure 4). In some cases, particularly when portions of the outer surface of the bone are exposed to view, receiving the plurality of surface features of the outer surface of the bone may involve the surgeon “painting” the outer surface of the bone. “Painting” is a term of art that does not involve application of color or pigment, but instead implies motion of the touch probe 424 while the distal end of the touch probe 424 is touching bone. In this example, the touch probe 424 does not carry or have a fiducial visible to the arthroscope 408 and the camera 410. It follows that the pose of the touch probe 424, and the location of the distal tip of the touch probe 424, need to be determined in order to gather the surface features for purposes of registering the three-dimensional bone model.
[0044] Figure 7 shows a method in accordance with at least some embodiments. The example method may be implemented in software within a computer system, such as the surgical controller 418. In particular, the example method starts (block 700) and comprises obtaining a three-dimensional bone model (block 702). That is to say, in the example method, what is obtained is the three-dimensional bone model that may be created by segmenting a plurality of non-invasive images (e.g., CT, MRI) taken preoperatively or intraoperatively. With the bone segmented from or within the images, the three-dimensional bone model may be created. The three-dimensional bone model may take any suitable form, such as a computer-aided design (CAD) model, a point cloud of data points with respect to an arbitrary origin, or a parametric representation of a surface expressed using analytical mathematical equations. Thus, the three-dimensional bone model is defined with respect to an origin and in any suitable orthogonal basis.
[0045] The next step in the example method is capturing video images of the bone fiducial attached to the bone (block 704). The capturing is performed intraoperatively. In the example case of an arthroscopic anterior cruciate ligament repair, the capturing of video images is by way of the arthroscope 408 and camera 410. Other endoscopes may be used, such as endoscopes in which the capture array resides at the distal end of the device (e.g., chip-on-the-tip devices). However, in open procedures where the skin is cut and pulled away, exposing the bone to the open air, the capturing may be by any suitable camera device, such as one or both cameras of a stereoscopic camera system, or a portable computing device, such as a tablet or smart-phone device. The video images may be provided to the surgical controller 418 in any suitable form.
[0046] The next step in the example method is determining locations of a distal tip of the medical instrument visible within the video images (block 706), where the distal tip is touching the bone in at least some of the frames of the video images, and the medical instrument does not have a fiducial. Determining the locations of the distal tip of the medical instrument may take any suitable form. In one example, determining the locations may include segmenting the medical instrument in the frames of the video images (block 708). The segmenting may take any suitable form, such as applying the video images to a segmentation machine learning algorithm. The segmentation machine
learning algorithm may take any suitable form, such as a neural network or convolutional neural network trained with a training data set showing the medical instrument in a plurality of known orientations. The segmentation machine learning algorithm may produce segmented video images in which the medical instrument is identified or highlighted in some way (e.g., bounding box, brightness increased, other objects removed).
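As a sketch only, frame-by-frame inference with such a segmentation network might look like the following; the network, its (1, 1, H, W) output shape, and the 0.5 threshold are assumptions of the sketch, since the method admits any suitable segmentation algorithm.

```python
# Hedged sketch: running video frames through a trained segmentation network.
# The (1, 1, H, W) logit shape and the 0.5 threshold are assumptions.
import torch

def segment_frames(frames, model, device="cpu"):
    """frames: iterable of HxWx3 uint8 numpy arrays.
    Returns a list of HxW boolean masks highlighting the medical instrument."""
    model = model.eval().to(device)
    masks = []
    with torch.no_grad():
        for frame in frames:
            # Normalize to a (1, 3, H, W) float tensor in [0, 1].
            x = torch.from_numpy(frame).permute(2, 0, 1).float().div(255.0)
            logits = model(x.unsqueeze(0).to(device))
            masks.append((logits.sigmoid() > 0.5).squeeze().cpu().numpy())
    return masks
```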
[0047] With the segmented video images, the example method may estimate a plurality of poses of the medical instrument within a respective plurality of frames of the video images (block 710). Estimating the poses may take any suitable form, such as applying the video images to a pose machine learning algorithm. The pose machine learning algorithm may take any suitable form, such as a neural network or convolutional neural network trained to perform six-dimensional pose estimation. The resultant of the pose machine learning algorithm may be, for at least some of the frames of the video images, an estimated pose of the medical instrument in the reference frame of the video images and/or in the reference frame provided by the bone fiducial. That is, the resultant of the pose machine learning algorithm may be a plurality of poses, one pose each for at least some of the frames of the segmented video images. While in many cases a pose may be determined for each frame, in other cases it may not be possible to make a pose estimation for at least some frames because of video quality issues, such as motion blur caused by electronic shutter operation.
[0048] The next step in the example method is determining the locations based on the plurality of poses (block 712). In particular, for each frame for which a pose can be estimated, the location of the distal tip can be determined, based on a model of the medical device, in the reference frame of the video images and/or the bone fiducial. Thus, the resultant is a set of locations, at least some of which represent locations on the outer surface of the bone.
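Given a per-frame pose estimate, recovering the distal-tip location is a single rigid transform of the known tip offset taken from the instrument's geometric model. A minimal sketch, with names chosen for illustration:

```python
# Minimal sketch: distal-tip location from a per-frame 6-DOF pose estimate.
import numpy as np

def tip_location(R, t, tip_in_model):
    """R: 3x3 rotation and t: (3,) translation of the instrument pose;
    tip_in_model: (3,) tip offset in the instrument's own model frame.
    Returns the tip position in the reference frame of the pose."""
    return R @ tip_in_model + t

def collect_surface_points(poses, tip_in_model):
    # One candidate surface location per frame with a valid pose; frames
    # without a pose (e.g., motion blur) simply contribute nothing.
    return np.array([tip_location(R, t, tip_in_model) for R, t in poses])
```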
[0049] Figure 7 shows an example three-step process for determining the locations of the distal tip of the medical instrument. However, the method is merely an example, and many variations are possible. For example, a single machine learning model, such as a convolutional neural network, may be set up and trained to perform all three steps as a single overall process, though there may be many hidden layers of the convolutional neural network. That is, the convolutional neural network may segment the medical instrument, perform the six-dimensional pose estimation, and determine the location of the distal tip in each frame. The training data set in such a situation would include a data set in which each frame has the medical device segmented, the six-dimensional pose identified, and the location of the distal tip identified. The output of the determining step 706 may be a segmented video stream distinct from the video images captured at step 704. In such cases, the later method steps may use both the segmented video stream and the video images to perform the further tasks. In other cases, the location information may be combined with the video images, such as being embedded in the video images, or added as metadata to each frame of the video images.
[0050] Figure 8 is an example video display showing portions of a femur and a bone fiducial during a registration procedure. The display may be shown, for example, on the display device 414 associated with the device cart 402, or at any other suitable location. In particular, visible in the main part of the display of Figure 8 is the intercondylar notch 600, a portion of the lateral condyle 102, a portion of the medial condyle 104, and the example bone fiducial 428. Shown in the upper right corner of the example display is a depiction of the bone, which may be a rendering 800 of the bone created from the three-dimensional bone model. Shown on the rendering 800 is a recommended area 802, the recommended area 802 being portions of the surface of the bone to be “painted” as part of the registration process. Shown in the lower right corner of the example display is a depiction of the bone, which again may be a rendering 804 of the bone created from the three-dimensional bone model. Shown on the rendering 804 are a plurality of surface features 806 on the bone model that have been identified as part of the registration process. Further shown in the lower right corner of the example display is a progress indicator 808, showing the progress of providing and receiving of locations on the bone. The example progress indicator 808 is a horizontal bar having a length that is proportional to the number of locations received, but any suitable graphic or numerical display showing progress may be used (e.g., 0% to 100%).
[0051] Referring to both the main display and the lower right rendering, as the surgeon touches the outer surface of the bone within the images captured by the arthroscope 408 and camera 410, the surgical controller 418 receives the surface features on the bone, and may display each location both within the main display as dots or locations 806, and
within the rendering shown in the lower right corner. More specifically, the example surgical controller 418 overlays indications of identified surface features 806 on the display of the images captured by the arthroscope 408 and camera 410, and in the example case shown, also overlays indications of identified surface features 806 on the rendering 804 of the bone model. Moreover, as the number of identified locations 806 increases, the surgical controller 418 also updates the progress indicator 808.
[0052] Still referring to Figure 8, in spite of the diligence of the surgeon, not all locations identified by the surgical controller 418 based on the surgeon’s movement of the touch probe 424 result in valid locations on the surface of the bone. In the example of Figure 8, as the surgeon moves the touch probe 424 from the inside surface of the lateral condyle 102 to the inside surface of the medial condyle 104, the surgical controller 418, based on the example six-dimensional pose estimation, receives several locations 810 that likely represent locations at which the distal end of the touch probe 424 was not in contact with the bone.
[0053] Returning to Figure 7, the plurality of surface features 806 may themselves form, or the example surgical controller 418 may generate from them, a registration model relative to the bone fiducial 428 (block 714). The registration model may take any suitable form, such as a computer-aided design (CAD) model or a point cloud of data points in any suitable orthogonal basis. The registration model, regardless of the form, may have fewer overall data points or less “structure” than the bone model created by the non-invasive computer imaging (e.g., MRI). However, the goal of the registration model is to provide the basis for the coordinate transforms and scaling used to correlate the bone model to the registration model and relative to the bone fiducial 428. Thus, the next step in the example method is registering the bone model relative to the location of the bone fiducial based on the registration model (block 716). Registration may conceptually involve testing a plurality of coordinate transformations and scaling values to find a transformation that has a sufficiently high correlation or confidence factor. Once a correlation is found with a sufficiently high confidence factor, the bone model is said to be registered to the location of the bone fiducial. Thereafter, the example registration method may end (block 718); however, the surgical controller 418 may then use the registered bone model to provide computer-assisted navigation regarding a procedure involving the bone.
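The transformation search can be realized in many ways; one common realization is an iterative closest point (ICP) scheme. The sketch below is a bare point-to-point ICP over numpy point clouds, offered only as an illustration: scale estimation and rejection of outliers (such as the locations 810 of Figure 8) are omitted, and the final RMSE merely stands in for the confidence factor described above.

```python
# Illustrative point-to-point ICP sketch; not the patented method itself.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    # Least-squares rotation/translation mapping src onto dst (Kabsch/SVD).
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(registration_pts, bone_model_pts, iters=50):
    """Align the sparse registration model to the dense bone model points.
    Returns rotation R, translation t, and an RMSE that can stand in for
    the confidence factor described in the text."""
    tree = cKDTree(bone_model_pts)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = registration_pts @ R.T + t
        _, idx = tree.query(moved)    # closest bone-model point per sample
        dR, dt = best_rigid_transform(moved, bone_model_pts[idx])
        R, t = dR @ R, dR @ t + dt
    dists, _ = tree.query(registration_pts @ R.T + t)
    return R, t, float(np.sqrt(np.mean(dists ** 2)))
```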
[0054] In the examples discussed to this point, registration of the bone model involves a touch-based registration technique using the touch probe 424 without a carried fiducial. However, other registration techniques are possible, and the specification now turns to a touchless registration technique. The example touchless registration technique again relies on placement of the bone fiducial 428, as illustrated in Figure 6. As before, when the viewing direction of the arthroscope 408 is relatively constant, the bone fiducial may have fewer faces with respective fiducials. Once placed, the bone fiducial 428 represents a fixed location on the outer surface of the bone in the view of the arthroscope 408, even as the position of the arthroscope 408 is moved and changed relative to the bone fiducial 428. Again, in order to relate or register the bone visible in the video images to the three-dimensional bone model, the surgical controller 418 (Figure 4) determines a plurality of surface features of an outer surface of the bone, and in this example determining the plurality of surface features is based on a touchless registration technique in which the surface features are identified based on motion of the arthroscope 408 and camera 410 relative to the bone fiducial 428.
[0055] Figure 9 shows an example touchless registration method. The example method may be implemented in software within a computer system, such as the surgical controller 418. In particular, the example method starts (block 900) and comprises obtaining a three-dimensional bone model (block 902). Much like the touch-based registration technique, in the touchless registration technique what is obtained is the three-dimensional bone model that may be created by segmenting a plurality of non-invasive images (e.g., MRI) taken preoperatively or intraoperatively.
[0056] The next step in the example method is capturing video images of the bone fiducial attached to the bone (block 904). Here again, the capturing may be performed intraoperatively. In the example case of an arthroscopic anterior cruciate ligament repair, the capturing of video images is by way of the arthroscope 408 and camera 410. However, in open procedures where the skin is cut and pulled away, exposing the bone to the open air, the capturing may be by any suitable camera device, such as one or both cameras of a stereoscopic camera system, or a portable computing device, such as a tablet or smart-phone device. The video images may be provided to the surgical controller 418 in any suitable form.
[0057] The next step in the example method is resolving motion of the camera relative to the bone fiducial based on the video images (block 906). In particular, not only does the bone fiducial 428 represent a fixed location with respect to the bone, but the bone fiducial also provides size and scale information, either directly or indirectly. For example, the fiducial visible on each face of the bone fiducial may be or define a predetermined size. By determining changes in aspect ratio and size of the fiducial in the frames of the video images, the surgical controller 418 may determine the motion of the arthroscope 408 and camera 410 from frame-to-frame. Over the course of a plurality of frames, the motion of the camera relative to the bone fiducial may be determined. Stated otherwise, for at least some of the frames of the video images, the pose of the arthroscope 408 relative to the bone fiducial 428 may be determined. The change of pose from frame-to-frame represents the motion of the arthroscope 408 relative to the bone fiducial 428.
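As an illustrative sketch, the per-frame pose of the camera relative to one planar fiducial face of known side length can be recovered with a perspective-n-point solve. The corner detection is abstracted away, and the square-face assumption is a choice of the sketch, not a requirement of the method.

```python
# Hedged sketch: camera pose relative to one square fiducial face of known
# side length, from one undistorted frame. Corner detection (e.g., an
# ArUco-style detector) is assumed to have produced corners_px.
import cv2
import numpy as np

def camera_pose_from_fiducial(corners_px, side_mm, K, dist):
    """corners_px: (4, 2) image corners of one fiducial face in a known order.
    Returns (R, t): fiducial-to-camera rotation and translation."""
    s = side_mm / 2.0
    obj = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]], np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, corners_px.astype(np.float32), K, dist)
    if not ok:
        raise RuntimeError("pose could not be recovered for this frame")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec.reshape(3)

# Frame-to-frame camera motion then follows by composing consecutive poses,
# T_k @ inv(T_{k-1}), in 4x4 homogeneous form.
```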
[0058] The next step in the example method is identifying a plurality of surface features on the bone in the video images based on the motion of the camera (block 908). More particularly, in the example touchless registration technique, the plurality of surface features are identified from the frames of the video stream without the use of a medical instrument physically touching the bone in the video images. Stated more precisely still, in the example touchless registration technique the plurality of surface features are identified from the frames of the video stream without the use of a touch probe, aimer, or other medical instrument physically touching the bone in the video images. Determining the surface features may take any suitable form. In one example, determining the surface features may include applying the video images and an indication of motion of the camera to a machine learning algorithm trained to recognize optical texture, such as spatial changes in brightness intensity even in the absence of physical features, or optical texture caused by surface features, such as contours, ridges, and/or valleys. The machine learning algorithm may take any suitable form, such as a neural network or convolutional neural network trained with a training data set identifying surface features for a plurality of different poses of the camera and a plurality of different bones. Variations on the identifying are discussed in greater detail below.
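The specification does not fix a network architecture; the following is a minimal, hypothetical sketch of the general arrangement described above, in which a frame and a six-degree-of-freedom motion vector are combined to produce a per-pixel surface-feature likelihood map.

```python
# Hypothetical sketch only: a frame plus a six-degree-of-freedom
# camera-motion vector produce a per-pixel surface-feature likelihood
# map. The architecture and shapes are invented for illustration and
# are not the trained model of the specification.
import torch
import torch.nn as nn

class SurfaceFeatureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.motion_fc = nn.Linear(6, 16)   # motion vector -> per-channel bias
        self.head = nn.Conv2d(16, 1, 1)     # feature-likelihood heatmap

    def forward(self, frame, motion):
        x = self.conv(frame)
        x = x + self.motion_fc(motion)[:, :, None, None]
        return torch.sigmoid(self.head(x))

net = SurfaceFeatureNet()
heatmap = net(torch.rand(1, 3, 240, 320), torch.rand(1, 6))  # (1, 1, 240, 320)
```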
[0059] The next steps in the example method are generating a registration model of the bone relative to the bone fiducial (block 910), and registering the bone model relative to the location of the bone fiducial (block 912). The generation of the registration model and the registering of the bone model are similar to or the same as those discussed with respect to the touch-based registration technique of Figure 6 above. So as not to unduly lengthen the specification, the generation of the registration model and the registering of the bone model based on the registration model are not repeated here. Thereafter, the example registration method may end (block 914); however, the surgical controller 418 may then use the registered bone model to provide computer-assisted navigation regarding a procedure involving the bone. The discussion now turns to a more detailed description of the identifying of the plurality of surface features.
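Although the details are carried over from Figure 6, the registration of block 912 conceptually reduces to finding a transform that aligns corresponding points of the bone model with the registration model. One standard closed-form solver for such a similarity transform is the Umeyama/Kabsch method, sketched below as an assumption about one workable approach rather than the method mandated by the specification.

```python
# Hedged sketch: closed-form similarity alignment (Umeyama/Kabsch) of
# corresponding 3D point sets, one standard way to compute the transform
# that registers the bone model to the registration model. Use of this
# particular solver is an assumption, not taken from the specification.
import numpy as np

def similarity_transform(src, dst):
    """Return (s, R, t) such that dst_i ~= s * R @ src_i + t for each point."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))              # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # reflection guard
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()        # isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```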
[0060] One of the issues faced in identifying the plurality of surface features on the bone is the fact that the bone and surrounding tissues have low texture. Further still, in the limited viewing angle of the arthroscope 408, the outer surface of the bone may have smoothly varying features. Thus, detecting surface features by finding and tracking prominent features is a non-trivial task.
[0061] Various related-art photogrammetric techniques exist for simultaneously extracting the three-dimensional structure and the motion of a camera from a plurality of frames of two-dimensional video images. More particularly, within a subset of computer vision are techniques referred to as structure from motion (SfM) techniques. In related-art structure from motion techniques, the computer system identifies prominent features within the video images, and tracks those features frame-to-frame. However, in the context of identifying a plurality of surface features on a bone in video images captured by an arthroscope 408 and camera 410, related-art structure from motion techniques are inadequate. Again, in the context of anterior-cruciate ligament repair, the bone and surrounding tissues are low-textured surfaces, creating difficulties for structure-from-motion approaches. In more detail, arthroscopic sequences are especially challenging not only because of poor texture, but also because of the existence of deformable tissues, complex illumination, shadows, highlights, fog, condensation, and very close-range acquisition. In addition, the camera is hand-held, the lens scope rotates, the procedure is performed in a wet medium, and the surgeon often switches the camera portal. Attempts at using visual SLAM pipelines, reported to work in laparoscopy, were unfruitful and revealed the need for additional visual aids to accomplish the robustness required for real clinical uptake. Thus, related-art structure from motion techniques cannot consistently select prominent features of the outer surface of the bone, nor find correct correspondences between the features in consecutive frames, and thus have difficulty simultaneously solving for the motion of the camera and the three-dimensional structures.

[0062] The example touchless registration technique addresses, at least in part, the issues of related-art structure from motion techniques by use of an artificial feature in the form of the bone fiducial 428. In particular, in the example methods and systems, the bone fiducial 428 is visible in the video images as the arthroscope 408 and camera 410 are moved. Because the bone fiducial 428 represents a known or knowable size and a fixed location, the motion of the arthroscope 408 and camera 410 can be precisely determined or resolved. When the motion is resolved with a high degree of accuracy, determination of the three-dimensional structure represented in the two-dimensional video images is easier to obtain. That is to say, with the motion of the arthroscope 408 and camera 410 resolved to a high degree of accuracy, identifying surface features on the bone becomes a more determinative and easier computational task, improving the operation of the computer system in the form of the surgical controller 418.
[0063] In particular, with the known camera motion, several techniques may be used to determine the three-dimensional structure represented in the two-dimensional video images. For example, one technique may be to perform feature extraction, matching, and reconstruction on each incoming frame. That is, for each incoming frame, features are extracted and explicitly matched, and then reconstruction is performed. As another example, features are extracted in one frame and tracked in subsequent frames (i.e., locating features in the subsequent frames does not involve feature extraction). Further still, a dense tracking approach may be used, where tracking is performed for most or all of the pixels in the image. Tracking some or all of the pixels provides dense scene reconstruction. Yet another approach is a deep learning approach, such as using neural radiance fields to perform the feature extraction, matching, and reconstruction simultaneously and in an implicit manner. For all these approaches, the known camera motion is used to constrain the problem, and the three-dimensional model of the bone may provide further constraints. These solutions may provide not only the reconstruction of the scene, but also differential information associated with each reconstructed point (e.g., the normal vectors).
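For instance, once the camera poses are resolved from the bone fiducial, the three-dimensional position of a feature matched across two frames follows from standard two-view triangulation. In the sketch below, the intrinsics, the 5 mm baseline, and the pixel coordinates are placeholder values.

```python
# Placeholder-value sketch: with camera poses already resolved from the
# bone fiducial, a feature matched in two frames is triangulated to a
# 3D surface point. K, the baseline, and the pixels are illustrative.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])        # frame 1 at the origin
R2, t2 = np.eye(3), np.array([[-5.0], [0.0], [0.0]])     # resolved motion to frame 2
P2 = K @ np.hstack([R2, t2])

pt1 = np.array([[300.0], [200.0]])                       # feature pixel in frame 1
pt2 = np.array([[260.0], [200.0]])                       # same feature in frame 2
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)            # homogeneous 4-vector
X = (X_h[:3] / X_h[3]).ravel()                           # ~(-2.5, -5.0, 100.0) mm
```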
[0064] Unlike related-art structure from motion techniques, which rely on feature extraction and matching for retrieving structure and camera motion simultaneously, the various examples utilize the pose of the bone fiducial 428 in each frame to resolve the motion aspect. The structure of the bone fiducial 428 may be used directly - using the known or predetermined features of the bone fiducial 428, including the visible fiducials. In other cases, the surgical controller 418 may read a visible fiducial, and based on the reading obtain the size information, either from the fiducial directly or by accessing a local- or internet-based database.
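A hedged sketch of the indirect case follows: the surgical controller decodes a machine-readable fiducial (a QR code here) and looks up the physical size. The payload strings and the local lookup table are invented for illustration; the text also allows an internet-based database lookup.

```python
# Hypothetical sketch: decode a machine-readable fiducial (a QR code
# here) and look up its physical size. The payload strings and the
# local table are invented for illustration.
import cv2

MARKER_SIZES_MM = {"FID-A": 25.0, "FID-B": 40.0}   # hypothetical size table

def fiducial_size_mm(frame):
    payload, corners, _ = cv2.QRCodeDetector().detectAndDecode(frame)
    return MARKER_SIZES_MM.get(payload) if payload else None
```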
[0065] As mentioned above, the bone in the limited field of view of the arthroscope 408 has low texture, and for many patients the bone has smoothly varying features. Though in some cases the example touchless registration techniques operate directly, in other cases one or more additional steps may be included to ensure an adequate sample set of surface features is identified on the bone. The additional steps may be conceptually divided into illumination strategies, application of coloration to the bone, and creating optical texture. Each will be addressed in turn.
[0066] Bone is a matrix of calcium and collagen, with the collagen acting as mechanical support for the bone cells. Calcium and collagen may have different absorption and/or reflectivity coefficients as a function of the wavelength of photons incident on the surface of the bone. In certain situations, the surface features of the bone may be highlighted by illuminating the bone with photons having wavelengths outside the visible range, and correspondingly receiving the video images based on the illuminating photons. More particularly, in one example the bone may be illuminated by light in the near infrared range, such as wavelengths between and including 780 nanometers (nm) and 2500 nm. In other cases, the bone may be illuminated with light having wavelengths between and including 488 nm and 620 nm. The difference in absorption and/or reflectivity of the collagen and calcium in the matrix may make the optical texture more prominent in the video images. The frames of the video images may be used directly by the surgical controller 418 to resolve the motion and identify the surface features. That is, the image capture array of the camera 410 may be selected to be responsive to photons in the near infrared range, and the surgical controller 418 may resolve the motion of the arthroscope 408 and camera 410, and identify surface features, based on the near infrared photons reflected or emitted from the bone.
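The specification does not prescribe any particular image processing for the near-infrared frames; as one plausible, assumed preprocessing step, local contrast enhancement can make the wavelength-dependent reflectivity differences read as stronger optical texture.

```python
# Assumed preprocessing sketch: local contrast enhancement (CLAHE) of
# an 8-bit near-infrared frame so collagen/calcium reflectivity
# differences read as stronger optical texture. CLAHE is one plausible
# choice, not a step mandated by the specification.
import cv2

def enhance_nir_texture(nir_gray_u8):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(nir_gray_u8)
```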
[0067] The second conceptual additional step is application of a surface coloration to the bone prior to identifying the surface features. In particular, in these examples a chemical compound is applied to the bone to add optical texture to the bone within the video images. Stated otherwise, in these examples a chemical compound is applied to the bone to highlight optical texture of the bone. In one example, a biocompatible dye may be applied to the surface of the bone. The dye may take any suitable form, such as a dye similar to food coloring or butcher's ink. The dye thus imparts a color to the bone, such as blue, that makes identification of surface features easier for the surgical controller 418. The dye may be placed on large swaths of the surface area of the bone visible in the video images (e.g., covering 50% or more of the surface area, or covering 75% or more of the surface area). In other cases, the dye may be applied as periodic or irregular dots or splotches on the surface of the bone.
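As an illustrative sketch only, dye-colored regions can be isolated with a simple color-space threshold so that downstream feature identification concentrates on the dots or splotches; the HSV bounds below are assumed values that would need tuning for a real dye and light source.

```python
# Illustrative sketch with assumed HSV bounds: isolate blue-dyed
# regions so feature identification can concentrate on the dots or
# splotches. Real bounds depend on the dye and light source.
import cv2
import numpy as np

def dyed_regions(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array([100, 80, 50]), np.array([130, 255, 255]))
```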
[0068] As another example of coloration, fluorescent proteins, such as green fluorescent protein (GFP), may be applied to the surface of the bone. Green fluorescent proteins are proteins that fluoresce bright green in the presence of ultraviolet light. Proteins that fluoresce in other colors may be available now or in the future. As with the dye, the green fluorescent proteins impart a color to the bone, here green, which makes identification of surface features easier for the surgical controller 418. The fluorescent proteins may be placed on large swaths of the surface area of the bone visible in the video images (e.g., covering 50% or more of the surface area, or covering 75% or more of the surface area). In other cases, the green fluorescent proteins may be applied as periodic or irregular dots or splotches on the surface of the bone. Thus, this example combines an illumination strategy with application of coloration to achieve the goal of identifying surface features. That is, the light source for the arthroscope may be designed and constructed to produce ultraviolet light, and provide the combined ultraviolet and visible light to the arthroscope 408 and thus to the surgical site.
[0069] The third conceptual additional step to assist in identifying surface features is creating optically texturized regions on the surface of the bone. That is, in many cases the bone is smooth and has smoothly varying features. The additional step in these examples involves adding optical texture that assists with the identification of surface features, but which does not severely affect the structural integrity of the bone. In one example, the optical texture may be added at a non-load-bearing location. In the example of an anterior cruciate ligament repair, the optical texture may be added within the intercondylar notch, on the inside surface of the lateral condyle (e.g., above the lateral meniscus 112), and/or on the inside surface of the medial condyle (e.g., above the medial meniscus 114).
[0070] Adding the optical texture may take many forms. In one example, the optical texture may be added by a mechanical resection instrument having a rotating cutting element or burr, but where the burr is designed and constructed to limit the amount of bone removed. For example, the burr may be designed and constructed to mill or create grooves having depths of no more than 0.1 millimeters (mm). In other cases, the optical texture may be created by a counterbore drill bit designed and constructed to limit the drill depth, such as by a shoulder region limiting the drill depth to no more than 0.1 mm. For example, the surgeon could place a plurality of counterbores in the bone along non-load-bearing surfaces. Further still, the surgeon may use any suitable instrument, such as the aimer 426 (Figure 4), to perform a mechanical abrasion of the surface of the bone, such as pulling the aimer along the surface of the bone and thereby making surface indentations.
[0071] The examples discussed to this point for the touchless registration technique separate the resolving of the motion of the camera relative to the bone fiducial from the identifying of the surface features. However, in other cases the resolving and identifying may be an integrated programmatic method, in which the bone fiducial is found in each frame, the motion is resolved frame-to-frame, and the identification of the surface features takes place substantially simultaneously with the resolving of the motion frame-to-frame.

[0072] The specification now turns to a third technique for registering the bone model to the bone - a patient-specific instrument. In both the touch-based and touchless registration techniques discussed to this point, a registration model is created, and the registration model is used to register the bone model to the bone visible in the video images. Conceptually, the registration model is used to determine a coordinate transformation and scaling to align the bone model to the actual bone. However, if the orientation of the bone in the video images is known or can be determined, use of the registration model may be omitted, and instead the coordinate transformations and scaling may be calculated directly.
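A minimal sketch of that direct calculation, assuming homogeneous 4x4 transforms: the design-time transform from bone-model coordinates to the patient-specific instrument composes with the instrument pose observed in the video, and no intermediate registration model is needed. Both matrices below are identity placeholders.

```python
# Minimal sketch of the direct calculation with homogeneous 4x4
# transforms. Because the patient-specific instrument fits the bone in
# exactly one pose, the design-time bone-model-to-instrument transform
# composes with the instrument pose seen in the video; both matrices
# here are identity placeholders.
import numpy as np

T_model_to_instr = np.eye(4)    # fixed when the instrument is designed (placeholder)
T_instr_to_camera = np.eye(4)   # estimated from the video images (placeholder)

# Bone-model coordinates -> camera coordinates, no registration model.
T_model_to_camera = T_instr_to_camera @ T_model_to_instr
```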
[0073] Figure 10 shows a method in accordance with at least some embodiments. The example method may be implemented in software within one or more computer systems, such as, in part, the surgical controller 418. In particular, the example method starts (block 1000) and comprises obtaining a three-dimensional bone model (block 1002). Much like the prior techniques, in the patient-specific instrument registration technique what is obtained is the three-dimensional bone model that may be created by segmenting a plurality of non-invasive images (e.g., MRI) taken preoperatively or intraoperatively.
[0074] The next step in the example method is generating a patient-specific instrument that has a feature designed to couple to the bone represented in the bone model in only one orientation (block 1004). Generating the patient-specific instrument may first involve selecting a location at which the patient-specific instrument will attach. For example, a device or computer system may analyze the bone model and select the attachment location. In various examples, the attachment location may be a unique location in the sense that, if a patient-specific instrument is made to couple to the unique location, the patient-specific instrument will not couple to the bone at any other location. In the example case of an anterior cruciate ligament repair, the location selected may be at or near the upper or superior portion of the intercondylar notch. If the bone model shows another location with a unique feature, such as a bone spur or other raised or sunken surface anomaly, such a unique location may be selected as the attachment location for the patient-specific instrument.
[0075] Moreover, forming the patient-specific instrument may take any suitable form. In one example, a device or computer system may directly print, such as using a 3D printer, the patient-specific instrument. In other cases, the device or computer system may print a model of the attachment location, and the model may then become the mold for creating the patient-specific instrument. For example, the model may be the mold for an injection-molded plastic or casting technique. In some examples, the patient-specific instrument carries one or more fiducials, but as mentioned above, in other cases the patient-specific
instrument may itself be tracked and thus carry no fiducials.
[0076] The next step in the example method is coupling the patient-specific instrument to the bone, in some cases the patient-specific instrument having the fiducial coupled to an exterior surface (block 1006). As previously mentioned, the attachment location for the patient-specific instrument is selected to be unique such that the patient-specific instrument couples to the bone in only one location and in only one orientation. In the example case of an arthroscopic ACL repair, the patient-specific instrument may be inserted arthroscopically. That is, the attachment location may be selected such that a physical size of the patient-specific instrument enables insertion through the ports in the patient's skin. In other cases, the patient-specific instrument may be made or constructed of a flexible material that enables the patient-specific instrument to deform for insertion in the surgical site, yet return to the predetermined shape for coupling to the attachment location. However, in open procedures where the skin is cut and pulled away, exposing the bone to the open air, the patient-specific instrument may be a rigid device with fewer size restrictions.
[0077] The next step in the example method is capturing video images of the patient-specific instrument (block 1008). Here again, the capturing may be performed intraoperatively. In the example case of an arthroscopic anterior cruciate ligament repair, the capturing of video images is by the surgical controller 418 by way of the arthroscope 408 and camera 410. However, in open procedures where the skin is cut and pulled away, exposing the bone to the open air, the capturing may be by any suitable camera device, such as one or both cameras of a stereoscopic camera system, or a portable computing device, such as a tablet or smart-phone device. In such cases, the video images may be provided to the surgical controller 418 in any suitable form.
[0078] The next step in the example method is registering the bone model based on the location of the patient-specific instrument (block 1010). That is, given that the patient-specific instrument couples to the bone at only one location and in only one orientation, the location and orientation of the patient-specific instrument is directly related to the location and orientation of the bone, and thus the coordinate transformations and scaling for the registration may be calculated directly. Thereafter, the example method may end (block 1012); however, the surgical controller 418 may then use the registered bone
model to provide computer-assisted navigation regarding a surgical task or surgical procedure involving the bone.
[0079] For example, with the registered bone model the surgical controller 418 may provide guidance regarding a surgical task of a surgical procedure. The specific guidance is dependent upon the surgical procedure being performed and the stage of the surgical procedure. A non-exhaustive list of guidance comprises: changing a drill path entry point; changing a drill path exit point; aligning an aimer along a planned drill path; showing a location at which to cut and/or resect the bone; reaming the bone by a certain depth along a certain direction; placing a device (suture, anchor, or other) at a certain location; placing a suture at a certain location; placing an anchor at a certain location; showing regions of the bone to touch and/or avoid; and identifying regions and/or landmarks of the anatomy. In yet still other cases, the guidance may include highlighting within a version of the video images displayed on a display device, which can be the arthroscopic display or a see-through display, or by communicating to a virtual reality device or a robotic tool.
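As one hypothetical illustration of such highlighting, a planned drill path expressed in bone-model coordinates can be projected into a video frame through the registered pose; the intrinsics, pose, and path endpoints below are placeholders.

```python
# Hypothetical illustration: project a planned drill path, expressed in
# bone-model coordinates, into a video frame through the registered
# pose, then draw it as an overlay. K, the pose, and the endpoints are
# placeholders.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
rvec = np.zeros(3)                                        # registered rotation (placeholder)
tvec = np.array([0.0, 0.0, 120.0])                        # registered translation (placeholder)
path_mm = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 30.0]])  # planned drill path endpoints

pts, _ = cv2.projectPoints(path_mm, rvec, tvec, K, None)
frame = np.zeros((480, 640, 3), np.uint8)                 # stands in for a video frame
p0, p1 = pts.reshape(-1, 2).astype(int)
cv2.line(frame, (int(p0[0]), int(p0[1])), (int(p1[0]), int(p1[1])), (0, 255, 0), 2)
```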
[0080] Figure 11 shows an anterior or front elevation view of a right knee, with the patella removed, and an example patient-specific instrument installed. In particular, visible in Figure 11 is a lower portion of the femur 100 including the outer or lateral condyle 102 and the inner or medial condyle 104. The femur 100 and condyles 102 and 104 are in operational relationship to a tibia 106 including the tibial tuberosity 108 and Gerdy's tubercle 110. Disposed between the lateral condyle 102 and the medial condyle 104 is the intercondylar notch 600.
[0081] In the example of Figure 11, disposed within the intercondylar notch 600 is a patient-specific instrument 1102. In particular, the patient-specific instrument 1102 of Figure 11 is designed and constructed to couple within the intercondylar notch 600 between the superior aspect, or apex, of the intercondylar notch 600 and the medial condyle 104. In this way, the patient-specific instrument 1102 will be visible in the video images captured by the arthroscope 408 and camera 410, but will not interfere with or impede the anterior cruciate ligament repair tunnel, which is generally disposed through the lateral condyle 102.
[0082] In the example of Figure 11, the patient-specific instrument 1102 includes a feature 1104 designed and constructed to couple to the bone in only one orientation. Inasmuch as the example patient-specific instrument 1102 is shown installed, only a portion of the feature 1104 is visible in Figure 11. However, the feature 1104 is, in effect, a negative image of the bone at the attachment location.
[0083] The example patient-specific instrument 1102 further includes a fiducial 1106 disposed on or coupled to an exterior surface of the patient-specific instrument 1102. The example fiducial 1106 is shown in the form of a Quick Response (QR) code, but any machine-readable code may be used (e.g., one-dimensional bar code, two-dimensional bar code). While the example patient-specific instrument 1102 contains only a single fiducial, in other cases the patient-specific instrument 1102 may contain two or more fiducials disposed on respective faces. In yet still other cases, the patient-specific instrument may itself be tracked, and thus carry no fiducial.
[0084] Figure 12 shows an example computer system 1200. In one example, computer system 1200 may correspond to the surgical controller 418, the device that creates the patient-specific instrument, a tablet device within the surgical room, or any other system that implements any or all of the various methods discussed in this specification. The computer system 1200 may be connected (e.g., networked) to other computer systems in a local-area network (LAN), an intranet, and/or an extranet (e.g., the device cart 402 network), or at certain times the Internet (e.g., when not in use in a surgical procedure). The computer system 1200 may be a server, a personal computer (PC), a tablet computer, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, while only a single computer system is illustrated, the term "computer" shall also be taken to include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein.
[0085] The computer system 1200 includes a processing device 1202, a main memory 1204 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 1206 (e.g., flash memory, static random access memory (SRAM)), and a data storage device 1208, which communicate with each other via a bus 1210.
[0086] Processing device 1202 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly,
the processing device 1202 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 1202 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 1202 is configured to execute instructions for performing any of the operations and steps discussed herein. Once programmed with specific instructions, the processing device 1202, and thus the entire computer system 1200, becomes a special-purpose device, such as the surgical controller 418.
[0087] The computer system 1200 may further include a network interface device 1212 for communicating with any suitable network (e.g., the device cart 402 network). The computer system 1200 also may include a video display 1214 (e.g., display device 414), one or more input devices 1216 (e.g., a microphone, a keyboard, and/or a mouse), and one or more speakers 1218. In one illustrative example, the video display 1214 and the input device(s) 1216 may be combined into a single component or device (e.g., an LCD touch screen).
[0088] The data storage device 1208 may include a computer-readable storage medium 1220 on which the instructions 1222 (e.g., implementing any methods and any functions performed by any device and/or component depicted or described herein) embodying any one or more of the methodologies or functions described herein are stored. The instructions 1222 may also reside, completely or at least partially, within the main memory 1204 and/or within the processing device 1202 during execution thereof by the computer system 1200. As such, the main memory 1204 and the processing device 1202 also constitute computer-readable media. In certain cases, the instructions 1222 may further be transmitted or received over a network via the network interface device 1212.
[0089] While the computer-readable storage medium 1220 is shown in the illustrative examples to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of
instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
[0090] The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
Claims
1. A method of registration of a bone model, the method comprising:
obtaining, by a surgical controller, the bone model;
capturing, by the surgical controller, video images of a bone fiducial affixed to a bone, the capturing by way of a camera that is moving;
resolving, by the surgical controller, motion of the camera relative to the bone fiducial based on the video images;
identifying, by the surgical controller, a plurality of surface features on the bone in the video images based on the motion of the camera;
generating, by the surgical controller, a registration model of the bone relative to the bone fiducial; and
registering, by the surgical controller, the bone model relative to the location of the bone fiducial based on the registration model.
2. The method of claim 1 further comprising providing, by the surgical controller, navigation information regarding a procedure involving the bone.
3. The method of claim 1 wherein identifying the plurality of surface features on the bone further comprises illuminating the bone with a wavelength of photons that highlights optical texture.
4. The method of claim 3 wherein illuminating the bone with the wavelength of photons that highlights optical texture further comprises illuminating with light having wavelengths between and including 780 nanometers (nm) and 2500 nm.
5. The method of claim 3 wherein illuminating the bone with the wavelength of light that highlights optical texture further comprises illuminating the bone with the wavelength between and including 488 nanometers (nm) and 620 nm.
6. The method of claim 3 wherein illuminating the bone with the wavelength of light that highlights optical texture further comprises illuminating the bone with the wavelength of light for which a reflectivity is higher for collagen than for calcium.
7. The method of claim 1 further comprising, prior to identifying the plurality of surface features, applying a surface coloration to the bone.
8. The method of claim 7 wherein applying the surface coloration further comprises applying dye to the bone.
9. The method of claim 7 wherein applying the surface coloration further comprises applying a green fluorescent protein to the bone.
10. The method of claim 1 further comprising, prior to identifying the plurality of surface features, creating a texturized region on the bone.
11. The method of claim 10 wherein creating the texturized region further comprises creating the texturized region at a location that is non-load bearing.
12. The method of claim 11 wherein creating the texturized region at the location that is non-load bearing further comprises, for an anterior cruciate ligament procedure, creating the texturized region in an intercondylar notch.
13. The method of claim 10 wherein creating the texturized region further comprises performing a mechanical abrasion.
14. The method of claim 1 wherein identifying the plurality of surface features further comprises applying the video images and an indication of motion of the camera to a machine learning algorithm trained to recognize optical texture of bone.
15. The method of claim 1 wherein the camera is affixed to an endoscope that is rigid.
16. The method of claim 15 wherein the endoscope is monocular.
17. The method of claim 1 wherein obtaining the bone model further comprises: segmenting a plurality of non-invasive images taken preoperatively or intraoperatively; and generating the bone model based on the segmentation.
18. A surgical controller comprising:
a processor;
a memory communicatively coupled to the processor, the memory storing instructions that, when executed by the processor, cause the processor to:
receive a bone model;
capture video images of a bone fiducial affixed to a bone, the video images captured while a camera is moving;
resolve motion of the camera relative to the bone fiducial based on the video images;
identify a plurality of surface features on the bone in the video images based on the motion of the camera;
generate a registration model of the bone relative to the bone fiducial; and
register the bone model relative to the location of the bone fiducial based on the registration model.
19. The surgical controller of claim 18 wherein the instructions further cause the processor to provide navigation information regarding a procedure involving the bone.
20. The surgical controller of claim 18 wherein when the processor identifies the plurality of surface features on the bone, the instructions further cause the processor to command a light source to illuminate the bone with a wavelength of light that highlights optical texture.
21. The surgical controller of claim 20 wherein when the processor commands the light source to illuminate the bone with the wavelength of light that highlights optical texture, the instructions cause the processor to command the light source to illuminate with light having wavelengths between and including 780 nanometers (nm) and 2500 nm.
22. The surgical controller of claim 20 wherein when the processor commands the light source to illuminate the bone with a wavelength of light that highlights optical texture, the instructions cause the processor to command the light source to illuminate the bone with a wavelength of light for which a reflectivity is higher for calcium than for collagen.
23. The surgical controller of claim 20 wherein when the processor commands the light source to illuminate the bone with the wavelength of light that highlights optical texture, the instructions cause the processor to command the light source to illuminate the bone with the wavelength of light for which a reflectivity is higher for collagen than for calcium.
24. The surgical controller of claim 18 wherein when the processor identifies the plurality of surface features, the instructions cause the processor to apply the video images and an indication of motion of the camera to a machine learning algorithm trained to recognize optical texture of bone.
25. A method of registration of a bone model, the method comprising:
obtaining, by a surgical controller, the bone model;
capturing, by the surgical controller, video images of a bone fiducial affixed to a bone;
determining, by the surgical controller, locations of a distal tip of a medical instrument visible within the video images, the distal tip touching the bone in at least some of the frames of the video images, and the medical instrument does not have a fiducial;
generating, by the surgical controller, a registration model of the bone relative to the bone fiducial based on the locations of the distal tip of the medical instrument; and
registering, by the surgical controller, the bone model relative to the location of the bone fiducial based on the registration model.
26. The method of claim 25 further comprising providing, by the surgical controller, navigation information regarding a procedure involving the bone.
27. The method of claim 25 wherein determining the location of the distal tip of the medical instrument further comprises: segmenting, by the surgical controller, the medical instrument in the video images; estimating, by the surgical controller, a plurality of poses of the medical instrument within a respective plurality of frames of the video images; and determining the locations based on the plurality of poses.
28. The method of claim 27 wherein estimating the plurality of poses further comprises applying the video images to a machine learning algorithm trained to perform six-dimensional pose estimation.
29. The method of claim 28 wherein applying to the machine learning algorithm trained to perform six-dimensional pose estimation further comprises applying the video images to a convolutional neural network trained to perform the six-dimensional pose estimation.
30. The method of claim 25 wherein the medical instrument is at least one selected from a group comprising: a touch probe; an aimer; and a drill guide.
31. The method of claim 25 wherein the camera is affixed to an endoscope that is rigid.
32. The method of claim 31 wherein the endoscope is monocular.
33. The method of claim 25 wherein obtaining the bone model further comprises: segmenting a plurality of non-invasive images taken preoperatively or intraoperatively; and generating the bone model based on the segmentation.
34. A surgical controller comprising:
a processor;
a memory communicatively coupled to the processor, the memory storing instructions that, when executed by the processor, cause the processor to:
obtain a bone model;
capture video images of a bone fiducial affixed to a bone;
determine locations of a distal tip of a medical instrument visible within the video images, the distal tip touching the bone in at least some of the frames of the video images, and the medical instrument does not have a fiducial;
generate a registration model of the bone relative to the bone fiducial based on the locations of the distal tip of the medical instrument; and
register the bone model relative to the location of the bone fiducial based on the registration model.
35. The surgical controller of claim 34 wherein the instructions further cause the processor to provide navigation information regarding a procedure involving the bone.
36. The surgical controller of claim 34 wherein when the processor determines the location of the distal tip of the medical instrument, the instructions cause the processor to: segment the medical instrument in the video images; estimate a plurality of poses of the medical instrument within a respective plurality of frames of the video images; and determine the locations based on the plurality of poses.
37. The surgical controller of claim 36 wherein when the processor estimates the plurality of poses, the instructions cause the processor to apply the video images to a machine learning model trained to perform six-dimensional pose estimation.
38. The surgical controller of claim 37 wherein when the processor applies the video images to the machine learning model, the instructions cause the processor to apply the video images to a convolutional neural network trained to perform six-dimensional pose estimation.
39. The surgical controller of claim 34 wherein the medical instrument in the video images is at least one selected from a group comprising: a touch probe; an aimer; and a drill guide.
40. A method of registration of a bone model, the method comprising:
obtaining, by a device, the bone model;
generating, by the device, a patient-specific instrument that has a feature that is configured to couple to the bone represented in the bone model in only one orientation;
coupling the patient-specific instrument to the bone; and then
capturing, by a surgical controller, video images of the patient-specific instrument, the capturing by way of a camera; and
registering, by the surgical controller, the bone model based on the location of the patient-specific instrument.
41. The method of claim 40 further comprising guiding, by the surgical controller, a surgical task of a surgical procedure, the guiding based on the bone model after registration.
42. The method of claim 41 wherein guiding the surgical task further comprises at least one selected from a group comprising: changing a drill path entry point; changing a drill path exit point; aligning an aimer along a planned drill path; showing a location at which to cut the bone; and highlighting, within a version of the video images displayed on a display device, regions of the bone to avoid.
43. The method of claim 41 wherein the surgical procedure is an open procedure.
44. The method of claim 41 wherein the surgical procedure is an arthroscopic surgical procedure.
45. The method of claim 44 wherein the camera is affixed to an arthroscope that is rigid.
46. The method of claim 45 wherein the arthroscope is monocular.
47. The method of claim 44 wherein the arthroscopic surgical procedure is a surgery on one selected from a group consisting of: a knee; a hip; a shoulder; a wrist; and an ankle.
48. The method of claim 44 wherein the arthroscopic surgical procedure is at least one selected from a group consisting of: an anterior-cruciate ligament repair; reduction of femoroacetabular impingement; and repair of a rotator cuff.
49. The method of claim 40 wherein obtaining the bone model further comprises: segmenting a plurality of non-invasive images taken preoperatively or intraoperatively; and generating the bone model based on the segmentation.
50. A surgical controller comprising:
a processor;
a memory communicatively coupled to the processor, the memory storing instructions that, when executed by the processor, cause the processor to:
obtain a bone model;
capture video images of a fiducial of a patient-specific instrument, the capture by way of a camera; and
register the bone model based on the location of the patient-specific instrument.
51. The surgical controller of claim 50 wherein the instructions further cause the processor to guide a surgical task of a surgical procedure, the guidance based on the bone model after registration.
52. The surgical controller of claim 51 wherein the guidance is at least one selected from a group comprising: changing a drill path entry point; changing a drill path exit point; aligning an aimer along a planned drill path; showing a location at which to cut and/or resect the bone; reaming the bone by a certain depth along a certain direction; placing a device (suture, anchor, or other) at a certain location; placing a suture at a certain location; placing an anchor at a certain location; showing regions of the bone to touch and/or avoid; and identifying regions and/or landmarks of the anatomy.
53. The surgical controller of claim 52 wherein the guidance is provided by highlighting within a version of the video images displayed on a display device, which can be the arthroscopic display or a see-through display, or by communicating to a virtual reality device or a robotic tool.
54. The surgical controller of claim 51 wherein the guidance is at least one selected from a group comprising: changing a drill path entry point; changing a drill path exit point; aligning an aimer along a planned drill path; showing location at which to cut the bone; and highlighting, within a version of the video images displayed on a display device, regions of the bone to avoid.
55. The surgical controller of claim 52 wherein the surgical procedure is an open procedure.
56. The surgical controller of claim 52 wherein the surgical procedure is an arthroscopic surgical procedure.
57. The surgical controller of claim 52 wherein the surgical procedure is a surgery on one selected from a group consisting of: a knee; a hip; a shoulder; a wrist; and an ankle.
58. The surgical controller of claim 52 wherein the surgical procedure is at least one selected from a group consisting of: an anterior-cruciate ligament repair; reduction of femoroacetabular impingement; and repair of a rotator cuff.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363457908P | 2023-04-07 | 2023-04-07 | |
| US63/457,908 | 2023-04-07 | ||
| PCT/US2024/017487 WO2024211015A1 (en) | 2023-04-07 | 2024-02-27 | Methods and systems of registering a three-dimensional bone model |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| AU2024253887A1 (en) | 2025-09-11 |
Family
ID=92972605
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| AU2024253887A Pending AU2024253887A1 (en) | 2023-04-07 | 2024-02-27 | Methods and systems of registering a three-dimensional bone model |
Country Status (4)
| Country | Link |
|---|---|
| EP (1) | EP4687624A1 (en) |
| CN (1) | CN120916682A (en) |
| AU (1) | AU2024253887A1 (en) |
| WO (1) | WO2024211015A1 (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120814919B (en) * | 2025-09-17 | 2025-12-05 | 济南显微智能科技有限公司 | Three-dimensional external vision mirror operation system |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7468075B2 (en) * | 2001-05-25 | 2008-12-23 | Conformis, Inc. | Methods and compositions for articular repair |
| CN108778179A (en) * | 2016-02-26 | 2018-11-09 | 思想外科有限公司 | Method and system for instructing a user to position a robot |
| US10667867B2 (en) * | 2017-05-03 | 2020-06-02 | Stryker European Holdings I, Llc | Methods of pose estimation of three-dimensional bone models in surgical planning a total ankle replacement |
| JP7277967B2 (en) * | 2018-01-08 | 2023-05-19 | リヴァンナ メディカル、インク. | 3D imaging and modeling of ultrasound image data |
| WO2019245854A2 (en) * | 2018-06-19 | 2019-12-26 | Tornier, Inc. | Extended reality visualization of range of motion |
| WO2020092968A1 (en) * | 2018-11-02 | 2020-05-07 | The Trustees Of Dartmouth College | Method and apparatus to measure bone hemodynamics and discriminate healthy from diseased bone, and open reduction internal fixation implant with integrated optical sensors |
| WO2023034194A1 (en) * | 2021-08-31 | 2023-03-09 | Smith & Nephew, Inc. | Methods and systems of ligament repair |
2024
- 2024-02-27 AU AU2024253887A patent/AU2024253887A1/en active Pending
- 2024-02-27 EP EP24785510.9A patent/EP4687624A1/en active Pending
- 2024-02-27 CN CN202480018387.5A patent/CN120916682A/en active Pending
- 2024-02-27 WO PCT/US2024/017487 patent/WO2024211015A1/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| CN120916682A (en) | 2025-11-07 |
| EP4687624A1 (en) | 2026-02-11 |
| WO2024211015A1 (en) | 2024-10-10 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP3958779B1 (en) | System for computer guided surgery | |
| Burschka et al. | Scale-invariant registration of monocular endoscopic images to CT-scans for sinus surgery | |
| EP3273854B1 (en) | Systems for computer-aided surgery using intra-operative video acquired by a free moving camera | |
| US12475586B2 (en) | Systems and methods for generating three-dimensional measurements using endoscopic video data | |
| CN102711650B (en) | Image integration based registration and navigation for endoscopic surgery | |
| US20230190136A1 (en) | Systems and methods for computer-assisted shape measurements in video | |
| EP3716879A1 (en) | Motion compensation platform for image guided percutaneous access to bodily organs and structures | |
| US20240358224A1 (en) | Methods and systems of ligament repair | |
| Liu et al. | Automatic markerless registration and tracking of the bone for computer-assisted orthopaedic surgery | |
| Raposo et al. | Video-based computer navigation in knee arthroscopy for patient-specific ACL reconstruction | |
| Hu et al. | Markerless navigation system for orthopaedic knee surgery: A proof of concept study | |
| AU2024253887A1 (en) | Methods and systems of registering a three-dimensional bone model | |
| US20250031942A1 (en) | Methods and systems for intraoperatively selecting and displaying cross-sectional images | |
| AU2022401872B2 (en) | Bone reamer video based navigation | |
| US20250032189A1 (en) | Methods and systems for generating 3d models of existing bone tunnels for surgical planning | |
| Raposo et al. | Video-based computer aided arthroscopy for patient specific reconstruction of the anterior cruciate ligament | |
| US20250169890A1 (en) | Systems and methods for point and tool activation | |
| WO2025250376A1 (en) | Structured light for touchless 3d registration in video-based surgical navigation | |
| US20250322514A1 (en) | Automatic surgical marker motion detection using scene representations for view synthesis | |
| Bergmeier et al. | Workflow and simulation of image-to-physical registration of holes inside spongy bone | |
| US20240197410A1 (en) | Systems and methods for guiding drilled hole placement in endoscopic procedures | |
| US20250204991A1 (en) | Smart, video-based joint distractor positioning system | |
| WO2025240248A1 (en) | Burr tracking for surgical navigation procedures | |
| Constantinescu et al. | Constrained statistical modelling of knee flexion from multi-pose magnetic resonance imaging | |
| CN121463927A (en) | Methods and systems for registering internal and external coordinate systems for surgical guidance. |