WO2025027505A1 - System and method of patient registration
- Publication number
- WO2025027505A1 (PCT/IB2024/057327)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- registration
- image
- points
- generating
- patient
Classifications
- G06T7/33 — Image analysis; determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06V20/64 — Scenes; scene-specific elements; type of objects; three-dimensional objects
- G06V40/171 — Human faces; feature extraction; local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
- G06V40/172 — Human faces; classification, e.g. identification
- G06T2207/10081 — Image acquisition modality; tomographic images; computed x-ray tomography [CT]
- G06T2207/10088 — Image acquisition modality; tomographic images; magnetic resonance imaging [MRI]
- G06T2207/30201 — Subject of image; human being; face
- G06V2201/03 — Recognition of patterns in medical or anatomical images
Definitions
- The present disclosure relates to a surgical navigation system, and particularly to a method for registering a patient pre- and intra-operatively to image data.
- Guided surgery relies on knowing the position of a patient relative to equipment used for the surgery.
- Various forms include Image Guided Surgery (IGS).
- a registration process is performed to determine a transformation between reference frames to allow determining where the patient is relative to the equipment.
- Manual intervention from a surgeon may be needed to complete the registration process, which may be required before the start of a surgical procedure. Less manual intervention in the registration process may decrease the time to registration, improve reproducibility, and improve the ease of registration.
- A method includes acquiring clinical images of a subject, segmenting the clinical images to form a segmented clinical image, determining a clinical point cloud for the segmented clinical image, acquiring a registration image from a registration device, generating a registration point cloud for the subject, selecting at least a sub-portion of both the clinical point cloud and the registration point cloud, and registering a subject space to an image space based on the selected at least sub-portion of both the clinical point cloud and the registration point cloud.
- In another aspect of the disclosure, a system includes a registration device generating images of a subject and a controller segmenting a clinical image to form a segmented clinical image, determining a clinical point cloud for the segmented clinical image, acquiring a registration image, generating a registration point cloud for the subject, selecting at least a sub-portion of both the clinical point cloud and the registration point cloud, and registering a subject space to an image space based on the selected at least sub-portion of both the clinical point cloud and the registration point cloud.
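- The registration recited in these aspects can be pictured as a point-cloud alignment step. The sketch below is a minimal illustration only, assuming NumPy/SciPy and function names that do not come from the disclosure; the segmentation, cropping, and stitching details discussed later in the description are omitted.

```python
# Illustrative sketch: align a registration point cloud (subject space) to a
# clinical point cloud (image space) with a tiny point-to-point ICP loop.
# Helper names and parameters are assumptions, not the patented method.
import numpy as np
from scipy.spatial import cKDTree


def rigid_transform(src, dst):
    """Best-fit rotation R and translation t (Kabsch/SVD) so that R @ src + t ~ dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c


def crop_around(points, center, radius):
    """Select a sub-portion of a cloud within `radius` of a landmark (e.g., the nasion)."""
    return points[np.linalg.norm(points - center, axis=1) < radius]


def register_point_clouds(registration_pts, clinical_pts, iters=50):
    """Return a 4x4 transform taking subject space into clinical image space."""
    tree = cKDTree(clinical_pts)
    moved = registration_pts.copy()
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(moved)                       # closest clinical point per point
        R, t = rigid_transform(registration_pts, clinical_pts[idx])
        moved = registration_pts @ R.T + t
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

- In practice the clinical point cloud would come from the segmented clinical image and the registration point cloud from the registration device 18, with only cropped sub-portions of each passed to the alignment, as described below.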
- An image of the subject may be used for diagnosis and treatment of the subject. Such an image may be referred to as a treatment or clinical image.
- the clinical image may be based on image data acquired with an appropriate imaging system, as discussed herein.
- the clinical image may be a projection and/or reconstruction of the acquired image data.
- the clinical image may be acquired of the subject at any appropriate time such as prior to or during a procedure.
- the clinical image may define a clinical image space.
- a position of an instrument relative to a subject, who has been imaged, may be determined with a tracking system.
- the position of the instrument may be displayed relative to the acquired clinical image due to a registration of a subject space to the clinical image space.
- the registration may occur by determining the position of various points on the subject and correlating them to points in the clinical image space.
- the correlation may allow for a determination and generation of a transformation map between the physical or subject space of the subject and the clinical image space of the clinical image.
- the tracked position of an instrument may be displayed relative to the clinical image.
- a subject may be registered to the clinical image.
- the registration may be substantially automatic by a registration system.
- the registration system may acquire a registration image of the subject. Further, during a procedure an automatic or updated registration may occur due to a determination that the subject has moved and the registration system may again register the subject space to the clinical image space.
- FIG. 1 is an environmental view of a surgical navigation system or computer aided surgical system, according to various embodiments;
- FIG. 2A is a high-level block diagram of the registration controller of FIG. 1;
- FIG. 2B is a detailed block diagrammatic view of the registration device of FIG. 1;
- FIG. 2C is an environmental view of the navigation system with an imaging system;
- FIG. 3A is a detail view of a patient having a reference frame therein, according to various embodiments;
- FIG. 3B is a view of the patient of FIG. 3A relative to different scanning positions;
- FIG. 3C is a view of point clouds from the scans of FIG. 3B;
- FIG. 3D is an example of point cloud stitching from the points of FIG. 3C;
- FIGS. 3E and 3F are facial images having points and segments thereon;
- FIG. 3G is a representation of points determined by the scanning process;
- FIG. 3H is an example of filtered points that have been cropped around an area of the reference frame and the nasion, according to various embodiments;
- FIG. 3I is a high-level flowchart of a method for performing a registration process;
- FIG. 3J is a representation of an optical image of a patient having a reference frame therein, according to various embodiments;
- FIG. 3K is a representation of a segmented optical image of a patient having a reference frame therein, according to various embodiments;
- FIG. 4 is a flowchart of a method for training a trained classifier;
- FIG. 5 is a detail flowchart of at least a portion of a touchless registration process;
- FIG. 6 is a detail flowchart of the processing block of FIG. 5; and
- FIG. 7 is a detailed flowchart of the cropping block of FIG. 6.
- clinical image data can be acquired of a patient to assist in illustrating a location of an instrument relative to a patient.
- The clinical image space (i.e., defined by a coordinate system of an image generated or reconstructed from image data) is registered to the patient space (i.e., defined by a coordinate system of a physical space relative to the patient) to assist in this display and navigation.
- a navigation system 10 that can be used for various procedures is illustrated.
- the navigation system 10 can be used to track the location of a device 12, such as a pointer probe, relative to a patient 14 to assist in the implementation or performance of a surgical procedure.
- the navigation system 10 may be used to navigate or track other devices including: catheters, probes, needles, leads, electrodes, implants, etc.
- examples include ablation catheters, deep brain stimulation (DBS) leads or electrodes, micro-electrode (ME) leads or electrodes for recording, etc.
- the navigated device may be used in any region of the body.
- the navigation system 10 and the various devices may be used in any appropriate procedure, such as one that is generally minimally invasive, arthroscopic, percutaneous, stereotactic, or an open procedure.
- Although an exemplary navigation system 10 including an image registration system 16 is discussed herein, one skilled in the art will understand that the disclosure is merely for clarity of the present discussion and any appropriate imaging system, navigation system, patient specific data, and non-patient specific data can be used. It will be understood that the navigation system 10 can incorporate or be used with any appropriate preoperatively or intraoperatively acquired image data.
- the navigation system 10 includes the image registration system 16 used to acquire and compare pre- and intraoperative, including real-time image data of the patient 14.
- the system may register and/or maintain registration to intraoperatively acquired clinical image data.
- the system may register and/or maintain registration to preoperative clinical image data until the end of the procedure or until relative movement, such as skull movement, is detected. If movement is detected, such as with the distance sensors as discussed herein, the registration is maintained by allowing collection of additional registration data and/or is re-registered.
- the registration system 16 may, for example, use visible light, infrared light, electromagnetic energy, light detection and ranging (lidar), or thermal technologies emitted from and/or received by a registration device 18.
- the registration device 18 may include a camera system, such as a stereo camera system, including but not limited to the Intel® RealSense™ D415 or D430 depth camera sold by Intel Corporation, or the Einstar 3D scanner from Shining3D. According to various embodiments, therefore, the registration device 18 may transmit an image or multiple images, points (e.g., a point cloud), line segments with points as data signals, a mesh, etc. to a registration controller 20.
- the registration controller 20 may determine the position of the registration device 18 by way of a reference locator 26 as further described below in the examples set forth.
- the reference locator 26 may be an optional feature.
- the registration device 18 may be an optical device, an electromagnetic device, and/or a lidar device used to obtain one or more registration images or points for registration. That is, the registration device 18 may include a sensor for generating points corresponding to a registration image or a video stream. If a video is captured, registration images from each frame of the video stream may be captured therefrom.
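- As a rough illustration of how a depth-sensing registration device can yield such points from an image or video frame, the sketch below back-projects one depth frame through assumed pinhole intrinsics; the intrinsics, depth scale, and function name are assumptions, not details from the disclosure.

```python
# Illustrative sketch: convert a single depth frame into an Nx3 point cloud.
import numpy as np


def depth_frame_to_points(depth, fx, fy, cx, cy, depth_scale=0.001):
    """depth: HxW array of raw depth units; returns points in metres."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) * depth_scale
    valid = z > 0                                  # drop pixels with no depth reading
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```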
- the registration device 18 may be fixed relative to the subject and/or moveable relative to the subject. In either case, the registration device 18 may be handheld, robot mounted or gimbal mounted to obtain the registration image or points.
- the registration controller 20 ultimately determines or identifies data that corresponds to various physical features of the patient, distance, or positions of the features as described in detail below used for registration.
- this data may relate to markers and/or points on the patient.
- the registration image from the registration device 18 may include information or data, as discussed herein, that is useful for registration to the clinical image of the subject acquired with an imaging system, as discussed herein.
- the clinical image data of the subject may be pre- or intraoperative image data.
- the clinical image data may be used to generate the clinical image that is displayed.
- the longitudinal axis 14A of the patient 14 is substantially in line with the longitudinal axis 22 of the operating table 24.
- the upper body of the patient 14 is elevated but the longitudinal axes 14A and 22 are aligned.
- Fiducial marker or point data useful for registration obtained from the registration image of the registration system 16 can then be forwarded to the navigation computer and/or processor controller or workstation 28 having a display device 36 to display the clinical image data 38 and a user interface 40.
- Workstation 28 may also have an audible device 37 such as a speaker, buzzer, or vibration generator for generating an audible signal.
- the display device 36 and/or the audible device 37 may generate a visual and/or audible signal corresponding to a registration or a lack of registration of a patient space to the clinical image space, which is described in more detail below.
- the workstation 28 can also include or be connected to an image processor, a navigation processor, and a memory to hold instruction and data.
- the image data is not necessarily retained in the controller 20 but may also be directly transmitted to the workstation 28.
- processing for the navigation system and/or the registration system 16 and optimization can all be done with a single or multiple processors all of which may or may not be included in the workstation 28.
- the registration controller 20 may be incorporated into the workstation 28.
- the workstation 28 provides facilities for displaying the clinical image data 38 as the clinical image on the display device 36, saving, digitally manipulating, or printing a hard copy image of the received image data.
- the user interface 40 which may be a keyboard, mouse, touch pen, touch screen or other suitable device, allows a physician or user 42 to provide inputs to control the image registration system 16 or adjust the display settings of the display device 36.
- the workstation 28 may also direct registration device 18 to adjust the position relative to the patient 14.
- the navigation system 10 can further include a tracking system, such as, but not limited to, an electromagnetic (EM) tracking system 46 or an optical tracking system 46’. Either or both can be used alone or together in the navigation system 10.
- EM tracking system 46 can be understood to relate to any appropriate tracking system.
- the optical tracking system 46’ can include the StealthStation® Treon®, StealthStation® S7, StealthStation® S8, and the StealthStation® Tria®, all of which are sold by Medtronic Navigation, Inc.
- Other tracking system modalities may include acoustic, radiation, radar, infrared, etc.
- the EM tracking system 46 includes a coil array or EM localizer 48, such as a coil array and/or second coil array 50, a coil array controller 52, a navigation probe interface 54, the device 12 (e.g., instrument, tool, catheter, needle, pointer probe, or instruments, as discussed herein) and a dynamic reference frame (DRF) 44.
- An instrument tracking device 34a can also be associated with, such as fixed to, the device 12 or a guiding device for an instrument (or coupled to the registration device 18 as mentioned above).
- the dynamic reference frame 44 can include a dynamic reference frame holder 56 and a removable tracking device 34b. Alternatively, the dynamic reference frame 44 can include the tracking device 34b that can be formed integrally or separately from the DRF holder 56.
- the DRF 44 can be provided as separate pieces and can be positioned at any appropriate position on the anatomy.
- the tracking device 34b of the DRF 44 can be fixed to the skin of the patient 14 with an adhesive.
- the DRF 44 can be positioned near a leg, arm, etc. of the patient 14.
- the DRF 44 does not need to be provided with a head frame or require any specific base or holding portion.
- the tracking devices 26, 34, 34a, 34b or any tracking device as discussed herein, can include a sensor, a transmitter, or combinations thereof. Further, the tracking devices can be wired or wireless to provide a signal emitter or receiver within the navigation system.
- the tracking device can include an electromagnetic coil to sense a field produced by the EM localizing array formed by the EM localizers 48, 50 or reflectors that can reflect a signal to be received by the optical tracking system 46’.
- the tracking device can receive a signal, transmit a signal, or combinations thereof to provide information to the navigation system 10 to determine a location of the tracking device 34, 34a, 34b.
- the navigation system 10 can then determine the position of the instrument or tracking device to allow for navigation relative to the patient and patient space.
- the coil arrays or localizers 48, 50 may also be supplemented or replaced with a mobile localizer.
- the mobile localizer may be one such as that described in U.S. Patent Application Serial No. 10/941,782, filed Sept. 15, 2004, now U.S. Pat. App. Pub. No. 2005/0085720, entitled "METHOD AND APPARATUS FOR SURGICAL NAVIGATION", herein incorporated by reference.
- the localizer array can transmit signals that are received by the tracking devices 26, 34, 34a, 34b.
- the tracking devices 34, 34a, 34b can then transmit or receive signals based upon the transmitted or received signals from or to the arrays or localizers 48, 50.
- an isolator circuit or assembly may be included in a transmission line to interrupt a line carrying a signal or a voltage to the navigation probe interface 54.
- the isolator circuit included in the isolator box may be included in the navigation probe interface 54, the device 12, the dynamic reference frame 44, the transmission lines coupling the devices, or any other appropriate location.
- the isolator assembly is operable to isolate any of the instruments or patient coincidence instruments or portions that are in contact with the patient should an undesirable electrical surge or voltage take place.
- tracking systems 46, 46’ or parts of the tracking systems 46, 46’ may be incorporated into the registration system 16, including the workstation 28. Incorporating the tracking system 46, 46’ may provide an integrated imaging and tracking system. This can be particularly useful in creating a fiducial-less system without separate physical or implanted markers attached to the patient. Moreover, fiducial marker-less systems can include a tracking device and a contour determining system, including those discussed herein.
- the coil array 48 is controlled or driven by the coil array controller 52.
- the coil array controller 52 drives each coil in the coil array 48 in a time division multiplex or a frequency division multiplex manner.
- each coil may be driven separately at a distinct time or all of the coils may be driven simultaneously with each being driven by a different frequency.
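- The two drive schemes can be sketched as below, assuming idealized sinusoidal drive currents; the sample rate, frequencies, and amplitudes are illustrative values only.

```python
# Illustrative sketch: time-division vs. frequency-division coil drive signals.
import numpy as np

fs = 50_000                      # sample rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)   # a 10 ms window
n_coils = 3

# Time-division multiplexing: each coil is driven in its own time slot.
tdm = np.zeros((n_coils, t.size))
slot = t.size // n_coils
for i in range(n_coils):
    sl = slice(i * slot, (i + 1) * slot)
    tdm[i, sl] = np.sin(2 * np.pi * 1_000 * t[sl])

# Frequency-division multiplexing: all coils driven simultaneously,
# each at a distinct frequency so their fields can be separated later.
freqs = [1_000, 1_500, 2_000]    # Hz (assumed)
fdm = np.array([np.sin(2 * np.pi * f * t) for f in freqs])
```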
- Upon driving the coils in the coil arrays 48, 50 with the coil array controller 52, electromagnetic fields are generated within the patient 14 in the area where the medical procedure is being performed, which is again sometimes referred to as patient space.
- the electromagnetic fields generated in the patient space induce currents in the tracking device 34, 34a, 34b positioned on or in the device 12, DRF 44, etc.
- These induced signals from the tracking devices 34, 34a, 34b are delivered to the navigation probe interface 54 and subsequently forwarded to the coil array controller 52.
- the navigation probe interface 54 can also include amplifiers, filters and buffers to directly interface with the tracking device 34b attached to the device 12.
- the tracking device 34b, or any other appropriate portion, may employ a wireless communications channel, such as that disclosed in U.S. Patent No. 6,474,341, entitled “Surgical Communication Power System,” issued November 5, 2002, herein incorporated by reference, as opposed to being coupled directly to the navigation probe interface 54.
- Various portions of the navigation system 10, such as the device 12, the dynamic reference frame 44, are equipped with at least one, and generally multiple, EM or other tracking devices 34a, 34b, that may also be referred to as localization sensors.
- the EM tracking devices 34a, 34b can include one or more coils that are operable with the EM localizer arrays 48, 50.
- An alternative tracking device may include an optical device or devices 58 and may be used in addition to or in place of the electromagnetic tracking devices 34a, 34b.
- the optical tracking device may work with the optional optical tracking system 46’.
- the optical tracking device 58 may include marks or sticker type devices affixed to the skin of the patient.
- any appropriate tracking device can be used in the navigation system 10.
- the EM tracking device 34a on the device 12 can be in a handle or inserter that interconnects with an attachment and may assist in placing an implant or in driving a member.
- the device 12 can include a graspable or manipulable portion at a proximal end and the tracking device 34a may be fixed near the manipulable portion of the device 12 or at a distal working end, as discussed herein.
- the tracking device 34a can include an electromagnetic tracking sensor to sense the electromagnetic field generated by the coil array 48, 50 that can induce a current in the electromagnetic device 34a.
- the tracking device 34a can be driven (i.e., like the coil array above) and the coil arrays 48, 50 can receive a signal produced by the tracking device 34a.
- the dynamic reference frame 44 may be fixed to the head 60 of the patient 14 adjacent to the region being navigated so that any movement of the patient 14 is detected as relative motion between the coil arrays 48, 50 and the dynamic reference frame 44.
- the dynamic reference frame 44 can be interconnected with the patient in any appropriate manner, including those discussed herein. Relative motion is forwarded to the coil array controller 52, which updates the registration and maintains accurate navigation, further discussed herein. Alternatively, when motion is detected, re-registration may be performed.
- the dynamic reference frame 44 may include any appropriate tracking device. Therefore, the dynamic reference frame 44 may also be EM, optical, acoustic, etc. If the dynamic reference frame 44 is electromagnetic it can be configured as a pair of orthogonally oriented coils, each having the same center or may be configured in any other non-coaxial or co-axial coil configurations.
- the navigation system 10 operates as follows.
- the navigation system 10 creates a map of points, which may include all points, in the registration image data generated from the registration device 18 which can include external and internal portions that correspond to points in the patient’s anatomy in patient space.
- This map generated with the registration device 18 may then be transformed (e.g., a transformation map is made) to the clinical image data acquired for the subject 14, such as pre- or intraoperatively.
- the workstation 28 in combination with the coil array controller 52 uses the transformation map to identify the corresponding point on the clinical image data and/or atlas model, which is displayed on display 36. This identification is known as navigation or localization.
- An icon representing the localized point of the instruments is shown on the display 36 in an appropriate manner relative to the clinical image data which may be one or several two-dimensional image planes, as well as on three- and four-dimensional images and models.
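- A minimal sketch of this localization step is shown below, assuming a 4x4 homogeneous registration transform and isotropic voxel spacing, and ignoring image origin and orientation for brevity; the names are illustrative, not the navigation system's actual interface.

```python
# Illustrative sketch: map a tracked point from patient space into clinical
# image coordinates and a voxel index for display.
import numpy as np


def localize(tip_patient_mm, T_patient_to_image, voxel_spacing_mm):
    """Return (image-space position in mm, integer voxel index) for a tracked tip."""
    p = np.append(np.asarray(tip_patient_mm, dtype=float), 1.0)   # homogeneous point
    p_image = (T_patient_to_image @ p)[:3]
    voxel = np.round(p_image / np.asarray(voxel_spacing_mm)).astype(int)
    return p_image, voxel
```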
- To enable navigation, the navigation system 10 must be able to detect both the position of the patient’s anatomy and the position of the device 12 or an attachment member (e.g., tracking device 34a) attached to the device 12. Knowing the location of these two items allows the navigation system 10 to compute and display the position of the device 12 or any portion thereof in relation to the patient 14.
- the EM tracking system 46 is employed to track the device 12 and the anatomy of the patient 14 simultaneously.
- The EM tracking system 46, if it is using an electromagnetic tracking assembly, essentially works by positioning the coil arrays 48, 50 adjacent to the patient 14 to generate a magnetic field, which can be low energy, and is generally referred to as a navigation field. Because every point in the navigation field or patient space is associated with a unique field strength, the electromagnetic tracking system 46 can determine the position of the device 12 by measuring the field strength at the tracking device 34a location.
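- As a purely illustrative sketch of position-from-field-strength, the snippet below assumes an idealized inverse-cube magnitude model and recovers the sensor position by nonlinear least squares; the field model, coil layout, and gains are assumptions, not the tracking system's actual field characterization.

```python
# Illustrative sketch: estimate a tracking-device position from measured field
# magnitudes under an assumed, idealized field model.
import numpy as np
from scipy.optimize import least_squares

coil_centres = np.array([[0.0, 0.0, 0.0],
                         [0.3, 0.0, 0.0],
                         [0.0, 0.3, 0.0],
                         [0.0, 0.0, 0.3]])   # metres (assumed layout)
gains = np.full(len(coil_centres), 1e-6)     # per-coil gain (assumed)


def predicted_magnitudes(pos):
    r = np.linalg.norm(coil_centres - pos, axis=1)
    return gains / r**3                      # simple dipole-like falloff


def solve_position(measured, initial_guess=(0.1, 0.1, 0.1)):
    """measured: array of field magnitudes, one per localizer coil."""
    res = least_squares(lambda p: predicted_magnitudes(p) - np.asarray(measured),
                        x0=np.asarray(initial_guess, dtype=float))
    return res.x
```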
- the dynamic reference frame 44 is fixed to the patient 14 to identify the location of the patient in the navigation field.
- the electromagnetic tracking system 46 continuously computes or calculates the relative position of the dynamic reference frame 44 and the device 12 during localization and relates this spatial information to patient registration data to enable navigation of the device 12 within and/or relative to the patient 14.
- the points or portions that are selected to perform registration can be image points or point clouds of points from the registration image that are compared to points derived from clinical images.
- the points may be identified at any appropriate time, such as while registration is taking place.
- the points can include landmarks such as anatomical landmarks, measurements between landmarks, positioned members (e.g., fiducial markers, DRF(s)), and combinations thereof, as described in more detail below.
- the landmarks are identifiable in the clinical and registration image data and identifiable and accessible on the patient 14.
- the landmarks can include individual or distinct points on the patient 14 or contours (e.g., three-dimensional contours) defined by the patient 14.
- the registration controller 20 may be a separate computer or device or may be incorporated into the workstation 28.
- the registration controller 20 may access selected clinical image data, such as preprocedure clinical image data and may be in communication with a pre-procedure image system 110.
- the pre-procedure image system 110 may include, but is not limited to, a computed tomography (CT) system generating a CT image, an X-ray system generating an X-ray image, an O-arm® imaging system, a magnetic resonance imaging (MRI) system generating an MRI image, or an ultrasound system generating an ultrasound image. Examples of a pre-procedure clinical image system are set forth below in Fig. 2C.
- the pre-procedure image system 110 may obtain pre-procedure clinical images that are provided to the registration controller 20 for comparison with a registration image.
- the pre-procedure clinical image system 110 may provide a digital image file to the registration controller 20. It is understood, however, that the clinical image data may also be acquired during an operative procedure thus being intraoperative clinical image data or images.
- A CT image or an MRI image may act as the clinical image.
- video frames may also be used as the clinical image.
- discussion of clinical images or image data is understood to be any image data of the subject to which registration may be made.
- the registration controller 20 may also be in communication with a network 112.
- the network 112 such as the Internet, may have a wired or wireless network connection.
- Various types of data may be communicated through the network 112 including from a remote control 114 that may be used to operate the system.
- the remote control 114 may be a separate component or a component integrated into a system such as the workstation 28.
- the remote control 114 may include a system to initiate the registration process, acquire the pre-procedure image data, etc.
- the network 112 is in communication with a network interface 116.
- the network interface 116 allows communication from the registration controller 20 to the network 112 and ultimately to other components such as the workstation 28 or various other devices.
- the network interface 116 allows the network 112 to communicate with remote locations other than the operating room in which the navigation system 10 is located.
- the registration controller 20 may also be in communication with the registration device 18, the display device 36, and the audible device 37.
- the display device 36 and the audible device 37 are part of the workstation 28. However, separate display devices and audible devices may be provided especially when the registration controller 20 is located away from the workstation 28.
- the registration controller 20 may be a processor, such as a microprocessor-based controller, programmed to perform various functions.
- the blocks provided within the registration controller 20 may be separate processors or modules programmed to perform various functions.
- An actuator controller 120 is used to control actuators 152 of the registration device 18 when used, as set forth in Figure 2C. As described in more detail below, the registration device 18 may be scanned or moved using the actuators 152. The registration device 18 may also be fixed and/or moved manually, such as by the user. The registration device 18 may include a physical structure, as discussed herein, that may be moved relative to the subject 14. The actuators 152 may be motors or other systems that move the registration device 18. The actuator controller 120 may move the motors based upon sensor signals from the registration device 18 or the tracking device 34a, 166 that are received at the position sensor input 122. Sensors may also be individual sensors, combined sensors, and include any appropriate number.
- Sensors may include position sensors that may be distance sensors that sense the distance from the patient and encoders used to sense the position of the moving actuators.
- the distance sensors may be infrared distance sensors.
- the actuator controller 120 and the signals from the position sensors in the registration device 18 received at the position sensor input 122 are provided to a position controller 124.
- The position controller 124, based on the position sensor input 122, controls the actuators at the registration device 18 using the actuator controller 120.
- An illumination controller 130, if selected, is used to control a light source at the registration device 18.
- An image processor 132 receives registration imaging signals from the registration device 18.
- the registration device 18 generates registration image signals from a registration image sensor as will be described in more detail below.
- the registration device 18 may acquire or generate registration image signals that may be used to register the patient to the pre-procedure image data.
- the image processor 132 may include a trained classifier 132A.
- the trained classifier 132A is one system (e.g., trained machine learning system) used for identifying registration images or portions thereof (e.g., a nasion) acquired from the registration device 18.
- the trained classifier 132A may include weights W that are trained according to the procedures set forth below.
- the trained classifier 132A in general, has a plurality of weights W that are adjusted using numerous classified images or over time.
- the trained classifier 132A may be a convolutional neural network (CNN), an autoencoder algorithm, a recurrent neural network (RNN) algorithm, a transformer neural network algorithm, a generative adversarial network (GAN) algorithm, a linear regression algorithm, a support vector machine (SVM) algorithm, a random forest algorithm, a hidden Markov model, and/or any combination thereof.
- the at least one processor may be configured to utilize a combination of a CNN algorithm or transformer-based neural network algorithm in conjunction with an SVM algorithm.
- the trained classifier may also be machine learned or a software based algorithm that uses measures between points.
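- One possible form of the trained classifier 132A is sketched below as a small PyTorch CNN that scores image patches for classes such as a facial landmark (e.g., the nasion), the reference frame, or background; the architecture, patch size, and class layout are assumptions, since the disclosure only requires some trained classifier with adjustable weights W.

```python
# Illustrative sketch only: a small CNN patch classifier.
import torch
import torch.nn as nn


class LandmarkPatchClassifier(nn.Module):
    def __init__(self, n_classes=3):            # e.g., nasion / reference frame / other
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                        # x: (N, 1, 64, 64) patches
        return self.head(self.features(x))


# Example forward pass on a batch of random 64x64 patches.
logits = LandmarkPatchClassifier()(torch.randn(8, 1, 64, 64))
```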
- the image processor 132 may generate points, such as points in a point cloud, from the registration image from the registration device 18. These points may be used to identify various facial features, or points thereof, so that facial recognition is performed, and/or to identify various features of the reference frame, thus allowing the patient face and the reference frame to be delineated. For example, points that belong to only one of the patient face or the reference frame may be identified to segment or separate that portion from other portions of the image or of the point cloud.
- Various images or snapshot images obtained with the registration device 18 (e.g., partial or selected registration image frames) may be stitched together.
- One or more methods may be used for forming a stitched image, including random sampling or full stitching.
- The random sampling method chooses a snapshot as a reference snapshot. Then, a set of frames is separated into a number of different groups, G. A random snapshot from each group is chosen and registered to the reference snapshot, with each registration adding data to the original frame.
- The full stitching method registers adjacent snapshots. Adjacent snapshots are snapshots that were acquired sequentially in time. Starting from the first snapshot, adjacent snapshots are registered to each other until a goodness of fit measure crosses a certain threshold. A goodness of fit measure indicates how well two snapshots register together and is, for example, the percentage of points in the two point clouds that fit to within a selected threshold. Registration data is therefore formed from the recognized points within the image frames above. Ultimately, the output of the image processor 132 is communicated to a registration processor 134.
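- The full stitching strategy can be sketched as the loop below, here assuming Open3D point clouds and its ICP routine, with the ICP fitness fraction standing in for the goodness of fit measure; the threshold and correspondence distance are illustrative values, not parameters from the disclosure.

```python
# Illustrative sketch: register time-adjacent snapshots and accumulate them.
import numpy as np
import open3d as o3d


def full_stitch(snapshots, fitness_threshold=0.6, max_corr_dist=0.01):
    """snapshots: list of o3d.geometry.PointCloud ordered by acquisition time."""
    stitched = o3d.geometry.PointCloud(snapshots[0])
    pose = np.eye(4)                                     # current snapshot -> first snapshot
    for prev, curr in zip(snapshots[:-1], snapshots[1:]):
        result = o3d.pipelines.registration.registration_icp(
            curr, prev, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        if result.fitness < fitness_threshold:           # goodness of fit too poor: stop
            break
        pose = pose @ result.transformation              # chain curr -> prev -> ... -> first
        moved = o3d.geometry.PointCloud(curr)
        moved.transform(pose)
        stitched += moved
    return stitched
```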
- the registration processor 134 may perform a registration of the clinical image data, such as the pre-procedure image data in point or point cloud form, to the registration image. This allows the patient space, defined by the patient 14 and the physical space relative to the patient 14, to be registered.
- the registration device 18 may acquire an image that is referred to as the registration image of at least a portion of the patient 14.
- the registration image may be converted to points or a point cloud.
- a common point or fiducial point between the registration image and the clinical image may be used to perform the registration of the patient space to the clinical image space.
- a position of the points on the patient may be based upon a determination of a pose of the registration device 18 relative to the patient 14 when acquiring the registration image, so that the position of the points in the physical space defined by and relative to the patient 14 can be determined.
- the registration process may be similar to that discussed above and include a generation or determination of a transformation map between the position of the points determined of the patient 14 and of the similar or same points such as head points in the clinical image data.
- a user interface 142 coupled to the registration controller 20 is used for providing control signals to the various controllers and modules within the registration controller 20.
- Examples of the user interface 142 include a keyboard, a mouse or a touch screen.
- a timer 144 may also be included within the registration controller 20.
- the timer 144 may record the time of the images received from the registration device 18. This may allow the time at which a position of the registration device is determined, as discussed herein, to be correlated with the registration images for use in determining a position of the patient 14 for the registration process.
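- A simple way to realize this correlation is to match each recorded image timestamp to the nearest tracked pose sample, as sketched below; the arrays and function name are illustrative only.

```python
# Illustrative sketch: nearest-in-time pose sample for each registration image.
import numpy as np


def nearest_pose_indices(image_times, pose_times):
    """Return, for each image timestamp, the index of the closest pose sample."""
    image_times = np.asarray(image_times, dtype=float)
    pose_times = np.asarray(pose_times, dtype=float)
    idx = np.clip(np.searchsorted(pose_times, image_times), 1, len(pose_times) - 1)
    left, right = pose_times[idx - 1], pose_times[idx]
    return np.where(np.abs(image_times - left) <= np.abs(right - image_times),
                    idx - 1, idx)
```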
- the registration device 18 may have a plurality of position sensors 150.
- Each of the actuators 152 and/or arms 153 may have position sensor feedback from a position sensor associated therewith.
- the position sensors 150 generate a plurality of position signals that are ultimately communicated to the registration controller 20.
- Control signals from the actuator controller 120 are communicated as signals 120A to the actuators 152.
- the number and types of actuators 152 may vary depending upon the type of system.
- the actuators 152 may move a selected portion or the entire registration device 18.
- the actuators 152 may or may not include only the sensors and light sources depending upon the configuration.
- a distance sensor 156 may allow the registration device 18 to communicate a distance signal to the registration controller 20 to determine the position and provide feedback relative to the position to the position controller 124.
- Different types of distance sensors including radar, infrared light time of travel, or laser may be used.
- Another specific type of distance sensor is a passive infrared (PIR) sensor, which may be used to thermally sense the distance of the mask to the patient.
- A PIR sensor has a transmitter and a receiver. The transmitter of a PIR sensor may transmit light (e.g., omnidirectionally), and the receiver receives IR light reflected off of the patient. Consequently, each PIR sensor determines the distance.
- The distance sensor 156 calculates the distance to the head and provides an output indicating the distance by which a movable robotic arm would need to be adjusted to continue the registration procedure.
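- As an illustrative sketch of this feedback, the function below converts a measured distance into a bounded proportional adjustment for the arm; the target distance, gain, and step limit are assumed values, not parameters from the disclosure.

```python
# Illustrative sketch: proportional stand-off correction for a movable arm.
def arm_adjustment_mm(measured_distance_mm, target_distance_mm=300.0,
                      gain=0.5, max_step_mm=20.0):
    """Signed step the arm should take along its approach axis."""
    step = gain * (measured_distance_mm - target_distance_mm)
    return max(-max_step_mm, min(max_step_mm, step))
```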
- a plurality of light sources 160 may be used to illuminate the patient 14 and are controlled by the illumination controller within the registration controller 20.
- the plurality of light sources 160 may surround or be adjacent to an image sensor 154 and be controlled to obtain a useful image.
- the image sensor 154 may have parameters that may be set.
- An image parameter controller 155 may be used to adjust camera settings such as but not limited to aperture, shutter speed, ISO, quality (number of pixels) and white balance. Light sources, however, may not be required with the registration device and ambient light may be enough to capture the registration image(s).
- the registration device 18 may also include a transmitter/receiver 162.
- the transmitter/receiver 162 may be referred to as a transceiver 162.
- the transceiver 162 may be used for communicating signals to and from the registration controller 20.
- the transceiver 162 may, for example, communicate using Bluetooth® wireless communication or another type of wireless technology.
- the transceiver 162 may also be a wired device.
- the transceiver 162 communicates with a transceiver 162 located within the registration controller 20. Although direct lines are shown in Fig. 2A between the registration controller 20 and the registration device 18, the transceiver 162 may be used to communicate wirelessly or in wired fashion with the registration device 18.
- In Fig. 2C, a diagrammatic view illustrating an overview of a procedure room or arena is set forth, similar to Fig. 1.
- the primary difference between Fig. 1 and Fig. 2C is the inclusion of the imaging system 180 and further details of the registration device 18 disposed on movable arms 153 and moveable with actuators 152, as described above.
- the clinical image may be obtained with any appropriate imaging system, including the imaging system 180.
- the registration images or points thereof from the registration device 18 and the imaging system 180 are compared to obtain the registration.
- the procedure room may include a surgical suite having the navigation system 10 that can be used relative to the patient or subject 14.
- the navigation system 10 can be used to track the location of one or more tracking devices; the tracking devices may include an imaging system tracking device 163 to track the imaging system 180. Also, a tool tracking device 166, similar or identical to the tracking device 34a, may be included on a tool 168 similar or identical to the device 12.
- the tool 12, 168 may be any appropriate tool such as a drill, forceps, catheter, speculum or other tool operated by the user 42.
- the tool 168 may also include an implant, such as a stent, a spinal implant or orthopedic implant.
- the navigation system 10 may be used to navigate any type of instrument, implant, stent or delivery system, including: guide wires, arthroscopic systems, orthopedic implants, spinal implants, deep brain stimulation (DBS) probes, etc.
- the instruments may be used to navigate or map any region of the body.
- the navigation system 10 and the various instruments may be used in any appropriate procedure, such as one that is generally minimally invasive or an open procedure including cranial procedures.
- the imaging device 180 may be used to acquire pre-, intra-, or post-operative or real-time clinical image data of a subject, such as the patient 14. It will be understood, however, that any appropriate subject can be imaged, and any appropriate procedure may be performed relative to the subject.
- the imaging device 180 comprises an O-arm® imaging device sold by Medtronic Navigation, Inc. having a place of business in Louisville, Colorado, USA.
- the imaging device 180 may have a generally annular gantry housing 182 in which an image capturing portion is moveably placed.
- the image capturing portion may include an x-ray source or emission portion and an x-ray receiving or image receiving portion located generally, or as close as practically possible, 180 degrees from each other and mounted on a rotor relative to a track or rail.
- the image capturing portion can be operable to rotate 360 degrees during image acquisition.
- the image capturing portion may rotate around a central point or axis, allowing image data of the subject 14 to be acquired from multiple directions or in multiple planes.
- the imaging device 180 can include those disclosed in U.S. Pat. Nos. 7,188,998; 7,108,421; 7,106,825; 7,001,045; and 6,940,941; all of which are incorporated herein by reference, or any appropriate portions thereof.
- the imaging device 180 can utilize flat plate technology having a 1,720 by 1,024 pixel viewing area.
- the position of the imaging device 180, and/or portions therein such as the image capturing portion can be substantially precisely (e.g., within at least 2 centimeters, including at least one centimeter, and further including fractions thereof including at least 10 microns) known relative to any other portion of the imaging device 180.
- the imaging device 180 can know and recall precise coordinates relative to a fixed or selected coordinate system. This can allow the imaging system 180 to know its position relative to the patient 14 or other references.
- the precise knowledge of the position of the image capturing portion can be used in conjunction with a tracking system to determine the position of the image capturing portion and the image data relative to the tracked subject, such as the patient 14.
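- Conceptually, this amounts to chaining homogeneous transforms: the tracked gantry pose, the precisely known position of the image capturing portion within the gantry, and the tracked patient reference. The sketch below assumes 4x4 transforms with illustrative frame names; it is not the navigation system's actual interface.

```python
# Illustrative sketch: express acquired image data in the patient (DRF) frame.
import numpy as np


def image_to_patient(T_tracker_from_gantry, T_tracker_from_drf, T_gantry_from_image):
    """Compose tracked and calibrated transforms into image -> patient/DRF."""
    T_drf_from_tracker = np.linalg.inv(T_tracker_from_drf)
    return T_drf_from_tracker @ T_tracker_from_gantry @ T_gantry_from_image
```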
- the imaging device 180 can also be tracked with the tracking device 163.
- the clinical image data defining the clinical image space acquired of the patient 14 can, according to various embodiments, be inherently or automatically registered relative to an object space. This inherent or automatic registration may be in addition or alternative to the registration with the registration device 18 as disclosed herein.
- the object or patient space can be the space defined by a patient 14 in the navigation system 10.
- the automatic registration can be achieved by including the tracking device 163 on the imaging device 180 and/or the determinable precise location of the image capturing portion.
- imageable portions, virtual fiducial points and other features can also be used to allow for registration, automatic or otherwise. It will be understood, however, that clinical image data can be acquired of any subject which will define the patient or subject space.
- Patient space is an exemplary subject space. Registration allows for a transformation between patient space and clinical image space.
- the patient 14 may be fixed within navigation space defined by the navigation system 10 to allow for or maintain registration and/or the registration device 18 may be used to obtain and/or maintain registration.
- registration of the clinical image space to the patient space or subject space allows for navigation of the instrument 12, 168 with reference to the clinical image data.
- a position of the instrument 168 can be illustrated relative to clinical image data acquired of the patient 14 on the display device 36, such as superimposed as a graphical representation (e.g., icon) representing the tool 12, 168 in a selected manner, such as mimicking the tool 12, 168.
- Various tracking systems such as one including the optical localizer 48’ or the electromagnetic (EM) localizer 48 can be used to track the instrument 168.
- more than one tracking system can be used to track the instrument 168 in the navigation system 10.
- these can include an electromagnetic tracking (EM) system having the EM localizer 48 and/or an optical tracking system having the optical localizer 48’.
- EM electromagnetic tracking
- optical tracking system having the optical localizer 48’.
- Either or both of the tracking systems can be used to track selected tracking devices, as discussed herein. It will be understood, unless discussed otherwise, that a tracking device can be a portion trackable with a selected tracking system.
- a tracking device need not refer to the entire member or structure to which the tracking device is affixed or associated.
- the imaging device 180 may be an imaging device other than the O-arm® imaging device and may include in addition or alternatively a fluoroscopic C-arm.
- Other exemplary imaging devices may include fluoroscopes such as bi-plane fluoroscopic systems, ceiling mounted fluoroscopic systems, cath-lab fluoroscopic systems, fixed C-arm fluoroscopic systems, isocentric C-arm fluoroscopic systems, 3D fluoroscopic systems, etc.
- Other appropriate imaging devices can also include MRI, CT, ultrasound, etc.
- an imaging device controller 196 may control the imaging device 180 and can receive the image data generated at the image capturing portion and store the images for later use.
- the controller 196 can also control the rotation of the image capturing portion of the imaging device 180.
- the controller 196 need not be integral with the gantry housing 182 but may be separate therefrom.
- the controller 196 may be a portion of the navigation system 10 that may include a processing and/or control system including a processing unit or processing system 198.
- the controller 196 may be integral with the gantry housing 182 and may include a second and separate processor, such as that in a portable computer.
- the patient 14 can be fixed onto the operating table 24.
- the table 24 can be an Axis Jackson® operating table sold by OSI, a subsidiary of Mizuho Ikakogyo Co., Ltd., having a place of business in Tokyo, Japan, or Mizuho Orthopedic Systems, Inc. having a place of business in California, USA.
- Patient positioning devices can be used with the table and include a Mayfield® clamp or those set forth in commonly assigned U.S. Pat. Appl. No. 10/405,068 entitled “An Integrated Electromagnetic Navigation and Patient Positioning Device”, filed April 1, 2003, which is hereby incorporated by reference.
- the position of the patient 14 relative to the imaging device 180 can be determined by the navigation system 10.
- the tracking device 163 can be used to track and locate at least a portion of the imaging device 180, for example the gantry housing 182.
- the patient 14 can be tracked with the dynamic reference frame 44, as discussed in Fig. 1, which may be invasive and/or not invasive or minimally invasive. That is, a patient tracking device or dynamic reference device 44 may be used to receive or generate signals that are communicated to an interface portion 99.
- the position of the patient 14 relative to the imaging device 180 and relative to the registration device 18 of Fig. 1 can be determined initially and when movement, such as skull movement is detected.
- the location of the imaging portion can be determined relative to the housing 182 due to its precise position on the rail within the housing 182, substantially inflexible rotor, etc.
- the imaging device 180 can include a known positional accuracy and repeatability of within 10 microns, for example, if the imaging device 180 is an O-Arm® imaging device sold by Medtronic Navigation, Inc. having a place of business in Louisville, Colorado. Precise positioning of the imaging portion is further described in U.S. Patent Nos.
- the imaging device 180 can generate and/or emit x-rays from the x-ray source that propagate through the patient 14 and are received by the x-ray imaging receiving portion.
- the image capturing portion generates image data representing the intensities of the received x-rays.
- the image capturing portion can include an image intensifier that first converts the x-rays to visible light and a camera (e.g., a charge-coupled device) that converts the visible light into digital image data.
- the image capturing portion may also be a digital device that converts x-rays directly to digital image data for forming images, thus potentially avoiding distortion introduced by first converting to visible light.
- Two dimensional and/or three-dimensional fluoroscopic image data that may be taken by the imaging device 180 can be captured and stored in the imaging device controller 196. Multiple image data taken by the imaging device 180 may also be captured and assembled to provide a larger view or image of a whole region of a patient 14, as opposed to being directed to only a portion of a region of the patient 14. For example, multiple image data of the patient’s 14 spine may be appended together to provide a full view or complete set of image data of the spine. Any one or more of these types of image data may be clinical image data.
- the clinical image data can then be forwarded from the image device controller 196 to the navigation computer and/or processor system 198 that can be a part of a controller or workstation 28. It will also be understood that the clinical image data is not necessarily first retained in the controller 196, but may also be directly transmitted to the workstation 28.
- the workstation 28 can provide facilities for displaying the image data as an image 38 on the display 36, saving, digitally manipulating, or printing a hard copy image of the received image data.
- the user interface 40 allows the user 42 to provide inputs to control the imaging device 180, via the image device controller 196, or adjust the display settings of the display 36.
- the workstation 28 may also direct the image device controller 196 to adjust the image capturing portion of the imaging device 180 to obtain various two- dimensional images along different planes in order to generate representative two- dimensional and three-dimensional image data.
- the navigation system 10 can further include the tracking system including either or both of the electromagnetic (EM) localizer 48 and/or the optical localizer 48’.
- the tracking systems may include a controller and interface portion 99.
- the interface portion 99 can be connected to the processor system 198, which can include a processor included within a computer.
- the EM tracking system may include the STEALTHSTATION® AXIEM™ Navigation System, sold by Medtronic Navigation, Inc. having a place of business in Louisville, Colorado; or can be the EM tracking system described in U.S. Patent No. 7,751,865 issued July 6, 2010, and entitled "METHOD AND APPARATUS FOR SURGICAL NAVIGATION"; U.S. Patent No.
- the navigation system 10 may also be or include any appropriate tracking system, including a STEALTHSTATION® TREON® or S7™ tracking system having an optical localizer, which may be used as the optical localizer 48’, and sold by Medtronic Navigation, Inc. of Louisville, Colorado.
- Other tracking systems include acoustic, radiation, radar, etc. The tracking systems can be used according to generally known or described techniques in the above incorporated references. Details will not be included herein except to clarify selected operation of the subject disclosure.
- Wired or physical connections can interconnect the tracking systems 46, 46’, imaging device 180, etc.
- various portions, such as the instrument 168, may employ a wireless communications channel, such as that disclosed in U.S. Patent No. 6,474,341, entitled “Surgical Communication Power System,” issued November 5, 2002, herein incorporated by reference, as opposed to being coupled directly to the processor system 198.
- the tracking devices 163, 166 can generate a field and/or signal that is sensed by the tracking system(s).
- the instrument can also include more than one type or modality of tracking device 166, such as an EM tracking device and/or an optical tracking device.
- the instrument 68 can include a graspable or manipulable portion at a proximal end and the tracking devices may be fixed near the manipulable portion of the instrument 68.
- the navigation system 10 may be a hybrid system that includes components from various tracking systems.
- With reference to Figs. 3A-3D, one example of a process for acquiring registration images is illustrated.
- the patient 14, and in this case the head of the patient, is illustrated with a reference frame 44 affixed thereto.
- the reference frame 44 is shown alone in Fig. 3A and Fig. 3B.
- the registration device 18 is shown in various positions.
- the registration device 18 may be or at least include a point gathering and/or determining system, for example, a lidar, stereo camera, or depth camera.
- the registration device 18 allows the generation of a mesh, point cloud and/or image frames of point clouds that may be stitched together.
- In the first position, Scan 1, the registration device generates a plurality of points 310 corresponding to different points on the patient 14. A second set of points is generated by the registration device 18 as Scan n. Although two scans are illustrated in Fig. 3B, a plurality of scans at a plurality of angles, at a plurality of speeds, and with other scan parameters may be used.
- the registration device 18 is scanned along a path 314 to obtain a plurality of images.
- the points from the registration device, such as a handheld camera, are used to obtain stitched-together registration images as mentioned above.
- Each of the scan frames or positions may allow for generating one or more points, such as via processing of a depth image or a lidar scan.
- Each point may have position data, such as x,y,z data, and additional data such as normals and color information.
- Each frame may have one or more points that match or overlay at least one point in another frame, i.e., have the same x,y,z data. Thus, different frames may be stitched together to form a registration image.
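- As an illustrative, non-limiting sketch (the field names, tolerance, and use of Python/NumPy are assumptions and not part of the disclosure), a scan frame carrying x, y, z positions, normals, and color may be represented and merged with an overlapping frame as follows:

```python
import numpy as np

# Illustrative frame layout (the names are assumptions, not from the disclosure):
#   xyz     -> (N, 3) point positions in the scanner frame
#   normals -> (N, 3) unit surface normals, one per point
#   rgb     -> (N, 3) color sampled at each point
def make_frame(xyz, normals, rgb):
    return {"xyz": np.asarray(xyz, float),
            "normals": np.asarray(normals, float),
            "rgb": np.asarray(rgb, np.uint8)}

def merge_frames(a, b, tol=1e-6):
    """Append frame b to frame a, dropping b points whose positions already
    occur in a (within tol); the shared points are what tie the frames together."""
    keys = {tuple(np.round(p / tol).astype(np.int64)) for p in a["xyz"]}
    keep = np.array([tuple(np.round(p / tol).astype(np.int64)) not in keys
                     for p in b["xyz"]], dtype=bool)
    return {k: np.concatenate([a[k], b[k][keep]]) for k in a}
```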
- facial recognition may be used to obtain key features of the image with a machine learning based classification technique. That is, key points from the image may be recognized using facial recognition to segment the image, or a Convolutional Neural Network (CNN) or transformer-based neural network may be trained to identify and segment points belonging to the reference frame and the patient head.
- the first plurality of points 310 and the second plurality of points 312 are communicated to the image processor 132.
- the image processor 132 stitches the plurality of images of points together to form a point cloud 320 illustrated in Fig. 3D.
- the plurality of points 310, 312 are point clouds.
- the point clouds from different scans are merged (e.g., stitched) in Fig. 3D to form registration data of a final registration point cloud.
- the stitching may be performed according to various embodiments, as discussed above. Regardless, the stitching allows a generation of a point cloud, as illustrated in Fig. 3D that includes a selected volume or area, such as of the face 332, to perform a registration.
- the registration device 18 allows a new registration image to be formed (e.g., acquired) every 30 milliseconds as the registration device is moved along the scan path 314.
- the image capture may be referred to as a snapshot or frames.
- Each of the points may have a “normal” associated therewith, thus allowing each of the points in the point cloud to have a position (i.e., x,y,z data) and an orientation relative to a surface via the normal.
- the stitching may choose a random snapshot as a reference snapshot.
- the set of snapshots may be separated into a number of different groups, G, wherein the number G is any appropriate number and may be based on a total number of snapshots acquired, volume imaged, etc.
- Each snapshot in the groups G may be added to the original reference frame. Additionally or alternatively, according to various embodiments, adjacent snapshots may be stitched together in sequence when the snapshots were acquired sequentially in time and space around the patient. Thus, each of the snapshots is stitched to a previous snapshot and may be mathematically adjusted to fit together.
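- A minimal sketch of sequential stitching, assuming a hypothetical pairwise_register helper (for example, an ICP routine such as the one sketched later) that returns a 4x4 transform between adjacent snapshots; transforms are composed so every snapshot is expressed in the reference snapshot's coordinates:

```python
import numpy as np

def stitch_sequential(frames, pairwise_register):
    """Stitch time-ordered snapshots; frames[0] is the reference snapshot.
    pairwise_register(src_xyz, dst_xyz) -> 4x4 transform mapping src into dst
    (a hypothetical helper, e.g. an ICP routine). frames are (N, 3) arrays."""
    to_ref = np.eye(4)                        # accumulated transform into the reference snapshot
    stitched = [np.asarray(frames[0], float)]
    for prev, cur in zip(frames[:-1], frames[1:]):
        cur = np.asarray(cur, float)
        T = pairwise_register(cur, prev)      # current snapshot -> previous snapshot
        to_ref = to_ref @ T                   # current snapshot -> reference, by composition
        homog = np.c_[cur, np.ones(len(cur))]
        stitched.append((homog @ to_ref.T)[:, :3])
    return np.vstack(stitched)
```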
- registration features or landmarks such as various points 320 may be manually or automatically identified by the comparison module 136 in an image on the patient 14.
- a user may move a tracked probe or member to track or locate a pose of various points on the patient and/or in an image (e.g., move a pointer with a mouse).
- instructions may be executed by a processor, such as in the comparison module 136, to identify the points 320. This may be based on trained machine learning systems, selected algorithms, image segmentation, including those discussed herein.
- the features or points 320 may include anthropometric locations such as head points, for example the edges of the eyes (eye points), the position of the ear lobes (ear points), the chin (chin points), the mouth (mouth points), the nose (nose points), and various other locations.
- segments 322 between points 320 may be used for comparison with measurements in the clinical image data.
- the points 320 and/or the distances 322 may be used to determine a registration.
- Points may include a predetermined number of landmarks features such as the nasion 344A, the eyes 344B, the brow 344C and the tip of the nose 344D.
- the points in the pre-acquired image may be compared to the points 320, the distances 322, and/or the distance of the image sensor 154 to make a registration.
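- A minimal sketch of comparing landmark segments, assuming hypothetical landmark dictionaries and an illustrative tolerance value; distances 322 between points 320 are computed in each space and checked for agreement:

```python
import numpy as np
from itertools import combinations

def segment_lengths(landmarks):
    """Distances (segments 322) between every pair of landmark points 320;
    `landmarks` maps a name (e.g. "nasion") to an (x, y, z) position."""
    return {(a, b): float(np.linalg.norm(np.subtract(landmarks[a], landmarks[b])))
            for a, b in combinations(sorted(landmarks), 2)}

def segments_agree(reg_landmarks, clin_landmarks, tol_mm=2.0):
    """Compare the same segments measured on the patient and in the clinical
    image; the 2.0 mm tolerance is illustrative only."""
    reg = segment_lengths(reg_landmarks)
    clin = segment_lengths(clin_landmarks)
    return all(abs(reg[k] - clin[k]) <= tol_mm for k in reg.keys() & clin.keys())
```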
- Fig. 3E represents the entire face and possible landmark features, such as defined by points, therein being used in the registration process.
- the comparison module 136 may generate a signal indicative of whether or not a registration is possible and/or is made between the patient space and the image space.
- the user 42 may block a portion of points or measurements from view of the camera or image sensor 154. For example, a certain number of points may be identified in the clinical image data and the same or all of the points may be identified on the patient 14 in the registered image data from the image sensor 154. In some configurations, only points a predetermined distance from a reference frame and only a non-moving portion (sub-portion) of a face may be used. This will be described in more detail in Fig.
- the points in the registration image and their physical position relative to the patient 14 in patient space may be determined by a distance measured by the distance sensor 156 and/or with the registration image data. Further, the position relative to the patient 14 may be further determined due to movement of the motors associated with the actuators 152 that may be moved by electric signals that allow the position controller 124 of Fig. 2A to control the precise position of the registration device 18. For example, the motor may be rotated a predetermined number of rotations based upon feedback provided by a position signal from the position sensors 150, which may be a potentiometer, an encoder, or part of the motor as a servo motor.
- the scan may take place by scanning with the registration device 18 the face 332 of the patient that includes one or more landmarks 336 by moving the registration device 18 relative to the face, such as around and/or along a longitudinal axis 334 (e.g., Fig. 3B).
- landmark points 336 are recorded by the registration image.
- the registration image, including the landmark points 336, is provided to the comparison module 136, where it is compared to the clinical image and/or the points determined therefrom.
- the registration device 18 may use any appropriate wavelength, including one or more wavelengths, such as infrared light, visible light, or both, to obtain the image of the subject that may include positions of anthropometric points that are unique to each patient 14, such as the patient face.
- the anthropometric points may be the landmark points 336 and may be determined in any appropriate manner relative to the face 332, such as from the top to the bottom of the head of the patient 14. Scanning with the registration device 18 may take place in less than 10 seconds and may work at various distances from the head, such as around one meter.
- not all of the landmark points 336 may be considered by the registration system 16 by cropping the registration image and/or the clinical image strategically.
- the reference frame 44 may be mounted at a fixed position within a boundary 340 and points related to the reference frame 44 may be used for the registration. This cropping of points from an image, such as a registration image, may allow for faster or more efficient registration.
- the boundary 340 may be a circle or other shape a predetermined distance Di from the reference frame 44. The distance Di may be chosen so that non-overlapping areas are considered, as well as positions in areas that do not potentially move.
- a face area 342 may be defined as a boundary for non-moving positions.
- the area 342 may extend a predetermined distance from the nasion 344A (nasion point).
- the area 342 may be a circle, oval or other shape to include a predetermined number of features such as the nasion 344A, the eyes 344B (eye point(s)), the brow 344C (brow point(s)) and the tip of the nose 344D (nose point).
- the same features and same boundaries may be used for images from both the registration device 18 and from the pre-operative images.
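- A minimal sketch of radius-based cropping, assuming NumPy arrays of points and an illustrative radius value; the same routine could implement the boundary 340 around the reference frame 44 or the face area 342 around the nasion 344A:

```python
import numpy as np

def crop_by_radius(points, center, radius):
    """Keep only the points (N, 3) within `radius` of `center`, e.g. the
    boundary 340 a distance Di around the reference frame 44 or the face
    area 342 around the nasion 344A."""
    d = np.linalg.norm(np.asarray(points, float) - np.asarray(center, float), axis=1)
    return np.asarray(points)[d <= radius]

# Example (values are illustrative only): keep points within 60 mm of the
# detected reference frame point.
# cropped = crop_by_radius(registration_points, frame_point, radius=60.0)
```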
- the reference frame 44 may be used for registration.
- frame initialization may occur.
- the frame initialization input may be a single point on the reference frame that lies on a flat plane, with a radius that may be determined based on the reference frame size, together with the normal of the surface at that point; this point may be called a Finit point.
- the Finit point may be automatically and/or manually detected in the scan and is predefined in a model of the reference frame.
- a sphere around the Finit point is cropped from the rest of the scan mesh or points.
- the size of the sphere can be predetermined and/or manually selected based on the reference frame size and/or training of a machine learning system.
- the result is a segmented registration image having a sphere of point cloud data of a selected radius centered on the Finit point.
- the selected radius, again, may be based on the size of the reference frame 44.
- the segmented reference frame scan and the model of the reference frame are aligned so the Finit points in each space match. Further, a normal of the scan is matched to a normal of the model of the reference frame (that is, the normal perpendicular to the CAD model’s surface).
- the registration image of the segmented reference frame is registered to the model using an iterative closest point (point-to-plane) algorithm.
- the segmented reference frame may then be rotated 36 degrees and registered again to the model. This is repeated a number of times, such as 10, with the best fit registration chosen as the final orientation for registration.
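- A minimal sketch of the rotate-and-re-register search, assuming the Open3D library (the library choice, distance parameters, and units are assumptions, not part of the disclosure); the segmented scan is rotated about the Finit normal in 36 degree steps, point-to-plane ICP is run at each step, and the best fit is retained:

```python
import numpy as np
import open3d as o3d

def register_frame_scan(scan_xyz, model_xyz, finit, normal, steps=10, max_dist=2.0):
    """Rotate the segmented reference frame scan about the Finit normal in
    360/steps-degree increments (36 degrees for 10 steps), run point-to-plane
    ICP at each step, and keep the best fit. max_dist and the normal-estimation
    radius are illustrative values only."""
    finit = np.asarray(finit, float)
    axis = np.asarray(normal, float)
    axis = axis / np.linalg.norm(axis)            # unit rotation axis

    model = o3d.geometry.PointCloud()
    model.points = o3d.utility.Vector3dVector(np.asarray(model_xyz, float))
    model.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=5.0, max_nn=30))

    scan = o3d.geometry.PointCloud()
    scan.points = o3d.utility.Vector3dVector(np.asarray(scan_xyz, float))

    best = None
    for k in range(steps):
        angle = 2.0 * np.pi * k / steps
        R = o3d.geometry.get_rotation_matrix_from_axis_angle(angle * axis)
        init = np.eye(4)
        init[:3, :3] = R
        init[:3, 3] = finit - R @ finit           # rotate about the Finit point
        result = o3d.pipelines.registration.registration_icp(
            scan, model, max_dist, init,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        if best is None or result.fitness > best.fitness:
            best = result
    return best.transformation
```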
- a head or face registration initialization may occur.
- the head initialization may take as an input a location of the nasion in the clinical image and in the registration image.
- the clinical image nasion may be determined automatically by selected known segmentation techniques such as facial landmark detection algorithms, CNN-based point cloud segmentation, or a transformer-based neural network.
- the registration image nasion point is determined automatically by a selected face detection algorithm based on CNNs, transformer-based neural networks, or deep learning methods such as YOLO (you only look once) algorithms, feature detection and matching on edges, Haar wavelets, etc.
- Neural networks and deep learning may be a technique used in segmentation.
- a curvature matching system such as a machine learning trained system (e.g., a neural network) may be used to identify the nasion.
- a selected curvature measured over a selected distance may be identified in the registration image as the nasion.
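- A minimal sketch of a curvature-based nasion candidate selection, assuming SciPy/NumPy and a neighborhood size chosen for illustration; the PCA “surface variation” of each point’s neighborhood is used as a rough curvature proxy, and the highest value within a candidate mid-face region is taken as the nasion. In practice such a proxy would be combined with the face detection methods described above:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=30):
    """Per-point PCA 'surface variation' (smallest eigenvalue over the sum)
    computed over the k nearest neighbours; a rough curvature proxy."""
    points = np.asarray(points, float)
    _, idx = cKDTree(points).query(points, k=k)
    var = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        centered = points[nbrs] - points[nbrs].mean(axis=0)
        eigvals = np.linalg.eigvalsh(centered.T @ centered)   # ascending order
        var[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return var

def pick_nasion(points, candidate_mask, k=30):
    """Among candidate points (e.g. a mid-face region), return the point with
    the largest curvature proxy as the nasion estimate."""
    var = surface_variation(points, k)
    cand = np.flatnonzero(candidate_mask)
    return np.asarray(points)[cand[np.argmax(var[cand])]]
```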
- Segmentation may take place in two stages. Stage (1) may provide segmentation of a whole skin surface or anatomical mask (e.g., "face"), while stage (2) identifies certain regions or selected point(s) within the identified face (e.g., "nose", "nasion point"). A sphere of a selected radius is cropped around the selected point, such as the nasion, and may be called a Hinit point. This cropping or segmentation may remove excess noise and the reference frame. A selected process, such as Coherent Point Drift (CPD), is used to register the registration image of the head and the clinical image of the head segmentation together. Similar to the reference frame initialization process, the registration image may be rotated 36 degrees around the Hinit point, registering at each iteration, and the best match is chosen for the registration.
- a high-level method 346 is illustrated in a flow chart for operating the system described above.
- a clinical image is obtained using one of the systems described above.
- the clinical image data obtained in block 348 may have one or more landmark points identified therein.
- the clinical image data may identify a whole skin anatomical mask portion or segment in an optional first stage.
- the skin surface or anatomical mask may then be segmented therein. Landmarks may be identified thereon.
- the registration device 18 may be initiated, for example, by a user interface to start the system.
- the user interface may be a remote control from which signals are communicated through the network 112 illustrated in Fig. 2A.
- a direct wire or wireless communication may be used to initiate operation of the system through the registration controller 20 through the user interface 142 and/or the workstation 28.
- the registration device may be positioned, such as selectively aligned or scanned relative to a portion of the patient.
- the motors of the actuators are moved to move the registration device 18.
- the registration device 18 may also be moved manually by a user.
- registration images or data of the patient are obtained.
- the registration data may be processed to generate a point cloud of data.
- registration image data of certain landmark features such as the position of bones, the nasion, physical features (e.g., corner of an eye), and distances between physical features (e.g., distance between two corners of two eyes) are obtained. Facial features and their relationships are illustrated in Figs. 3A to 3G.
- the registration images may, optionally, be stored in the memory 138.
- a comparison is made in block 362 between registration data corresponding to the registration features, such as the registration distances in the registration images, and the same features in the clinical images.
- the comparison determines whether the registration features of the clinical and registration images correlate and a registration of the patient space of the patient 14 and the clinical image space is possible and/or has occurred.
- block 366 may generate an audible and/or visual indicator by way of the display device 36 or the audible device 37 to indicate or provide an indication to the user 42 that correlation has been successful.
- the correlation may be used to generate the transformation map, as discussed above, to allow a registration of the patient space to the image space. This registration may be output in block 370.
- a second audible and/or visual indicator 368 may be used to indicate to the user 42 that the correlation is not successful. This may result in corrective measures such as moving the patient or the registration device 18 to acquire a second registration image.
- the error of the procedure may be checked and if the error is not within the defined threshold, reregistration may be performed. It is understood that the indication signals are optional in the registration process.
- the process illustrated in Figs. 3A-3H allows complete replacement of manual registration with the automatic registration described herein using an electromagnetic tracking system or any appropriate tracking system. That is, the registration except for the initiation process, such as acquiring the registration images by moving the registration device, may be automated. An increase in speed and possibly an elevated level of accuracy of registration may be obtained along with allowing the formerly manual registration user to perform other tasks. In one exemplary embodiment, the registration was significantly faster than a manual process. The system may allow the registration to be performed automatically, is easily controlled remotely, and preventive maintenance is relatively easy on such systems.
- the image 380A may be generated with the registration device and be a complete or whole registration image.
- the image 380A may be a mesh from which points are generated and/or be defined by the points of a point cloud.
- the registration device may generate a mesh and the registration device output may be a 3D scanner output (mesh) that is converted to the point cloud.
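- A minimal sketch of converting a scanner mesh to a point cloud by uniform surface sampling, assuming the Open3D library; the file name and point count are illustrative only:

```python
import open3d as o3d

# Load the scanner's mesh output and sample its surface to obtain a point
# cloud; the file name and the 50,000-point count are illustrative only.
mesh = o3d.io.read_triangle_mesh("registration_scan.ply")
mesh.compute_vertex_normals()          # normals are carried into the sampled points
pcd = mesh.sample_points_uniformly(number_of_points=50_000)
```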
- a selected system such as CNN may be trained, as discussed herein, to identify and segment points belonging to head and patient reference frame.
- the segmented portion may include only the detected reference frame points, or sub-sampled points within a known radius around the detected frame, when registering the frame, and other points in the point cloud may be ignored.
- the radius may be Di and this may allow cropping the image, including the point cloud.
- both may be used and be segmented from other points, such as background points (e.g., a patient support or patient holding portion 381 ).
- an image 380B such as a point cloud, that uses segmentation into a head region 382 and an optical tracking device image 384 is set forth.
- the optical tracking device image 384 could, however, be any appropriate patient reference such as the reference 44 discussed above (e.g., EM, optical, acoustic, etc.).
- the head region 382 and the optical tracking device image 384 may be segmented from the other portions, such as a black portion 386, as discussed herein.
- the classifier may be appropriately trained (e.g., CNN) or execute a selected and appropriate algorithm. The segmentation process, however, allows at least some of the data points to be ignored in the registration process.
- a method for training the trained classifier 132A of the image processor 132 of Fig. 2A is set forth in the process 410.
- the trained classifier may be a machine learning process, such as a convolutional neural network (CNN) or transformer-based neural network.
- a plurality of training set images is generated or provided to the system to train the system to recognize points in the registration image.
- the training data may relate to both clinical images and registration images.
- a trained classifier may be provided for both the clinical images and the registration images, or separate trained classifiers may be provided for each.
- a plurality of training sets has a plurality of images along with a correct identification of various features therein.
- the training sets have known landmarks, selected radii, and classification of each point as belonging to at least one of patient head or patient reference frame; points may also be identified as background therein.
- proper identification is achieved in acquired registration images via automatic identification with the trained classifier during a procedure.
- the input images are provided to the trained classifier.
- an output of the trained classifier of test images is compared to the known output and/or identified by an expert (e.g., surgeon) provided with the trained images and/or separate therefrom.
- classifier weights W are adjusted based on comparing. For example, weights of the CNN or transformer-based neural network may be adjusted to achieve a selected or “known” identification of selected features in input images, such as input registration images.
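- A minimal sketch of one weight-adjustment pass, assuming a PyTorch-style per-point segmentation model with three classes (head, reference frame, background); the model, loss, and data loader are placeholders rather than the specific network of the disclosure:

```python
import torch

def train_epoch(model, loader, optimizer):
    """One weight-adjustment pass: compare predicted per-point labels
    (head / reference frame / background) with the known labels and update W."""
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for points, labels in loader:            # points: (B, N, C), labels: (B, N)
        optimizer.zero_grad()
        logits = model(points)               # (B, N, 3) class scores per point
        loss = criterion(logits.reshape(-1, 3), labels.reshape(-1))
        loss.backward()                      # gradients from the comparison
        optimizer.step()                     # adjust the classifier weights W
```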
- training is ended.
- the trained classifier may then be stored and/or accessed to identify selected features in input images, such as during a procedure.
- an output of the trained classifier may be a cloud of landmark points or a segmentation of patient head and reference frame from the rest of the point cloud that are identified in obtained registration images and/or the clinical images.
- the two sets of points may be correlated to perform a registration of the image space of the clinical images and the patient space identified with the registration images. The registration allows a transformation map between the two spaces.
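- A minimal sketch of estimating the transformation map from paired, corresponding points, using the standard SVD (Kabsch) construction; this is one common way to compute a rigid transform from correlated point sets and is not asserted to be the specific method of the disclosure:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid 4x4 transform mapping paired points src -> dst
    (standard SVD / Kabsch construction)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, dc - R @ sc
    return T
```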
- a method of performing registration for a patient is set forth in more detail.
- the patient is positioned in a fixed location, such as on an operating table or bed.
- the skin tone, age, sex, facial hair and the like may be accounted for when the registration imaging takes place.
- the lighting, the scanner position and the camera settings may be varied based upon the above.
- the reference frame such as the electromagnetic (EM) reference frame 44 described above, is affixed to the patient.
- the reference frame may be affixed, such as with a threaded fastener, stuck, or taped, to the skull of a patient.
- the registration model and nasion position may be detected in the registration exam based on the skin segmentation model using an algorithm such as facial landmark detection, feature detection, or a machine learning based method. Instructions may be provided for manually holding a handheld registration device 18. However, the position of the reference frame and the scanner may be varied. It should be noted that automated scanning or robotic scanning may be also or alternatively performed.
- the amount of scanning and the length of time for scanning may vary depending on the equipment (e.g., scanner) used.
- a fixed or random path may be used during scanning.
- the scan may be stopped.
- the scanner may be held at a predetermined distance during scanning.
- block 518 has the operator, such as a surgeon, hold a button and perform scanning.
- the scanning may be performed so that registration images at different positions may be obtained.
- Video may be used to obtain multiple registration images or individual registration images may be used in block 520.
- the scan speed and sweeps by the scanner may be performed at different distances and speeds relative to the patient part being scanned. More regular speeds may be provided by an instrument, such as the moveable arms in Fig. 2C. However, manual scanning may also be provided.
- various features such as the face, in certain examples, and a patient tracker, such as reference frame 44, may be captured.
- various features may be recognized such as the eye sockets, nose, forehead and a nasion.
- certain features within boundaries may be obtained.
- various processing of the images such as generating the points to be used in registration may be obtained.
- registration in block 526 may be performed as discussed below in further detail.
- the user facing actions 612 may include scanning the face and reference frame in block 614 to capture the registration image and loading CT or other types of clinical images from a clinical application in block 616. Two different inputs and related flow paths are performed to achieve registration in block 526.
- the scanned face and reference frame as registration images of block 614 are provided to block 620 which detects the nasion from the registration images.
- the reference frame is detected in the received registration images from the block 614.
- the registration images may be cropped and further denoised mathematically and/or according to the processes as discussed above.
- Details of the cropping process based on boundaries are described in Fig. 7.
- the processed registration images are stitched together using point cloud stitching.
- the stitched point cloud becomes a registration point cloud that is used for registration.
- the process of point cloud stitching was illustrated and described above in Figs. 3A-3D.
- point cloud stitching uses multiple registration images and coordinates the points therefrom to find a final point cloud of registration points.
- block 630 builds a registration model or segmented clinical image from the clinical image.
- Binary segmentation may be used to form the registration model.
- Skin segmentation may be performed on the clinical image data.
- the skin segmentation may be performed as the registration device 18 may generate data, such as a point cloud, of a surface of the patient that relates to the skin. Skin segmentation may be performed according to any appropriate technique and may be used to generate a mesh from which a point cloud may be generated.
- the one or more types of segmentation form a segmented clinical image.
- the nasion and/or other facial features may be determined from the registration model.
- the registration model may also be converted into the operating space in block 634.
- the registration model is provided to block 526 where registration between the two sets of point clouds is determined.
- the registration point cloud derived from block 626 and the clinical point cloud from the clinical images are compared.
- an error metric or pass/fail indicator is provided in block 640; where the error metric is within a predetermined range, the registration is determined to be successful.
- a pass/fail indicator, such as a visual or audible indicator, may be provided.
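- A minimal sketch of one possible error metric for the check in block 640, assuming SciPy/NumPy and an illustrative threshold; the registration point cloud is moved by the estimated transform and the root-mean-square distance to the nearest clinical points is compared to a limit:

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_error(reg_xyz, clin_xyz, transform):
    """Root-mean-square distance between the transformed registration point
    cloud and the nearest clinical-image points; one possible error metric."""
    reg_xyz = np.asarray(reg_xyz, float)
    homog = np.c_[reg_xyz, np.ones(len(reg_xyz))]
    moved = (homog @ np.asarray(transform, float).T)[:, :3]
    dists, _ = cKDTree(np.asarray(clin_xyz, float)).query(moved)
    return float(np.sqrt(np.mean(dists ** 2)))

# Pass/fail check; the 2.0 mm threshold is illustrative only.
# passed = registration_error(reg_pts, clin_pts, T) <= 2.0
```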
- the cropping feature may crop together and/or separately landmarks of the patient, such as the face (e.g., nasion), and the reference frame.
- the two processes are illustrated as sub-processes 624a, 624b.
- the registration may occur with only one or both being identified in the registration images and the clinical images. According to various embodiments, identifying the reference frame and separating it from the face or head may assist in an efficiency and/or speed of the registration.
- the reference frame is detected.
- a reference frame boundary is generated around the reference frame to be large enough to incorporate desired reference frame points therein.
- the boundary may be a selected or set radius from a point of the reference frame.
- the boundary may be two dimensional or three dimensional.
- the area or volume within the boundary may then be cropped, at least in the registration image. The cropped portion may then be used for the registration process.
- the reference frame is registered to the clinical images by including a plurality of reference frame points a predetermined distance from the reference frame.
- the face is detected.
- the face may be detected by edges or predetermined features such as the eyes, nasion, or nose, or using CNN-based (or transformer-based) neural network methods to identify relevant landmarks or segment the patient face or head from the point cloud.
- a face boundary is determined in block 722. The face boundary was briefly mentioned above.
- the face boundary is generated to provide an area or volume over which points (which may be referred to as head points) for registration may be used, such as of a face of the patient.
- the boundary may be from one or more identified landmarks, such as a nasion in the image.
- the boundary may be chosen to provide points of one or more non-moving areas of the face. For example, points in and around the chin may not be chosen because they may move during the operating procedure.
- the area or volume within the boundary may then be cropped, at least in the registration image. The cropped portion may then be used for the registration process.
- block 724 registers the face and the points in the boundary.
- the face points in the boundary may be cropped for an efficient and/or faster registration.
- the cropping process provided in block 624 is used in the registration block 526 as mentioned above.
- the cropped points may allow for a reduced number of points for registration, rather than an entire point set from an image.
- the cropping may allow for registration of a selected portion.
- the reference frame may be cropped and used for registration without using other points in the image.
- points of a face may be cropped separate from other portions, such as the reference frame for registration.
- the registration process may be applied to a sub-set of points from the images, such as only the reference frame, only face points, etc. It is understood by one skilled in the art, however, that all or selected multiple sets of points may be used for registration.
- Instructions may be executed by a processor and may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, data structures, and/or objects.
- the execution of the instructions may be substantially automatic, such as with the processor, once a selected input or data is received. Thus, a user may not or need not provide multiple inputs for a process or outcome to occur.
- the term shared processor circuit encompasses a single processor circuit that executes some or all code from multiple modules.
- the term group processor circuit encompasses a processor circuit that, in combination with additional processor circuits, executes some or all code from one or more modules.
- references to multiple processor circuits encompass multiple processor circuits on discrete dies, multiple processor circuits on a single die, multiple cores of a single processor circuit, multiple threads of a single processor circuit, or a combination of the above.
- the term shared memory circuit encompasses a single memory circuit that stores some or all code from multiple modules.
- the term group memory circuit encompasses a memory circuit that, in combination with additional memories, stores some or all code from one or more modules.
- the apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general purpose computer to execute one or more particular functions embodied in computer programs.
- the computer programs include processor-executable instructions that are stored on at least one non-transitory, tangible computer-readable medium.
- the computer programs may also include or rely on stored data.
- the computer programs may include a basic input/output system (BIOS) that interacts with hardware of the special purpose computer, device drivers that interact with particular devices of the special purpose computer, one or more operating systems, user applications, background services and applications, etc.
- the computer programs may include: (i) assembly code; (ii) object code generated from source code by a compiler; (iii) source code for execution by an interpreter; (iv) source code for compilation and execution by a just-in-time compiler; (v) descriptive text for parsing, such as HTML (hypertext markup language) or XML (extensible markup language), etc.
- source code may be written in C, C++, C#, Objective-C, Haskell, Go, SQL, Lisp, Java®, ASP (active server pages), Perl, Javascript®, HTML5, Ada, Scala, Erlang, Ruby, Flash®, Visual Basic®, Lua, or Python®.
- Wireless communications described in the present disclosure can be conducted in full or partial compliance with IEEE standard 802.11-2012, IEEE standard 802.16-2009, and/or IEEE standard 802.20-2008.
- IEEE 802.11-2012 may be supplemented by draft IEEE standard 802.11ac, draft IEEE standard 802.11ad, and/or draft IEEE standard 802.11ah.
- a processor or module or ‘controller’ may be replaced with the term ‘circuit.’
- the term ‘module’ may refer to, be part of, or include: an Application Specific Integrated Circuit (ASIC); a digital, analog, or mixed analog/digital discrete circuit; a digital, analog, or mixed analog/digital integrated circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor circuit (shared, dedicated, or group) that executes code; a memory circuit (shared, dedicated, or group) that stores code executed by the processor circuit; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip.
Abstract
Disclosed is a system to register a subject, e.g., physical, space to an image space. The registration may be performed automatically by a registration system with a registration device. The registration device may acquire an image of a subject space, generate a point cloud, and compare the point cloud to points in a pre-operative image.
Description
SYSTEM AND METHOD OF PATIENT REGISTRATION
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of and priority to U.S. Provisional Patent Application No. 63/516,413 filed 28 July 2023. This application includes subject matter similar to that disclosed in U.S. Pat. App. No. 63/516,409 (Attorney Docket No. A0004834US01 / 5074A-000286-US-PS1). The entire disclosure of the above application is incorporated herein by reference.
FIELD
[0002] The present disclosure relates to a surgical navigation system, and particularly to a method for registering a patient pre- and intra-operatively to image data.
BACKGROUND
[0003] The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
[0004] Guided surgery relies on knowing the position of a patient relative to equipment used for the surgery. Various forms include Image Guided Surgery (IGS). A registration process is performed to determine a transformation between reference frames to allow determining where the patient is relative to the equipment.
[0005] Manual intervention from a surgeon may be required to complete the registration process which may be required before the start of a surgical procedure. Less manual intervention in the registration process may decrease time to registration, improve reproducibility and improve the ease of registration.
SUMMARY
[0006] In one aspect of the disclosure, a method includes acquiring clinical images of a subject, segmenting the clinical image to form a segmented clinical image, determining a clinical point cloud for the segmented clinical image, acquiring a registration image from a registration device, generating a registration point cloud for the subject, selecting at least a sub-portion of both the clinical point cloud and the registration point cloud, and registering a subject space to an image space based on the selected at least sub-portion of both the clinical point cloud and the registration point cloud.
[0007] In another aspect of the disclosure, a system includes a registration device generating images of a subject and a controller segmenting a clinical image to form a segmented clinical image, determining a clinical point cloud for the segmented clinical image, acquiring a registration image, generating a registration point cloud for the subject, selecting at least a sub-portion of both the clinical point cloud and the registration point cloud, and registering a subject space to an image space based on the selected at least sub-portion of both the clinical point cloud and the registration point cloud.
[0008] An image of the subject may be used for diagnosis and treatment of the subject. Such an image may be referred to as a treatment or clinical image. The clinical image may be based on image data acquired with an appropriate imaging system, as discussed herein. The clinical image may be a projection and/or reconstruction of the acquired image data. The clinical image may be acquired of the subject at any appropriate time such as prior to or during a procedure. The clinical image may define a clinical image space. A position of an instrument relative to a subject, who has been imaged, may be determined with a tracking system. The position of the instrument may be displayed relative to the acquired clinical image due to a registration of a subject space to the clinical image space.
[0009] The registration may occur by determining the position of various points on the subject and correlating them to points in the clinical image space. The correlation may allow for a determination and generation of a transformation map between the physical or subject space of the subject and the clinical image space of the clinical image. Based at least in part on the registration, the tracked position of an instrument may be displayed relative to the clinical image.
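As an illustrative sketch only (Python/NumPy is an assumption, not part of the disclosure), applying the transformation map to a tracked point may be expressed as multiplying the homogeneous point by a 4x4 matrix:

```python
import numpy as np

def to_image_space(point_patient, T_patient_to_image):
    """Map a tracked point from patient (subject) space into clinical image
    space using the 4x4 transformation map produced by registration."""
    p = np.append(np.asarray(point_patient, float), 1.0)
    return (T_patient_to_image @ p)[:3]

# e.g. displaying an instrument tip: tip_image = to_image_space(tip_patient, T)
```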
[0010] During a procedure, a subject may be registered to the clinical image. The registration may be substantially automatic by a registration system. The registration system may acquire a registration image of the subject. Further, during a procedure an automatic or updated registration may occur due to a
determination that the subject has moved and the registration system may again register the subject space to the clinical image space.
[0011] Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
DRAWINGS
[0001] The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.
[0002] Fig. 1 is an environmental view of a surgical navigation system or computer aided surgical system, according to various embodiments;
[0003] Fig. 2A is a high-level block diagram of the registration controller of Fig. 1;
[0004] Fig. 2B is a detailed block diagrammatic view of the registration device of Fig. 1 ;
[0005] Fig. 2C is an environmental view of the navigation system with an imaging system;
[0006] Fig. 3A is a detail view of a patient having a reference frame therein, according to various embodiments;
[0007] Fig. 3B is the patient of Fig. 3A relative to different scanning positions;
[0008] Fig. 3C are point clouds from the scan of Fig. 3B;
[0009] Fig. 3D is an example of point cloud stitching from the points of Fig. 3C;
[0010] Figs. 3E and 3F are facial images having points and segments thereon;
[0011] Fig. 3G is a representation of points determined by the scanning process;
[0012] Fig. 3H is an example of filtered points that have been cropped around an area of the reference frame and the nasion, according to various embodiments;
[0013] Fig. 3I is a high-level flowchart of a method for performing a registration process;
[0014] Fig. 3J is a representation of an optical image of a patient having a reference frame therein, according to various embodiments;
[0015] Fig. 3K is a representation of a segmented optical image of a patient having a reference frame therein, according to various embodiments;
[0016] Fig. 4 is a flowchart of a method for training a trained classifier;
[0017] Fig. 5 is a detail flowchart of at least a portion of a touchless registration process;
[0018] Fig. 6 is a detail flowchart of the processing block of Fig. 5;
[0019] Fig. 7 is a detailed flowchart of the cropping block of Fig. 6.
DETAILED DESCRIPTION
[0020] The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. Although the following description illustrates and describes a procedure relative to a cranium of a patient, the current disclosure is not to be understood to be limited to such a procedure. For example, a procedure can also be performed relative to a spinal column, heart, vascular system, etc. Therefore, discussion herein relating to a specific region of the anatomy will be understood to be applicable to all regions of the anatomy, unless specifically described otherwise.
[0021] As discussed herein, various systems and elements can be used to assist in a surgical procedure. For example, clinical image data can be acquired of a patient to assist in illustrating a location of an instrument relative to a patient. Generally, clinical image space (i.e., defined by a coordinate system of an image generated or reconstructed from image data) can be registered to patient space (i.e., defined by a coordinate system of a physical space relative to a patient to assist in this display and navigation).
[0022] With reference to Fig. 1 , a navigation system 10 that can be used for various procedures is illustrated. The navigation system 10 can be used to track the location of a device 12, such as a pointer probe, relative to a patient 14 to assist in the implementation or performance of a surgical procedure. It should
be further noted that the navigation system 10 may be used to navigate or track other devices including: catheters, probes, needles, leads, electrodes, implants, etc. According to various embodiments, examples include ablation catheters, deep brain stimulation (DBS) leads or electrodes, micro-electrode (ME) leads or electrodes for recording, etc. Moreover, the navigated device may be used in any region of the body. The navigation system 10 and the various devices may be used in any appropriate procedure, such as one that is generally minimally invasive, arthroscopic, percutaneous, stereotactic, or an open procedure. Although an exemplary navigation system 10 including an image registration system 16 is discussed herein, one skilled in the art will understand that the disclosure is merely for clarity of the present discussion and any appropriate imaging system, navigation system, patient specific data, and non-patient specific data can be used. It will be understood that the navigation system 10 can incorporate or be used with any appropriate preoperatively or intraoperatively acquired image data.
[0023] The navigation system 10 includes the image registration system 16 used to acquire and compare pre- and intraoperative image data, including real-time image data, of the patient 14. In various embodiments, the system may register and/or maintain registration to intraoperatively acquired clinical image data. In various embodiments, the system may register and/or maintain registration to preoperative clinical image data until the end of the procedure or until relative movement, such as skull movement, is detected. If movement is detected, such as with the distance
sensors as discussed herein, the registration is maintained by allowing collection of additional registration data and/or is re-registered.
[0024] The registration system 16 may, for example, use visible light, infrared light, electromagnetic energy, light detection and ranging (lidar), or thermal technologies emitted from and/or received by a registration device 18. In various embodiments, the registration device 18 may include a camera system, such as a stereo camera system, including but not limited to the Intel® RealSense™ D415 or D430 depth camera sold by Intel Corporation, Einstar 3D scanner from Shining3D. According to various embodiments, therefore, registration device 18 may transmit an image or multiple images, points (point cloud) of line segments with points as data signals, mesh, etc. to a registration controller 20. The registration controller 20 may determine the position of the registration device 18 by way of a reference locator 26 as further described below in the examples set forth. The reference locator 26 may be an optional feature.
[0025] As noted above, the registration device 18 may be an optical device, an electromagnetic device, and/or a lidar device used to obtain one or more registration images or points for registration. That is, the registration device 18 may include a sensor for generating points corresponding to a registration image or a video stream. If a video is captured, registration images from each frame of the video stream may be captured therefrom. The registration device 18 may be fixed relative to the subject and/or moveable relative to the subject. In either case, the registration device 18 may be handheld, robot mounted or gimbal mounted to obtain the registration image or points. The registration controller 20 ultimately
determines or identifies data that corresponds to various physical features of the patient, distance, or positions of the features as described in detail below used for registration. In various embodiments, this data may relate to markers and/or points on the patient. The registration image from the registration device 18 may include information or data, as discussed herein, that is useful for registration to the clinical image of the subject acquired with an imaging system, as discussed herein. The clinical image data of the subject may be pre- or intraoperative image data. The clinical image data may be used to generate the clinical image that is displayed.
[0026] In the example of Fig. 1 , the longitudinal axis 14A of the patient 14 is substantially in line with the longitudinal axis 22 of the operating table 24. In this example, the upper body of the patient 14 is elevated but the longitudinal axes 14A and 22 are aligned.
[0027] Fiducial marker or point data useful for registration obtained from the registration image of the registration system 16 can then be forwarded to the navigation computer and/or processor controller or workstation 28 having a display device 36 to display the clinical image data 38 and a user interface 40. Workstation 28 may also have an audible device 37 such as a speaker, buzzer, or vibration generator for generating an audible signal. The display device 36 and/or the display may generate a visual and/or audible signal corresponding to a registration or a lack of registration of a patient space to the clinical image space which is described in more detail below. The workstation 28 can also include or be connected to an image processor, a navigation processor, and a memory to hold instruction and data. It will also be understood that the image data is not
necessarily retained in the controller 20 but may also be directly transmitted to the workstation 28. Moreover, processing for the navigation system and/or the registration system 16 and optimization can all be done with a single or multiple processors all of which may or may not be included in the workstation 28. For example, the registration controller 20 may be incorporated into the workstation 28.
[0028] The workstation 28 provides facilities for displaying the clinical image data 38 as the clinical image on the display device 36, saving, digitally manipulating, or printing a hard copy image of the received image data. The user interface 40, which may be a keyboard, mouse, touch pen, touch screen or other suitable device, allows a physician or user 42 to provide inputs to control the image registration system 16 or adjust the display settings of the display device 36. The workstation 28 may also direct registration device 18 to adjust the position relative to the patient 14.
[0029] With continuing reference to Fig. 1 , the navigation system 10 can further include a tracking system, such as, but not limited to, an electromagnetic (EM) tracking system 46 or an optical tracking system 46’. Either or both can be used alone or together in the navigation system 10. The discussion herein of the EM tracking system 46 can be understood to relate to any appropriate tracking system. The optical tracking system 46’ can include the StealthStation® Treon® , StealthStation® S7, StealthStation® S8 and the StealthStation® Tria® all of which are sold by Medtronic Navigation, Inc. Other tracking system modalities may include acoustic, radiation, radar, infrared, etc.
[0030] The EM tracking system 46 includes a coil array or EM localizer 48, such as a coil array and/or second coil array 50, a coil array controller 52, a navigation probe interface 54, the device 12 (e.g., instrument, tool, catheter, needle, pointer probe, or instruments, as discussed herein) and a dynamic reference frame (DRF) 44. An instrument tracking device 34a can also be associated with, such as fixed to, the device 12 or a guiding device for an instrument (or coupled to the registration device 18 as mentioned above). The dynamic reference frame 44 can include a dynamic reference frame holder 56 and a removable tracking device 34b. Alternatively, the dynamic reference frame 44 can include the tracking device 34b that can be formed integrally or separately from the DRF holder 56.
[0031] Moreover, the DRF 44 can be provided as separate pieces and can be positioned at any appropriate position on the anatomy. For example, the tracking device 34b of the DRF 44 can be fixed to the skin of the patient 14 with an adhesive. Also, the DRF 44 can be positioned near a leg, arm, etc. of the patient 14. Thus, the DRF 44 does not need to be provided with a head frame or require any specific base or holding portion.
[0032] The tracking devices 26, 34, 34a, 34b or any tracking device as discussed herein, can include a sensor, a transmitter, or combinations thereof. Further, the tracking devices can be wired or wireless to provide a signal emitter or receiver within the navigation system. For example, the tracking device can include an electromagnetic coil to sense a field produced by the EM localizing array formed by the EM localizers 48, 50 or reflectors that can reflect a signal to be
received by the optical tracking system 46’. Nevertheless, one will understand that the tracking device can receive a signal, transmit a signal, or combinations thereof to provide information to the navigation system 10 to determine a location of the tracking device 34, 34a, 34b. The navigation system 10 can then determine the position of the instrument or tracking device to allow for navigation relative to the patient and patient space.
[0033] The coil arrays or localizers 48, 50 may also be supplemented or replaced with a mobile localizer. The mobile localizer may be one such as that described in U.S. Patent Application Serial No. 10/941,782, filed Sept. 15, 2004, now U.S. Pat. App. Pub. No. 2005/0085720, entitled "METHOD AND APPARATUS FOR SURGICAL NAVIGATION", herein incorporated by reference. As is understood, the localizer array can transmit signals that are received by the tracking devices 26, 34, 34a, 34b. The tracking devices 34, 34a, 34b can then transmit or receive signals based upon the transmitted or received signals from or to the arrays or localizers 48, 50.
[0034] Further included in the navigation system 10 may be an isolator circuit or assembly (not illustrated separately). The isolator circuit or assembly may be included in a transmission line to interrupt a line carrying a signal or a voltage to the navigation probe interface 54. Alternatively, the isolator circuit included in the isolator box may be included in the navigation probe interface 54, the device 12, the dynamic reference frame 44, the transmission lines coupling the devices, or any other appropriate location. The isolator assembly is operable to isolate any of the instruments or patient coincidence instruments or portions that
are in contact with the patient should an undesirable electrical surge or voltage take place.
[0035] It should further be noted that the entire tracking systems 46, 46’ or parts of the tracking systems 46, 46’ may be incorporated into the registration system 16, including the workstation 28. Incorporating the tracking system 46, 46’ may provide an integrated imaging and tracking system. This can be particularly useful in creating a fiducial-less system without separate physical or implanted markers attached to the patient. Moreover, fiducial marker-less systems can include a tracking device and a contour determining system, including those discussed herein.
[0036] The EM tracking system 46 uses the coil arrays 48, 50 to create an electromagnetic field used for navigation. The coil arrays 48, 50 can include a plurality of coils that are each operable to generate distinct electromagnetic fields into the navigation region of the patient 14, which is sometimes referred to as patient space. Representative electromagnetic systems are set forth in U.S. Patent No. 5,913,820, entitled “Position Location System,” issued June 22, 1999 and U.S. Patent No. 5,592,939, entitled “Method and System for Navigating a Catheter Probe,” issued January 14, 1997, each of which are hereby incorporated by reference.
[0037] The coil array 48 is controlled or driven by the coil array controller 52. The coil array controller 52 drives each coil in the coil array 48 in a time division multiplex or a frequency division multiplex manner. In this regard, each coil may
be driven separately at a distinct time or all of the coils may be driven simultaneously with each being driven by a different frequency.
[0038] Upon driving the coils in the coil array 48, 50 with the coil array controller 52, electromagnetic fields are generated within the patient 14 in the area where the medical procedure is being performed, which is again sometimes referred to as patient space. The electromagnetic fields generated in the patient space induce currents in the tracking device 34, 34a, 34b positioned on or in the device 12, DRF 44, etc. These induced signals from the tracking devices 34, 34a, 34b are delivered to the navigation probe interface 54 and subsequently forwarded to the coil array controller 52. The navigation probe interface 54 can also include amplifiers, filters and buffers to directly interface with the tracking device 34b attached to the device 12. Alternatively, the tracking device 34b, or any other appropriate portion, may employ a wireless communications channel, such as that disclosed in U.S. Patent No. 6,474,341, entitled “Surgical Communication Power System,” issued November 5, 2002, herein incorporated by reference, as opposed to being coupled directly to the navigation probe interface 54.
[0039] Various portions of the navigation system 10, such as the device 12 and the dynamic reference frame 44, are equipped with at least one, and generally multiple, EM or other tracking devices 34a, 34b, that may also be referred to as localization sensors. The EM tracking devices 34a, 34b can include one or more coils that are operable with the EM localizer arrays 48, 50. An alternative tracking device may include an optical device or devices 58 and may be used in addition to or in place of the electromagnetic tracking devices 34a, 34b. The optical tracking
device may work with the optional optical tracking system 46’. The optical tracking device 58 may include marks or sticker type devices affixed to the skin of the patient. One skilled in the art will understand, however, that any appropriate tracking device can be used in the navigation system 10. An additional representative alternative localization and tracking system is set forth in U.S. Patent No. 5,983,126, entitled “Catheter Location System and Method,” issued November 9, 1999, which is hereby incorporated by reference. Alternatively, the localization system may be a hybrid system that includes components from various systems.
[0040] In brief, the EM tracking device 34a on the device 12 can be in a handle or inserter that interconnects with an attachment and may assist in placing an implant or in driving a member. The device 12 can include a graspable or manipulable portion at a proximal end and the tracking device 34a may be fixed near the manipulable portion of the device 12 or at a distal working end, as discussed herein. The tracking device 34a can include an electromagnetic tracking sensor to sense the electromagnetic field generated by the coil array 48, 50 that can induce a current in the electromagnetic device 34a. Alternatively, the tracking device 34a can be driven (i.e., like the coil array above) and the coil arrays 48, 50 can receive a signal produced by the tracking device 34a.
[0041] The dynamic reference frame 44 may be fixed to the head 60 of the patient 14 adjacent to the region being navigated so that any movement of the patient 14 is detected as relative motion between the coil arrays 48, 50 and the dynamic reference frame 44. The dynamic reference frame 44 can be
interconnected with the patient in any appropriate manner, including those discussed herein. Relative motion is forwarded to the coil array controller 52, which updates the registration and maintains accurate navigation, further discussed herein. Alternatively, when motion is detected, re-registration may be performed. The dynamic reference frame 44 may include any appropriate tracking device. Therefore, the dynamic reference frame 44 may also be EM, optical, acoustic, etc. If the dynamic reference frame 44 is electromagnetic it can be configured as a pair of orthogonally oriented coils, each having the same center or may be configured in any other non-coaxial or co-axial coil configurations.
[0042] Briefly, the navigation system 10 operates as follows. The navigation system 10 creates a map of points, which may include all points, in the registration image data generated from the registration device 18 which can include external and internal portions that correspond to points in the patient’s anatomy in patient space. This map generated with the registration device 18 may then be transformed (e.g., a transformation map is made) to the clinical image data acquired for the subject 14, such as pre- or intraoperatively. After this transformation map is established, whenever the tracked device 12 is used, the workstation 28 in combination with the coil array controller 52 uses the transformation map to identify the corresponding point on the clinical image data and/or atlas model, which is displayed on display 36. This identification is known as navigation or localization. An icon representing the localized point of the instruments is shown on the display 36 in an appropriate manner relative to the
clinical image data which may be one or several two-dimensional image planes, as well as on three- and four-dimensional images and models.
[0043] To enable navigation, the navigation system 10 must be able to detect both the position of the patient’s anatomy and the position of the device 12 or an attachment member (e.g., tracking device 34a) attached to the device 12. Knowing the location of these two items allows the navigation system 10 to compute and display the position of the device 12 or any portion thereof in relation to the patient 14. The EM tracking system 46 is employed to track the device 12 and the anatomy of the patient 14 simultaneously.
[0044] The EM tracking system 46, if it is using an electromagnetic tracking assembly, essentially works by positioning the coil arrays 48, 50 adjacent to the patient 14 to generate a magnetic field, which can be low energy, and generally referred to as a navigation field. Because every point in the navigation field or patient space is associated with a unique field strength, the electromagnetic tracking system 46 can determine the position of the device 12 by measuring the field strength at the tracking device 34a location. The dynamic reference frame 44 is fixed to the patient 14 to identify the location of the patient in the navigation field. The electromagnetic tracking system 46 continuously computes or calculates the relative position of the dynamic reference frame 44 and the device 12 during localization and relates this spatial information to patient registration data to enable navigation of the device 12 within and/or relative to the patient 14. Navigation can include image guidance or imageless guidance.
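As a conceptual illustration of how a unique field-strength signature can be mapped back to a position, the following sketch matches a measured signature against a precomputed grid of signatures; the toy inverse-square field model, grid extent, and nearest-neighbor lookup are assumptions for illustration and do not reflect the actual field model or solver of the EM tracking system 46.

```python
# Conceptual sketch only: locating a sensor by matching measured field strengths
# against a precomputed map of the navigation field.
import numpy as np
from scipy.spatial import cKDTree

coil_positions = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.0, 0.3, 0.0]])

def field_signature(points, coils=coil_positions):
    """Field-strength signature at each point: one value per transmit coil."""
    d = np.linalg.norm(points[:, None, :] - coils[None, :, :], axis=-1)
    return 1.0 / (d ** 2 + 1e-9)          # toy fall-off with distance (placeholder)

# Precompute signatures over a coarse grid of candidate positions (patient space).
grid = np.stack(np.meshgrid(*[np.linspace(-0.2, 0.5, 30)] * 3), -1).reshape(-1, 3)
tree = cKDTree(field_signature(grid))

def localize(measured_signature):
    """Return the grid position whose signature best matches the measurement."""
    _, idx = tree.query(measured_signature)
    return grid[idx]

true_pos = np.array([[0.1, 0.05, 0.2]])
print(localize(field_signature(true_pos)[0]))   # approximately [0.1, 0.05, 0.2]
```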
[0045] The points or portions that are selected to perform registration can be image points or point clouds of points from the registration image that are compared to points derived from clinical images. The points may be identified at any appropriate time, such as while registration is taking place. The points can include landmarks such as anatomical landmarks, measurements between landmarks, positioned members (e.g., fiducial markers, DRF(s)), and combinations thereof, as described in more detail below. The landmarks are identifiable in the clinical and registration image data and identifiable and accessible on the patient 14. The landmarks can include individual or distinct points on the patient 14 or contours (e.g., three-dimensional contours) defined by the patient 14.
[0046] As discussed above, registration of the patient space or physical space to the clinical image data or clinical image space can utilize the correlation or matching of physical or virtual fiducial points observed intra-operatively and the image fiducial points of clinical images. This may be performed by comparing (e.g., matching) point clouds from clinical images and registration images. The point clouds may be based on fiducial portions identified in the clinical images and/or registration images, such as the DRF. The physical fiducial points in the present example are one example of anatomical landmarks. The physical fiducial points can also include a determined contour (e.g., a physical space 3D contour) using various techniques and line segments between points, as discussed herein.
[0047] Referring now to Fig. 2A, details of the registration controller 20 are illustrated. As mentioned above, the registration controller 20 may be a separate computer or device or may be incorporated into the workstation 28. The
registration controller 20 may access selected clinical image data, such as pre-procedure clinical image data and may be in communication with a pre-procedure image system 110. The pre-procedure image system 110 may include, but is not limited to, a computed tomography (CT) system generating a CT image, an X-Ray system generating an X-ray image, an O-arm® imaging system, a magnetic resonance imaging (MRI) system generating an MRI image, or an ultrasound system generating an ultrasound image. Examples of a pre-procedure clinical image system are set forth below in Fig. 2C.
[0048] The pre-procedure image system 110 may obtain pre-procedure clinical images that are provided to the registration controller 20 for comparison with a registration image. The pre-procedure clinical image system 110 may provide a digital image file to the registration controller 20. It is understood, however, that the clinical image data may also be acquired during an operative procedure, thus being intraoperative clinical image data or images. A CT image or an MRI image may act as the clinical image. Likewise, video frames may also be used as the clinical image. Herein, discussion of clinical images or image data is understood to be any image data of the subject to which registration may be made.
[0049] The registration controller 20 may also be in communication with a network 112. The network 112, such as the Internet, may have a wired or wireless network connection. Various types of data may be communicated through the network 112 including from a remote control 114 that may be used to operate the system. The remote control 114 may be a separate component or a component integrated into a system such as the workstation 28. The remote
control 114 may include a system to initiate the registration process, acquire the pre-procedure image data, etc.
[0050] The network 112 is in communication with a network interface 116. The network interface 116 allows communication from the registration controller 20 to the network 112 and ultimately to other components such as the workstation 28 or various other devices. The network interface 116 allows the network 112 to communicate with remote locations other than the operating room in which the navigation system 10 is located.
[0051] The registration controller 20 may also be in communication with the registration device 18, the display device 36 and the audible device 37. The display device 36 and the audible device 37, in this example, are part of the workstation 28. However, separate display devices and audible devices may be provided, especially when the registration controller 20 is located away from the workstation 28.
[0052] The registration controller 20 may be a processor, such as a microprocessor-based processor, programmed to perform various functions. The blocks provided within the registration controller 20 may be separate processors or modules programmed to perform various functions.
[0053] An actuator controller 120 is used to control actuators 152 of the registration device 18 when used, as set forth in Figure 2C. As described in more detail below, the registration device 18 may be scanned or moved using the actuators 152. The registration device 18 may also be fixed and/or moved manually, such as by the user. The registration device 18 may include a physical
structure, as discussed herein, that may be moved relative to the subject 14. The actuators 152 may be motors or other systems that move the registration device 18. The actuator controller 120 may move the motors based upon received sensor signals from the registration device 18 or the tracking device 34a, 166 that are received at the position sensor input 122. Sensors may also be individual sensors, combined sensors, and include any appropriate number. Sensors may include position sensors that may be distance sensors that sense the distance from the patient and encoders used to sense the position of the moving actuators. The distance sensors may be infrared distance sensors. The actuator controller 120 and the signals from the position sensors in the registration device 18 received at the position sensor input 122 are provided to a position controller 124. The position controller 124, based on the position sensor input 122, controls the actuators at the registration device 18 using the actuator controller 120.
[0054] An illumination controller 130, if selected, is used to control a light source at the registration device 18.
[0055] An image processor 132 receives registration imaging signals from the registration device 18. The registration device 18 generates registration image signals from a registration image sensor as will be described in more detail below. The registration device 18 may acquire or generate registration image signals that may be used to register the patient to the pre-procedure image data. The image processor 132 may include a trained classifier 132A. The trained classifier 132A is one system (e.g., trained machine learning system) used for identifying registration images or portions thereof (e.g., a nasion) acquired from
the registration device 18. The trained classifier 132A may include weights W that are trained according to the procedures set forth below. The trained classifier 132A, in general, has a plurality of weights W that are adjusted using numerous classified images or over time. The trained classifier 132A may be a convolutional neural network (CNN), an autoencoder algorithm, a recurrent neural network (RNN) algorithm, a transformer neural network algorithm, a generative adversarial network (GAN) algorithm, a linear regression algorithm, a support vector machine (SVM) algorithm, a random forest algorithm, a hidden Markov model, and/or any combination thereof. For example, in some embodiments, the at least one processor may be configured to utilize a combination of a CNN algorithm or transformer-based neural network algorithm in conjunction with an SVM algorithm. The trained classifier may also be machine learned or a software-based algorithm that uses measures between points. The image processor 132 may generate points from the registration image from the registration device 18, such as points in a point cloud, which are used to identify various facial features or points thereof so that facial recognition is performed and/or to identify various features of the reference frame, thus allowing the patient face and the reference frame to be delineated. For example, the image processor 132 may be able to identify those points that belong to only one of the patient face or the reference frame to segment or separate it from other portions of the image or points in a point cloud. Various images or snapshot images obtained with the registration device 18 (e.g., partial or selected registration image frames) may be obtained and stitched together to generate the image, which may be the registration image. According to various embodiments, one or more methods
may be used, such as for forming a stitched image, including random sampling or full stitching. In various embodiments, the random sampling method chooses a snapshot as a reference snapshot. Then, a set of frames is separated into a number of different groups, G. A random snapshot for each group is chosen and registered to the reference snapshot, with each registration adding data to the original frame. In various embodiments, the full stitching method registers adjacent snapshots. Adjacent snapshots are snapshots that were acquired sequentially in time. Starting from the first snapshot, adjacent snapshots are registered to each other, until a goodness of fit measure crosses a certain threshold. A goodness of fit measure indicates how well two snapshots register together and is the percentage of points in the two point clouds that fit together within a selected tolerance or threshold. Registration data is therefore formed from the recognized points within the image frames above. Ultimately, the output of the image processor 132 is communicated to a registration processor 134.
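The following is a minimal sketch of the full stitching variant described above, assuming the Open3D library for point cloud handling and ICP; the correspondence distance and fitness threshold are placeholder values, and Open3D's inlier fitness (the fraction of points with a close correspondence) stands in for the goodness of fit measure.

```python
# Sketch of sequential ("full") stitching of snapshots acquired in time order.
import numpy as np
import open3d as o3d

def to_cloud(points_xyz):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    return pcd

def stitch_sequential(snapshots, max_corr=0.005, min_fitness=0.6):
    """snapshots: list of (N_i, 3) arrays acquired sequentially in time."""
    stitched = to_cloud(snapshots[0])
    pose = np.eye(4)                       # running transform into the reference snapshot
    for frame in snapshots[1:]:
        src = to_cloud(frame)
        result = o3d.pipelines.registration.registration_icp(
            src, stitched, max_corr, pose,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        if result.fitness < min_fitness:   # goodness-of-fit threshold crossed: stop
            break
        pose = result.transformation
        stitched += src.transform(pose)    # add the aligned snapshot's points
    return stitched
```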
[0056] The registration processor 134 may perform a registration of the clinical image data, such as the pre-procedure image data as points or a point cloud, to the registration image. This allows the patient space, defined by the patient 14 and the physical space relative to the patient 14, to be registered. As discussed above and further herein, the registration device 18 may acquire an image that is referred to as the registration image of at least a portion of the patient 14. The registration image may be converted to points or a point cloud. A common point or fiducial point between the registration image and the clinical image may be used to perform the registration of the patient space to the clinical image space. A position of the
points on the patient may be based upon the determined pose of the registration device 18 relative to the patient 14 when acquiring the registration image, giving the position of the points in the physical space defined by and relative to the patient 14. The registration process may be similar to that discussed above and include a generation or determination of a transformation map between the position of the points determined of the patient 14 and of the similar or same points, such as head points, in the clinical image data.
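A minimal sketch of computing such a transformation map from matched point pairs follows, using the SVD-based least-squares (Kabsch) solution; this is one standard approach and is not necessarily the exact method used by the registration processor 134. The example landmark coordinates are hypothetical.

```python
# Rigid transform mapping patient-space points onto clinical-image-space points.
import numpy as np

def rigid_transform(src, dst):
    """Return a 4x4 transform mapping src (Nx3) onto dst (Nx3), least-squares."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical example: four landmarks, image space = patient space shifted by (5, -2, 3).
patient_landmarks = np.array([[0., 0., 0.], [80., 0., 0.], [0., 60., 0.], [0., 0., 40.]])
image_landmarks = patient_landmarks + np.array([5., -2., 3.])
T = rigid_transform(patient_landmarks, image_landmarks)
tip_image = (T @ np.append([20., 30., 10.], 1.0))[:3]   # maps a tracked tip into image space
```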
[0057] A user interface 142 coupled to the registration controller 20 is used for providing control signals to the various controllers and modules within the registration controller 20. Examples of the user interface 142 include a keyboard, a mouse or a touch screen.
[0058] A timer 144 may also be included within the registration controller 20. The timer 144 may record the time of the images received from the registration device 18. This may allow a correlation of a time of determining a position of the registration device, as discussed herein, for use with determining a position of the patient 14 for the registration process.
[0059] Referring now to Fig. 2B, the registration device 18 is schematically illustrated in further detail. The registration device 18 may have a plurality of position sensors 150. Each of the actuators 152 and/or arms 153 may have position sensor feedback from a position sensor associated therewith. The position sensors 150 generate a plurality of position signals that are ultimately communicated to the registration controller 20. Control signals from the actuator
controller 120 are communicated as signals 120A to the actuators 152. The number and types of actuators 152 may vary depending upon the type of system.
[0060] The actuators 152 may move a selected portion or the entire registration device 18. The actuators 152 may or may not move only the sensors and light sources, depending upon the configuration.
[0061] A distance sensor 156 may allow the registration device 18 to communicate a distance signal to the registration controller 20 to determine the position and provide feedback relative to the position to the position controller 124. Different types of distance sensors, including radar, infrared time-of-flight, or laser sensors, may be used. Another specific type of distance sensor is a passive infrared (PIR) sensor, which may be used to thermally sense the distance of the mask to the patient. A PIR sensor has a transmitter and a receiver. The transmitter of a PIR sensor may transmit light (e.g., omnidirectionally), and the receiver receives the IR light reflected off of the patient. Consequently, each PIR sensor determines the distance. The distance sensor 156 calculates the distance to the head and provides an output based on the distance by which a movable robotic arm would need to be adjusted to continue the registration procedure.
[0062] A plurality of light sources 160 may be used to illuminate the patient 14 and are controlled by the illumination controller within the registration controller 20. The plurality of light sources 160 may surround or be adjacent to an image sensor 154 and be controlled to obtain a useful image. The image sensor 154 may have parameters that may be set. An image parameter controller 155 may be used to adjust camera settings such as but not limited to aperture, shutter
speed, ISO, quality (number of pixels) and white balance. Light sources, however, may not be required with the registration device and ambient light may be enough to capture the registration image(s).
[0063] The registration device 18 may also include a transmitter/receiver 162. The transmitter/receiver 162 may be referred to as a transceiver 162. The transceiver 162 may be used for communicating signals to and from the registration controller 20. The transceiver 162 may, for example, communicate using Bluetooth® wireless communication or another type of wireless technology. The transceiver 162 may also be a wired device. The transceiver 162 communicates with a transceiver 162 located within the registration controller 20. Although direct lines are shown in Fig. 2A between the registration controller 20 and the registration device 18, the transceiver 162 may be used to communicate wirelessly or in wired fashion with the registration device 18.
[0064] Referring now to Fig. 2C, a diagrammatic view illustrating an overview of a procedure room or arena is set forth, similar to Fig. 1. The primary difference between Fig. 1 and Fig. 2C is the inclusion of the imaging system 180 and further details of the registration device 18 disposed on movable arms 153 and moveable with actuators 152, as described above. Prior to the process above the clinical image may be obtained with any appropriate imaging system, including the imaging system 180. Ultimately, the registration images or points thereof from the registration device 18 and the imaging system 180 are compared to obtain the registration. In various embodiments, the procedure room may include a surgical suite having the navigation system 10 that can be used relative to the patient or
subject 14. The navigation system 10 can be used to track the location of one or more tracking devices; the tracking devices may include an imaging system tracking device 163 to track the imaging system 180. Also, a tool tracking device 166 similar or identical to the tracking device 34a may be included on a tool 168 similar or identical to the device 12. The tool 12, 168 may be any appropriate tool such as a drill, forceps, catheter, speculum or other tool operated by the user 42. The tool 168 may also include an implant, such as a stent, a spinal implant or orthopedic implant. It should further be noted that the navigation system 10 may be used to navigate any type of instrument, implant, stent or delivery system, including: guide wires, arthroscopic systems, orthopedic implants, spinal implants, deep brain stimulation (DBS) probes, etc. Moreover, the instruments may be used to navigate or map any region of the body. The navigation system 10 and the various instruments may be used in any appropriate procedure, such as one that is generally minimally invasive or an open procedure including cranial procedures.
[0065] The imaging device 180 may be used to acquire pre-, intra-, or post-operative or real-time clinical image data of a subject, such as the patient 14. It will be understood, however, that any appropriate subject can be imaged, and any appropriate procedure may be performed relative to the subject. In the example shown, the imaging device 180 comprises an O-arm® imaging device sold by Medtronic Navigation, Inc. having a place of business in Louisville, Colorado, USA. The imaging device 180 may have a generally annular gantry housing 182 in which an image capturing portion is moveably placed. The image capturing portion may include an x-ray source or emission portion and an x-ray
receiving or image receiving portion located generally, or as practically as possible, 180 degrees from each other and mounted on a rotor relative to a track or rail. The image capturing portion can be operable to rotate 360 degrees during image acquisition. The image capturing portion may rotate around a central point or axis, allowing image data of the subject 14 to be acquired from multiple directions or in multiple planes. The imaging device 180 can include those disclosed in U.S. Pat. Nos. 7,188,998; 7,108,421; 7,106,825; 7,001,045; and 6,940,941; all of which are incorporated herein by reference, or any appropriate portions thereof. In one example, the imaging device 180 can utilize flat plate technology having a 1,720 by 1,024 pixel viewing area.
[0066] The position of the imaging device 180, and/or portions therein such as the image capturing portion, can be substantially precisely (e.g., within at least 2 centimeters, including at least one centimeter, and further including fractions thereof including at least 10 microns) known relative to any other portion of the imaging device 180. The imaging device 180, according to various embodiments, can know and recall precise coordinates relative to a fixed or selected coordinate system. This can allow the imaging system 180 to know its position relative to the patient 14 or other references. In addition, as discussed herein, the precise knowledge of the position of the image capturing portion can be used in conjunction with a tracking system to determine the position of the image capturing portion and the image data relative to the tracked subject, such as the patient 14.
[0067] The imaging device 180 can also be tracked with the tracking device 163. The clinical image data defining the clinical image space acquired of the patient 14 can, according to various embodiments, be inherently or automatically registered relative to an object space. This inherent or automatic registration may be in addition or an alternative to the registration with the registration device 18 as disclosed herein. The object or patient space can be the space defined by a patient 14 in the navigation system 10. The automatic registration can be achieved by including the tracking device 163 on the imaging device 180 and/or the determinable precise location of the image capturing portion. According to various embodiments, as discussed herein, imageable portions, virtual fiducial points and other features can also be used to allow for registration, automatic or otherwise. It will be understood, however, that clinical image data can be acquired of any subject which will define the patient or subject space. Patient space is an exemplary subject space. Registration allows for a transformation between patient space and clinical image space.
[0068] The patient 14 may be fixed within navigation space defined by the navigation system 10 to allow for or maintain registration and/or the registration device 18 may be used to obtain and/or maintain registration. As discussed further herein, registration of the clinical image space to the patient space or subject space allows for navigation of the instrument 12, 168 with reference to the clinical image data. When navigating the instrument 168, a position of the instrument 168 can be illustrated relative to clinical image data acquired of the patient 14 on the display device 36, such as superimposed as a graphical representation (e.g., icon)
representing the tool 12, 168 in a selected manner, such as mimicking the tool 12, 168. Various tracking systems, such as one including the optical localizer 48’ or the electromagnetic (EM) localizer 48 can be used to track the instrument 168.
[0069] As discussed above, more than one tracking system can be used to track the instrument 168 in the navigation system 10. According to various embodiments, these can include an electromagnetic tracking (EM) system having the EM localizer 48 and/or an optical tracking system having the optical localizer 48’. Either or both of the tracking systems can be used to track selected tracking devices, as discussed herein. It will be understood, unless discussed otherwise, that a tracking device can be a portion trackable with a selected tracking system. A tracking device need not refer to the entire member or structure to which the tracking device is affixed or associated.
[0070] It is further appreciated that the imaging device 180 may be an imaging device other than the O-arm® imaging device and may include in addition or alternatively a fluoroscopic C-arm. Other exemplary imaging devices may include fluoroscopes such as bi-plane fluoroscopic systems, ceiling mounted fluoroscopic systems, cath-lab fluoroscopic systems, fixed C-arm fluoroscopic systems, isocentric C-arm fluoroscopic systems, 3D fluoroscopic systems, etc. Other appropriate imaging devices can also include MRI, CT, ultrasound, etc.
[0071] In various embodiments, an imaging device controller 196 may control the imaging device 180 and can receive the image data generated at the image capturing portion and store the images for later use. The controller 196 can also control the rotation of the image capturing portion of the imaging device 180.
It will be understood that the controller 196 need not be integral with the gantry housing 182 but may be separate therefrom. For example, the controller 196 may be a portion of the navigation system 10 that may include a processing and/or control system including a processing unit or processing system 198. The controller 196, however, may be integral with the gantry housing 182 and may include a second and separate processor, such as that in a portable computer.
[0072] The patient 14 can be fixed onto the operating table 24. According to one example, the table 24 can be an Axis Jackson® operating table sold by OSI, a subsidiary of Mizuho Ikakogyo Co., Ltd., having a place of business in Tokyo, Japan or Mizuho Orthopedic Systems, Inc. having a place of business in California, USA. Patient positioning devices can be used with the table and include a Mayfield® clamp or those set forth in commonly assigned U.S. Pat. Appl. No. 10/405,068 entitled “An Integrated Electromagnetic Navigation and Patient Positioning Device”, filed April 1, 2003 which is hereby incorporated by reference.
[0073] The position of the patient 14 relative to the imaging device 180 can be determined by the navigation system 10. The tracking device 163 can be used to track and locate at least a portion of the imaging device 180, for example the gantry housing 182. The patient 14 can be tracked with the dynamic reference frame 44, as discussed in Fig. 1, which may be invasive, non-invasive, or minimally invasive. That is, a patient tracking device or dynamic reference device 44 may be used to receive or generate signals that are communicated to an interface portion 99.
[0074] Accordingly, the position of the patient 14 relative to the imaging device 180 and relative to the registration device 18 of Fig. 1 can be determined initially and when movement, such as skull movement, is detected. Further, the location of the imaging portion can be determined relative to the housing 182 due to its precise position on the rail within the housing 182, substantially inflexible rotor, etc. The imaging device 180 can include a known positional accuracy and repeatability of within 10 microns, for example, if the imaging device 180 is an O-Arm® imaging device sold by Medtronic Navigation, Inc. having a place of business in Louisville, Colorado. Precise positioning of the imaging portion is further described in U.S. Patent Nos. 7,188,998; 7,108,421; 7,106,825; 7,001,045; and 6,940,941; all of which are incorporated herein by reference. According to various embodiments, the imaging device 180 can generate and/or emit x-rays from the x-ray source that propagate through the patient 14 and are received by the x-ray imaging receiving portion. The image capturing portion generates image data representing the intensities of the received x-rays. Typically, the image capturing portion can include an image intensifier that first converts the x-rays to visible light and a camera (e.g., a charge-coupled device) that converts the visible light into digital image data. The image capturing portion may also be a digital device that converts x-rays directly to digital image data for forming images, thus potentially avoiding distortion introduced by first converting to visible light.
[0075] Two dimensional and/or three-dimensional fluoroscopic image data that may be taken by the imaging device 180 can be captured and stored in the imaging device controller 196. Multiple image data taken by the imaging device
180 may also be captured and assembled to provide a larger view or image of a whole region of a patient 14, as opposed to being directed to only a portion of a region of the patient 14. For example, multiple image data of the patient’s 14 spine may be appended together to provide a full view or complete set of image data of the spine. Any one or more of these types of image data may be clinical image data.
[0076] The clinical image data can then be forwarded from the image device controller 196 to the navigation computer and/or processor system 198 that can be a part of a controller or workstation 28. It will also be understood that the clinical image data is not necessarily first retained in the controller 196, but may also be directly transmitted to the workstation 28. The workstation 28 can provide facilities for displaying the image data as an image 38 on the display 36, saving, digitally manipulating, or printing a hard copy image of the received image data. The user interface 40 allows the user 42 to provide inputs to control the imaging device 180, via the image device controller 196, or adjust the display settings of the display 36. The workstation 28 may also direct the image device controller 196 to adjust the image capturing portion of the imaging device 180 to obtain various two-dimensional images along different planes in order to generate representative two-dimensional and three-dimensional image data.
[0077] With continuing reference to FIG. 2C, the navigation system 10 can further include the tracking system including either or both of the electromagnetic (EM) localizer 48 and/or the optical localizer 48’. The tracking systems may include a controller and interface portion 99. The interface portion
99 can be connected to the processor system 198, which can include a processor included within a computer. The EM tracking system may include the STEALTHSTATION® AXIEM™ Navigation System, sold by Medtronic Navigation, Inc. having a place of business in Louisville, Colorado; or can be the EM tracking system described in U.S. Patent No. 7,751,865 issued July 6, 2010, and entitled "METHOD AND APPARATUS FOR SURGICAL NAVIGATION"; U.S. Patent No. 5,913,820, entitled “Position Location System,” issued June 22, 1999; and U.S. Patent No. 5,592,939, entitled “Method and System for Navigating a Catheter Probe,” issued January 14, 1997; all of which are herein incorporated by reference. It will be understood that the navigation system 10 may also be or include any appropriate tracking system, including a STEALTHSTATION® TREON® or S7™ tracking system having an optical localizer, which may be used as the optical localizer 48’, and sold by Medtronic Navigation, Inc. of Louisville, Colorado. Other tracking systems include acoustic, radiation, radar, etc. The tracking systems can be used according to generally known or described techniques in the above incorporated references. Details will not be included herein except when needed to clarify selected operation of the subject disclosure.
[0078] Wired or physical connections can interconnect the tracking systems 46, 46’, imaging device 180, etc. Alternatively, various portions, such as the instrument 168 may employ a wireless communications channel, such as that disclosed in U.S. Patent No. 6,474,341, entitled “Surgical Communication Power System,” issued November 5, 2002, herein incorporated by reference, as opposed to being coupled directly to the processor system 198. Also, the tracking devices
163, 166 can generate a field and/or signal that is sensed by the tracking system(s)
46, 46’.
[0079] Various portions of the navigation system 10, such as the instrument 168, and others as will be described in detail below, can be equipped with at least one, and generally multiple, of the tracking devices 166. The instrument can also include more than one type or modality of tracking device 166, such as an EM tracking device and/or an optical tracking device. The instrument 168 can include a graspable or manipulable portion at a proximal end and the tracking devices may be fixed near the manipulable portion of the instrument 168.
[0080] An additional representative or alternative localization and tracking system is set forth in U.S. Patent No. 5,983,126, entitled “Catheter Location System and Method,” issued November 9, 1999, which is hereby incorporated by reference. The navigation system 10 may be a hybrid system that includes components from various tracking systems.
[0081] Referring now to Figs. 3A-3D, one example of a process for acquiring registration images is illustrated. The patient 14, and in this case the head of the patient, is illustrated with a reference frame 44 affixed thereto. The reference frame 44 is shown alone in Fig. 3A and Fig. 3B. In Fig. 3B, the registration device 18 is shown in various positions. As discussed above, the registration device 18 may be or at least include a point gathering and/or determining system. For example, lidar, stereo-camera, depth camera, etc. The registration device 18 allows the generation of a mesh, point cloud and/or image frames of point clouds that may be stitched together.
[0082] In the first position, Scan 1, the registration device generates a plurality of points 310 corresponding to different points on the patient 14. A second set of points is generated by the registration device 18 as Scan n. Although two scans are illustrated in Fig. 3B, a plurality of scans at a plurality of angles at a plurality of speeds and other types of scanned parameters may be used. The registration device 18 is scanned along a path 314 to obtain a plurality of images. The points from the registration device, such as a handheld camera, are used to obtain stitched together registration images as mentioned above. Each of the scan frames or positions may allow for generating one or more points, such as via processing of a depth image or a lidar scan. Each point may have position data, such as x,y,z data, and additional data such as normals and color information. Each frame may have one or more points that match or overlay at least one point in another frame, i.e., have the same x,y,z data. Thus, different frames may be stitched together to form a registration image.
[0083] Various features may be identified in the registration image, such as the patient tracker 44. In various embodiments, facial recognition may be used to obtain key features of the image from a machine learning based classification technique. That is, key points from the image may be recognized using facial recognition as a means to segment the image, or a Convolutional Neural Network (CNN) or transformer-based neural network trained to identify and segment points belonging to the reference frame and patient head.
[0084] In Fig. 3C, the first plurality of points 310 and the second plurality of points 312 are communicated to the image processor 132. The image processor
132 stitches the plurality of images of points together to form a point cloud 320 illustrated in Fig. 3D. The plurality of points 310, 312 are point clouds. However, the point clouds from different scans are merged (e.g., stitched) in Fig. 3D to form registration data of a final registration point cloud. The stitching may be performed according to various embodiments, as discussed above. Regardless, the stitching allows a generation of a point cloud, as illustrated in Fig. 3D, that includes a selected volume or area, such as of the face 332, to perform a registration.
[0085] In one example, the registration device 18 allows a new registration image to be formed (e.g., acquired) every 30 milliseconds as the registration device is moved along the scan path 314. The image capture may be referred to as a snapshot or frames. Each of the points may have a “normal” associated therewith, thus allowing each of the points in the point cloud to have a position (i.e., x,y,z data) and an orientation relative to a surface via the normal. The stitching may choose a random snapshot as a reference snapshot. The set of snapshots may be separated into a number of different groups, G, wherein the number G is any appropriate number and may be based on a total number of snapshots acquired, volume imaged, etc. Each snapshot in the groups G may be added to the original reference frame. Additionally or alternatively, according to various embodiments, adjacent snapshots may be stitched together in sequence, when the snapshots were acquired sequentially in time and space around the patient. Thus, each of the snapshots is stitched to a previous snapshot and may be mathematically adjusted to fit together.
[0086] Referring now to Figs. 3E and 3F, registration features or landmarks such as various points 320 may be manually or automatically identified by the comparison module 136 in an image on the patient 14. In a manual configuration, a user may move a tracked probe or member to track or locate a pose of various points on the patient and/or in an image (e.g., move a pointer with a mouse). In an automatic configuration, instructions may be executed by a processor, such as in the comparison module 136, to identify the points 320. This may be based on trained machine learning systems, selected algorithms, image segmentation, including those discussed herein.
[0087] The features or points 320 may include anthropometric locations such as head points like the edges of the eyes (eye points), the position of the ear lobes (ear points), chin (chin points), mouth (mouth points), nose (nose points) and various other locations. In Fig. 3E, segments 322 between points 320 may be used for comparison with measurements in the clinical image data. The points 320 and/or the distances 322 may be used to determine a registration. Points may include a predetermined number of landmark features such as the nasion 344A, the eyes 344B, the brow 344C and the tip of the nose 344D. As discussed above, the points in the pre-acquired image may be compared to the points 320, the distances 322, and/or the distance of the image sensor 154 to make a registration. Fig. 3E represents the entire face and possible landmark features, such as defined by points, therein being used in the registration process.
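As an illustration of using the segments 322 for comparison, the following sketch computes inter-landmark distances in each image and checks that they agree within a tolerance; the landmark names, coordinates, and 2 mm tolerance are hypothetical examples, not values from the system.

```python
# Compare inter-landmark distances between registration and clinical images.
import numpy as np

def segment_lengths(landmarks):
    """landmarks: dict name -> (x, y, z). Returns dict of pairwise distances."""
    names = sorted(landmarks)
    return {(a, b): float(np.linalg.norm(np.subtract(landmarks[a], landmarks[b])))
            for i, a in enumerate(names) for b in names[i + 1:]}

def distances_match(reg_landmarks, clin_landmarks, tol_mm=2.0):
    reg, clin = segment_lengths(reg_landmarks), segment_lengths(clin_landmarks)
    return all(abs(reg[k] - clin[k]) <= tol_mm for k in reg if k in clin)

# Hypothetical landmark coordinates in millimetres.
reg = {"nasion": (0, 0, 0), "eye_l": (-32, 5, -10), "eye_r": (33, 5, -10), "nose_tip": (0, -45, 20)}
clin = {"nasion": (0, 0, 0), "eye_l": (-31, 5, -10), "eye_r": (33, 6, -10), "nose_tip": (0, -44, 20)}
print(distances_match(reg, clin))   # True within a 2 mm tolerance
```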
[0088] Based on the position of the points, the measurements, or both, the comparison module 136 may generate a signal indicative of whether or not a
registration is possible and/or is made between the patient space and the image space. During a procedure, the user 42 may block a portion of points or measurements from view of the camera or image sensor 154. For example, a certain number of points may be identified in the clinical image data and the same or all of the points may be identified on the patient 14 in the registration image data from the image sensor 154. In some configurations, only points a predetermined distance from a reference frame and only a non-moving portion (sub-portion) of a face may be used. This will be described in more detail in Fig. 3H. The points in the registration image and their physical position relative to the patient 14 in patient space may be determined by a distance measured by the distance sensor 156 and/or with the registration image data. Further, the position relative to the patient 14 may be further determined due to movement of the motors associated with the actuators 152 that may be moved by electric signals that allow the position controller 124 of Fig. 2A to control the precise position of the registration device 18. For example, the motor may be rotated a predetermined number of rotations based upon feedback provided by a position signal from the position sensors 150 which may be a potentiometer, an encoder, or part of the motor as a servo motor.
[0089] Referring now to Figs. 3G and 3H, the scan may take place by scanning, with the registration device 18, the face 332 of the patient that includes one or more landmarks 336, moving the registration device 18 relative to the face, such as around and/or along a longitudinal axis 334 (e.g., Fig. 3B). In Fig. 3G, landmark points 336 are recorded by the registration image. Ultimately, the data points determined from the scan and/or the scanned image are provided to the
comparison module 136 where they are compared to the clinical image and/or the points determined therefrom. The registration device 18 may use any appropriate wavelength, including one or more wavelengths, such as infrared light, visible light, or both, to obtain the image of the subject that may include positions of anthropometric points that are unique to each patient 14, such as the patient face. According to various embodiments, the anthropometric points may be the landmark points 336 and may be determined in any appropriate manner relative to the face 332, such as from the top to the bottom of the head of the patient 14. Scanning with the registration device 18 may take place in less than 10 seconds and may work at various distances from the head, such as around one meter.
[0090] In Fig. 3H (and discussed further herein with reference to Figs. 3J and 3K), in some examples, not all of the landmark points 336 may be considered by the registration system 16 by cropping the registration image and/or the clinical image strategically. According to various embodiments, the reference frame 44 may be mounted at a fixed position within a boundary 340 and points related to the reference frame 44 may be used for the registration. This cropping of points from an image, such as a registration image, may allow for faster or more efficient registration. The boundary 340 may be a circle or other shape a predetermined distance Di from the reference frame 44. The distance Di may be chosen so that non-overlapping areas are considered, as well as positions in areas that are not likely to move. According to various embodiments, a face area 342 may be defined as a boundary for non-moving positions. For example, the area 342 may extend a predetermined distance from the nasion 344A (nasion point). The area 342 may
be a circle, oval or other shape to include a predetermined number of features such as the nasion 344A, the eyes 344B (eye point(s)), the brow 344C (brow point(s)) and the tip of the nose 344D (nose point). The same features and same boundaries may be used for images from both the registration device 18 and from the pre-operative images.
[0091] In one example, the reference frame 44 may be used for registration. In such an instance, frame initialization may occur. The frame initialization input may be a single point on the reference frame that lies on a flat plane of a radius that may be determined based on the reference frame size, together with the normal of the surface at that point; this point may be called the Finit point. The Finit point may be automatically and/or manually detected in the scan and is predefined in a model of the reference frame. In the point cloud scan, a sphere around the Finit point is cropped from the rest of the scan mesh or points. The size of the sphere can be predetermined and/or manually selected based on the reference frame size and/or training of a machine learning system. This results in a segmented registration image having a sphere of point cloud data of a selected radius centered on the Finit point. The selected radius, again, may be based on the size of the reference frame 44. The segmented reference frame scan and the model of the reference frame are aligned so that the Finit points in each space match. Further, a normal of the scan is matched to a normal of the model of the reference frame (that is, perpendicular to the CAD model’s surface). The registration image of the segmented reference frame is registered to the model using iterative closest point-to-plane. The segmented reference frame may then be rotated 36 degrees and
registered again to the model. This is repeated a number of times, such as 10, with the best fit registration chosen as the final orientation for registration.
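A sketch of this frame initialization sequence is shown below, assuming the Open3D library for the iterative closest point-to-plane registration; the sphere radius, correspondence distance, and the way the initial normal alignment is folded into the rotational sweep are simplifying assumptions rather than the exact procedure.

```python
# Sketch: crop a sphere around Finit, sweep 36-degree orientations, keep the best ICP fit.
import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation

def crop_sphere(pcd, center, radius):
    pts = np.asarray(pcd.points)
    idx = np.where(np.linalg.norm(pts - center, axis=1) <= radius)[0]
    return pcd.select_by_index(idx.tolist())

def init_frame_registration(scan, model, finit_scan, finit_model, normal_model, radius):
    """scan/model: Open3D point clouds; normal_model: unit normal at the model Finit point."""
    if not model.has_normals():
        model.estimate_normals()                         # point-to-plane ICP needs target normals
    seg = crop_sphere(scan, finit_scan, radius)          # sphere of scan points around Finit
    best = None
    for k in range(10):                                  # 36-degree sweep, 10 orientations
        R = Rotation.from_rotvec(normal_model * np.deg2rad(36 * k)).as_matrix()
        init = np.eye(4)
        init[:3, :3] = R
        init[:3, 3] = finit_model - R @ finit_scan       # bring the Finit points into coincidence
        result = o3d.pipelines.registration.registration_icp(
            seg, model, radius * 0.1, init,              # placeholder correspondence distance
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        if best is None or result.fitness > best.fitness:
            best = result                                # keep the best-fitting orientation
    return best.transformation
```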
[0092] In a similar manner, a head or face registration initialization may occur. In one example, the head initialization may take as an input a location of the nasion in the clinical image and in the registration image. The clinical image nasion may be determined automatically by selected known segmentation techniques such as facial landmark detection algorithms, CNN-based point cloud segmentation, or a transformer-based neural network. The registration image nasion point is determined automatically by a selected face detection algorithm based on CNNs, transformer-based neural networks, and deep learning methods such as YOLO (you-only-look-once) algorithms, feature detection and matching on edges, Haar wavelets, etc. Neural networks and deep learning may be a technique used in segmentation. In segmentation of a whole there may be part-segmentation and key-point detection (i.e., in a segmented portion) that may use deep learning and neural networks to perform the segmentation and detection. According to various embodiments, a curvature matching system, such as a machine learning trained system (e.g., a neural network), may be used to identify the nasion. A selected curvature measured over a selected distance may be identified in the registration image as the nasion.
[0093] Segmentation may take place in two stages. Stage (1) may provide segmentation of a whole skin surface or anatomical mask (e.g., "face"), while stage (2) identifies certain regions or selected point(s) within the identified face (e.g., "nose", "nasion point").
[0094] A sphere of a selected radius is cropped around the selected point, such as the nasion, and may be called a Hinit point. This cropping or segmentation may remove excess noise and the reference frame. A selected process, such as Coherent Point Drift (CPD), is used to register the registration image of the head and the clinical image of the head segmentation together. Similar to the reference frame initialization process, the registration image may be rotated 36 degrees around the Hinit point, with a registration performed at each iteration, and the best match is chosen for the registration.
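The following sketch outlines the head initialization, assuming the pycpd package for Coherent Point Drift and a simple nearest-neighbor inlier fraction as the match score; the crop radius, sweep axis, tolerance, and scoring are placeholder choices, and nasion detection itself is not shown.

```python
# Sketch: crop spheres around the nasion (Hinit), sweep 36-degree starts, keep the best CPD fit.
import numpy as np
from pycpd import RigidRegistration
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def crop(points, center, radius):
    return points[np.linalg.norm(points - center, axis=1) <= radius]

def inlier_fraction(moved, target, tol):
    """Fraction of moved points with a target neighbour within tol (match score)."""
    d, _ = cKDTree(target).query(moved)
    return float(np.mean(d <= tol))

def init_head_registration(reg_points, clin_points, nasion_reg, nasion_clin,
                           radius=80.0, axis=np.array([0.0, 0.0, 1.0]), tol=2.0):
    reg_seg = crop(reg_points, nasion_reg, radius) - nasion_reg      # Hinit-centred crops
    clin_seg = crop(clin_points, nasion_clin, radius) - nasion_clin
    best_score, best_fit = -1.0, None
    for k in range(10):                                              # 36-degree sweep
        R0 = Rotation.from_rotvec(axis * np.deg2rad(36 * k)).as_matrix()
        moving = reg_seg @ R0.T
        TY, (s, R, t) = RigidRegistration(X=clin_seg, Y=moving).register()
        score = inlier_fraction(TY, clin_seg, tol)
        if score > best_score:
            best_score, best_fit = score, (R0, s, R, t)              # composition left to the caller
    return best_fit, best_score
```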
[0095] Referring now to Fig. 3I, a high-level method 346 is illustrated in a flow chart for operating the system described above. In block 348, a clinical image is obtained using one of the systems described above. The clinical image data obtained in block 348 may have one or more landmark points identified therein. As discussed above, the clinical image data may identify a whole skin anatomical mask portion or segment in an optional first stage. The skin surface or anatomical mask may then be segmented therein. Landmarks may be identified thereon. In block 352, the registration system 16 may be initiated, for example by a user interface to start the system. The user interface may be a remote control from which signals are communicated through the network 112 illustrated in Fig. 2A. However, a direct wire or wireless communication may be used to initiate operation of the system through the registration controller 20 through the user interface 142 and/or the workstation 28.
[0096] In block 354, the registration device may be positioned, such as selectively aligned or scanned relative to a portion of the patient. In various
embodiments, the motors of the actuators are moved to move the registration device 18. As discussed above, however, the registration device 18 may also be moved manually by a user.
[0097] In block 358, registration images or data of the patient are obtained. The registration data may be processed to generate a point cloud of data. For example, registration image data of certain landmark features such as the position of bones, the nasion, physical features (e.g., corner of an eye), and distances between physical features (e.g., distance between two corners of two eyes) are obtained. Facial features and their relationships are illustrated in Figs. 3A to 3G. In block 360, the registration images may, optionally, be stored in the memory 138.
[0098] A comparison is made in block 362 between registration data corresponding to the registration features, such as the registration distances in the registration images, and the same features in the clinical images. The comparison determines whether the registration features of the clinical and registration images correlate and a registration of the patient space of the patient 14 and the clinical image space is possible and/or has occurred. When correlation is successful, in block 364, block 366 may generate an audible and/or visual indicator by way of the display device 36 or the audible device 37 to indicate or provide an indication to the user 42 that correlation has been successful. The correlation may be used to generate the transformation map, as discussed above, to allow a registration of the patient space to the image space. This registration may be output in block 370. In block 364, when the correlation is unsuccessful, a second audible and/or visual
indicator 368 may be used to indicate to the user 42 that the correlation is not successful. This may result in corrective measures such as moving the patient or the registration device 18 to acquire a second registration image. The error of the procedure may be checked and if the error is not within the defined threshold, re-registration may be performed. It is understood that the indication signals are optional in the registration process.
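For illustration, a residual-error check corresponding to blocks 362-370 might look like the following sketch; the 2 mm threshold and the returned indicator strings are arbitrary examples rather than values defined by the system.

```python
# Check registration quality by the residual error over the registration features.
import numpy as np

def registration_error(T, patient_points, image_points):
    """RMS distance (same units as the points) after mapping patient -> image space."""
    homog = np.c_[patient_points, np.ones(len(patient_points))]
    mapped = (homog @ T.T)[:, :3]
    return float(np.sqrt(np.mean(np.sum((mapped - image_points) ** 2, axis=1))))

def check_registration(T, patient_points, image_points, threshold_mm=2.0):
    err = registration_error(T, patient_points, image_points)
    if err <= threshold_mm:
        return "registration successful", err     # e.g., drive the success indicator (block 366)
    return "re-registration required", err        # e.g., drive the failure indicator (368)
```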
[0099] The process illustrated in Figs. 3A-3H allows complete replacement of manual registration with the automatic registration described herein using an electromagnetic tracking system or any appropriate tracking system. That is, the registration, except for the initiation process such as acquiring the registration images by moving the registration device, may be automated. An increase in speed, and possibly an elevated level of accuracy of registration, may be obtained, along with allowing the user who formerly performed manual registration to perform other tasks. In one exemplary embodiment, the registration was significantly faster than a manual process. The system may allow the registration to be performed automatically, is easily controlled remotely, and preventive maintenance on such systems is relatively easy.
[00100] Referring now to Figs. 3J and 3K, a raw or complete image 380A before segmentation is illustrated. The image 380A may be generated with the registration device and be a complete or whole registration image. The image 380A may be a mesh from which points are generated and/or be defined by the points of a point cloud. According to various embodiments, the registration device may generate a mesh and the registration device output may be a 3D scanner output
(mesh) that is converted to the point cloud. A selected system, such as a CNN, may be trained, as discussed herein, to identify and segment points belonging to the head and the patient reference frame. When registering the frame, the segmented portion may include only detected reference frame points, or sub-sampled points within a known radius around the detected frame, while other points in the point cloud are ignored. As noted above, with reference to Fig. 3H, the radius may be Di and this may allow cropping the image, including the point cloud. According to various embodiments, when registering the patient, only patient head points, or sub-sampled points within a known radius around the detected head, may be used, with other points in the point cloud ignored. Alternatively, both may be used and be segmented from other points, such as background points (e.g., a patient support or patient holding portion 381).
[00101] In Fig. 3K, an image 380B, such as a point cloud, segmented into a head region 382 and an optical tracking device image 384 is set forth. The optical tracking device image 384 could be, however, any appropriate patient reference such as the reference 44 discussed above (e.g., EM, optical, acoustic, etc.). The head region 382 and the optical tracking device image 384 may be segmented from the other portions, such as a black portion 386, as discussed herein. The classifier may be appropriately trained (e.g., a CNN) or execute a selected and appropriate algorithm. The segmentation process, however, allows at least some of the data points to be ignored in the registration process. In other words, the black portion 386 is discarded in the segmentation as not relevant for the registration. The process of Fig. 3I may be applied to the relevant segmented portions, such as for registration.
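As a sketch of how per-point segmentation labels might be applied before registration, the following keeps only head and reference frame points (optionally restricting frame points to a known radius Di) and discards background points; the label encoding and radius handling are assumptions for illustration, not the system's actual data layout.

```python
# Apply per-point labels predicted by the trained classifier to crop the point cloud.
import numpy as np

BACKGROUND, HEAD, FRAME = 0, 1, 2    # hypothetical label encoding

def apply_segmentation(points, labels, frame_radius_di=None):
    """points: (N, 3); labels: (N,). Drops background points (e.g., region 386)."""
    keep = labels != BACKGROUND
    if frame_radius_di is not None:
        # Assumes at least one point was labelled as reference frame.
        frame_center = points[labels == FRAME].mean(axis=0)
        near_frame = np.linalg.norm(points - frame_center, axis=1) <= frame_radius_di
        keep &= (labels == HEAD) | near_frame      # keep frame points only within Di
    return points[keep], labels[keep]
```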
[00102] Referring now to Fig. 4, a method for training the trained classifier 132A of the image processor 132 of Fig. 2A is set forth in the process 410. The trained classifier may be a machine learning process, such as a convolutional neural network (CNN) or a transformer-based neural network. In block 412, a plurality of training set images is generated or provided to the system to train the system to recognize points in the registration image. The training data may relate to both clinical images and registration images. In various embodiments, a trained classifier may be provided for both the clinical images and the registration images, or separate trained classifiers may be provided for each. A plurality of training sets has a plurality of images along with a correct identification of various features therein. For example, the training sets have known landmarks, selected radii, and a classification of each point as belonging to at least one of the patient head or the patient reference frame; points may also be identified as background therein. In general, after enough training sets are provided, proper identification is achieved in acquired registration images via automatic identification with the trained classifier during a procedure. In block 414, the input images are provided to the trained classifier. In block 416, an output of the trained classifier of test images is compared to the known output and/or identified by an expert (e.g., surgeon) provided with the trained images and/or separate therefrom. In block 418, classifier weights W are adjusted based on the comparison. For example, weights of the CNN or transformer-based neural network may be adjusted to achieve a selected or “known” identification of selected features in input images, such as input registration images.
[00103] In block 420, it is determined whether the accuracy of the training and the output is within an acceptable range. When the accuracy is not within an acceptable range in block 420, blocks 412-418 are repeated. When the accuracy is acceptable in block 420, training is ended in block 422. The trained classifier may then be stored and/or accessed to identify selected features in input images, such as during a procedure.
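For illustration, a minimal training-loop sketch corresponding to blocks 412-422 is set forth below; it assumes a PyTorch point classifier and labeled training point clouds, and all names (e.g., the model and data-loader objects) are hypothetical rather than part of the disclosed system.

```python
import torch
import torch.nn as nn

def train_classifier(model: nn.Module, train_loader, val_loader,
                     target_accuracy: float = 0.95, max_rounds: int = 100):
    """Adjust classifier weights (block 418) until accuracy is acceptable (block 420)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()  # classes: head, reference frame, background

    for _ in range(max_rounds):
        model.train()
        for points, labels in train_loader:           # block 414: provide input images
            logits = model(points)                     # forward pass on point data
            loss = loss_fn(logits, labels)             # block 416: compare to known output
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                           # block 418: adjust weights W

        model.eval()
        correct = total = 0
        with torch.no_grad():
            for points, labels in val_loader:
                preds = model(points).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.numel()
        if correct / total >= target_accuracy:         # block 420: accuracy acceptable?
            break                                      # block 422: end training
    return model
```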
[00104] For example, an output of the trained classifier may be a cloud of landmark points or a segmentation of patient head and reference frame from the rest of the point cloud that are identified in obtained registration images and/or the clinical images. The two sets of points may be correlated to perform a registration of the image space of the clinical images and the patient space identified with the registration images. The registration allows a transformation map between the two spaces.
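A minimal sketch of computing such a transformation map from corresponded point sets is shown below, assuming paired landmark points have already been extracted from the registration and clinical point clouds; this is one conventional least-squares (SVD-based) rigid fit and not necessarily the specific method of the disclosure.

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform mapping src points onto dst points.

    src, dst : (N, 3) arrays of corresponding points, e.g., registration-space
               landmarks and clinical-image-space landmarks.
    Returns a rotation R (3x3) and translation t (3,) such that dst ≈ src @ R.T + t.
    """
    src_centered = src - src.mean(axis=0)
    dst_centered = dst - dst.mean(axis=0)
    H = src_centered.T @ dst_centered
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t
```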
[00105] Referring now to Fig. 5, a method of performing registration for a patient is set forth in more detail. In block 510, the patient is positioned in a fixed location, such as on an operating table or bed. The skin tone, age, sex, facial hair and the like may be accounted for when the registration imaging takes place. For example, the lighting, the scanner position and the camera settings may be varied based upon the above.
[00106] In block 512, the reference frame, such as the electromagnetic (EM) reference frame 44 described above, is affixed to the patient. The reference frame may be affixed, such as with a threaded fastener, stuck, or taped, to the skull of a patient. In block 514, the registration model and nasion position may be
detected in the registration exam based on the skin segmentation model using an algorithm such as facial landmark detection, feature detection, or a machine learning based method. Instructions may be provided for manually holding a handheld registration device 18. However, the position of the reference frame and the scanner may be varied. It should be noted that automated scanning or robotic scanning may also or alternatively be performed. The amount of scanning and the length of time for scanning may vary depending on the equipment (e.g., scanner) used. A fixed or random path may be used during scanning. For example, when a suitable amount of data is obtained, the scan may be stopped. In block 516, the scanner may be held at a predetermined distance during scanning. For a manual scanner, block 518 has the operator, such as a surgeon, hold a button and perform scanning. The scanning may be performed so that registration images at different positions may be obtained. Video may be used to obtain multiple registration images, or individual registration images may be used in block 520. Sweeps of the scanner may be performed at different distances from, and speeds relative to, the patient part being scanned. More regular speeds may be provided by an instrument, such as the moveable arms in Fig. 2C. However, manual scanning may also be provided.
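As an illustration only, a sketch of locating the nasion in a registration point cloud from detected facial landmarks follows; the landmark detection step itself is abstracted away, and the approach shown (midpoint of the inner eye corners snapped to the nearest scanned point) is an assumed heuristic, not necessarily the algorithm of block 514.

```python
import numpy as np

def estimate_nasion(points: np.ndarray, left_inner_eye: np.ndarray,
                    right_inner_eye: np.ndarray) -> np.ndarray:
    """Estimate the nasion as the scanned point closest to the midpoint
    of the inner eye corners.

    points : (N, 3) registration point cloud.
    left_inner_eye, right_inner_eye : (3,) eye-corner landmarks from a
        facial-landmark or feature-detection step (assumed to be available).
    """
    midpoint = (left_inner_eye + right_inner_eye) / 2.0
    nearest_idx = np.argmin(np.linalg.norm(points - midpoint, axis=1))
    return points[nearest_idx]
```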
[00107] In block 522, various features, such as the face, in certain examples, and a patient tracker, such as the reference frame 44, may be captured. For the face, various features may be recognized such as the eye sockets, nose, forehead, and nasion. As described in more detail below, certain features within boundaries may be obtained. In block 524, various processing of the images, such as generating the points to be used in registration, may be performed. Ultimately, registration in block 526 may be performed as discussed below in further detail.
[00108] Referring now to Fig. 6, the processing block 524 is illustrated in further detail. Inputs are provided from user facing actions 612. The user facing actions 612 may include scanning the face and reference frame in block 614 to capture the registration image and loading CT or other types of clinical images from a clinical application in block 616. Two different inputs and related flow paths are performed to achieve registration in block 526. The scanned face and reference frame as registration images of block 614 are provided to block 620 which detects the nasion from the registration images. Likewise, in block 622, the reference frame is detected in the received registration images from the block 614. In block 624, the registration images may be cropped and further denoised mathematically and/or according to the processes as discussed above. Details of the cropping process based on boundaries is described in Fig. 7. In block 626, the processed registration images are stitched together using point cloud stitching. The stitched point cloud becomes a registration point cloud that is used for registration. The process of point cloud stitching was illustrated and described above in Figs. 3A-3D. Ultimately, point cloud stitching uses multiple registration images and coordinates the points therefrom to find a final point cloud of registration points.
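By way of example, a minimal pairwise stitching sketch using the open-source Open3D library is shown below; the use of ICP, the correspondence distance, and the voxel size are assumptions for illustration and are not necessarily the stitching method of block 626.

```python
import numpy as np
import open3d as o3d

def stitch_point_clouds(point_arrays, max_corr_dist=0.01, voxel_size=0.002):
    """Stitch several registration scans into one registration point cloud.

    point_arrays : list of (N_i, 3) NumPy arrays, one per registration image.
    Each new scan is aligned to the growing cloud with point-to-point ICP and merged.
    """
    clouds = []
    for arr in point_arrays:
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(arr)
        clouds.append(pcd)

    combined = clouds[0]
    for pcd in clouds[1:]:
        result = o3d.pipelines.registration.registration_icp(
            pcd, combined, max_corr_dist, np.identity(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        pcd.transform(result.transformation)   # bring this scan into the common frame
        combined += pcd
        combined = combined.voxel_down_sample(voxel_size)  # light denoising/merging
    return combined
```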
[00109] From the CT image or other type of clinical image obtained in block 616, block 630 builds a registration model or segmented clinical image from the clinical image. Binary segmentation may be used to form the registration model.
Skin segmentation may be performed on the clinical image data. The skin segmentation may be performed because the registration device 18 may generate data, such as a point cloud, of a surface of the patient that relates to the skin. Skin segmentation may be performed according to any appropriate technique and may be used to generate a mesh from which a point cloud may be generated. Ultimately, the one or more types of segmentation form a segmented clinical image.
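A minimal sketch of building a skin-surface point cloud from a CT volume follows, assuming a simple Hounsfield threshold for the air/skin boundary and using marching cubes; the threshold value and the use of mesh vertices as the clinical point cloud are illustrative assumptions rather than the specific segmentation of block 630.

```python
import numpy as np
from skimage import measure

def ct_skin_point_cloud(ct_volume: np.ndarray, voxel_spacing=(1.0, 1.0, 1.0),
                        skin_threshold: float = -300.0) -> np.ndarray:
    """Build a skin-surface point cloud from a CT volume.

    ct_volume : 3D array of Hounsfield units.
    skin_threshold : assumed air/skin boundary value (illustrative only).
    A binary segmentation at the threshold is converted to a mesh with marching
    cubes, and the mesh vertices serve as the clinical point cloud.
    """
    verts, faces, normals, values = measure.marching_cubes(
        ct_volume, level=skin_threshold, spacing=voxel_spacing)
    return verts  # (M, 3) surface points in scanner (image-space) coordinates
```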
[00110] In block 632, the nasion and/or other facial features may be determined from the registration model. Likewise, the registration model may also be converted into the operating space in block 634. Ultimately, the registration model is provided to block 526 where registration between the two sets of point clouds is determined. The registration point cloud derived from block 626 and the clinical point cloud from the clinical images are compared. When the two sets of point clouds are registered, an error metric or pass/fail indicator is provided in block 640; when the error metric is within a predetermined range, the registration is determined to be successful. In addition to, or instead of, providing an error metric, a pass/fail indicator, such as a visual or audible indicator, may be provided.
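For illustration, a sketch of one possible error metric and pass/fail check for block 640 is shown below; the root-mean-square nearest-neighbor distance and the 2 mm threshold are assumptions, not values stated in the disclosure.

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_error(registered_points: np.ndarray, clinical_points: np.ndarray,
                       tolerance_mm: float = 2.0):
    """Compute an RMS surface-distance error metric and a pass/fail indicator.

    registered_points : (N, 3) registration point cloud after applying the
        computed transformation into image space.
    clinical_points : (M, 3) point cloud from the segmented clinical image.
    """
    tree = cKDTree(clinical_points)
    distances, _ = tree.query(registered_points)          # nearest-neighbor distances
    rms_error = float(np.sqrt(np.mean(distances ** 2)))
    return rms_error, rms_error <= tolerance_mm           # (error metric, pass/fail)
```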
[00111] Referring now to Fig. 7, details of the cropping block 624 are illustrated. The cropping feature may crop, together and/or separately, landmarks of the patient, such as the face (e.g., nasion), and the reference frame. The two processes are illustrated as sub-processes 624a, 624b. The registration may occur with only one or both being identified in the registration images and the clinical images. According to various embodiments, identifying the reference frame and separating it from the face or head may improve the efficiency and/or speed of the registration.
[00112] In sub-process 624a, in block 710, the reference frame is detected. In block 712, a reference frame boundary is generated around the reference frame to be large enough to incorporate the desired reference frame points therein. As discussed above, the boundary may be a selected or set radius from a point of the reference frame. The boundary may be two dimensional or three dimensional. The area or volume within the boundary may then be cropped, at least in the registration image. The cropped portion may then be used for the registration process. Thus, ultimately, the reference frame is registered to the clinical images by including a plurality of reference frame points within a predetermined distance of the reference frame.
[00113] In sub-process 624b, in block 720, the face is detected. The face may be detected by edges or predetermined features such as the eyes, nasion, or nose, or by using CNN-based (or transformer-based) neural network methods to identify relevant landmarks or to segment the patient face or head from the point cloud. A face boundary is determined in block 722. The face boundary was briefly mentioned above. The face boundary is generated to provide an area or volume over which points (which may be referred to as head points) for registration may be used, such as of a face of the patient. The boundary may be from one or more identified landmarks, such as a nasion in the image. The boundary may be chosen to provide points of one or more non-moving areas of the face. For example, points in and around the chin may not be chosen because they may move during the
operating procedure. The area or volume within the boundary may then be cropped, at least in the registration image. The cropped portion may then be used for the registration process.
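As an illustration only, a sketch of one way to form such a face boundary is given below; treating the boundary as a sphere of a radius around the nasion and additionally discarding points more than a fixed distance below the nasion (to exclude the mobile chin region) is an assumed heuristic, not the specific boundary of block 722.

```python
import numpy as np

def crop_face_points(points: np.ndarray, nasion: np.ndarray,
                     radius: float = 0.10, max_below_nasion: float = 0.04) -> np.ndarray:
    """Keep head points inside a boundary around the nasion, excluding the
    mobile lower face.

    points : (N, 3) registration point cloud (meters, assumed).
    nasion : (3,) detected nasion landmark.
    radius : assumed spherical boundary radius around the nasion.
    max_below_nasion : assumed cutoff below the nasion, approximately excluding
        the chin region, which may move during the procedure.
    """
    within_radius = np.linalg.norm(points - nasion, axis=1) <= radius
    # Assumes the vertical axis is the last coordinate; adjust for the actual scanner frame.
    not_too_low = points[:, 2] >= nasion[2] - max_below_nasion
    return points[within_radius & not_too_low]
```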
[00114] After block 722, block 724 registers the face and the points in the boundary. The face points in the boundary may be cropped for an efficient and/or faster registration.
[00115] Accordingly, the cropping process provided in block 624 is used in the registration block 526 as mentioned above. The cropped points may allow for a reduced number of points for registration, rather than an entire point set from an image. Further, the cropping may allow for registration of a selected portion. For example, the reference frame may be cropped and used for registration without using other points in the image. Similarly, points of a face may be cropped separate from other portions, such as the reference frame for registration. Thus, the registration process may be applied to a sub-set of points from the images, such as only the reference frame, only face points, etc. It is understood by one skilled in the art, however, that all or selected multiple sets of points may be used for registration.
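A short sketch of assembling such a sub-set of points for registration follows; the helper name and the option flags are hypothetical, and the cropped subsets referenced are those produced by the illustrative sketches above.

```python
import numpy as np

def build_registration_subset(frame_points: np.ndarray, face_points: np.ndarray,
                              use_frame: bool = True, use_face: bool = True) -> np.ndarray:
    """Combine cropped point subsets (reference frame and/or face) for registration,
    rather than passing the entire point cloud to the registration step."""
    subsets = []
    if use_frame:
        subsets.append(frame_points)
    if use_face:
        subsets.append(face_points)
    if not subsets:
        raise ValueError("At least one subset must be selected for registration.")
    return np.vstack(subsets)
```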
[00116] The systems and methods illustrated above allow the placement of manual registrations and allow an automatic and continuous, highly accurate registration, which may be compared to manual registrations. The entire automated registration may take about 20 to 30 seconds. Because the device may be remotely controlled, the system may be actuated from anywhere in the world.
[00117] Example embodiments are provided so that this disclosure will be thorough and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.
[00118] Instructions may be executed by a processor and may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, data structures, and/or objects. The execution of the instructions may be substantially automatic, such as with the processor, once a selected input or data is received. Thus, a user may not or need not provide multiple inputs for a process or outcome to occur. The term shared processor circuit encompasses a single processor circuit that executes some or all code from multiple modules. The term group processor circuit encompasses a processor circuit that, in combination with additional processor circuits, executes some or all code from one or more modules. References to multiple processor circuits encompass multiple processor circuits on discrete dies, multiple processor circuits on a single die, multiple cores of a single processor circuit, multiple threads of a single processor circuit, or a combination of the above. The term shared memory circuit encompasses a single memory circuit that stores some or all code from
multiple modules. The term group memory circuit encompasses a memory circuit that, in combination with additional memories, stores some or all code from one or more modules.
[00119] The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general purpose computer to execute one or more particular functions embodied in computer programs. The computer programs include processor-executable instructions that are stored on at least one non-transitory, tangible computer-readable medium. The computer programs may also include or rely on stored data. The computer programs may include a basic input/output system (BIOS) that interacts with hardware of the special purpose computer, device drivers that interact with particular devices of the special purpose computer, one or more operating systems, user applications, background services and applications, etc.
[00120] The computer programs may include: (i) assembly code; (ii) object code generated from source code by a compiler; (iii) source code for execution by an interpreter; (iv) source code for compilation and execution by a just-in-time compiler; (v) descriptive text for parsing, such as HTML (hypertext markup language) or XML (extensible markup language), etc. As examples only, source code may be written in C, C++, C#, Objective-C, Haskell, Go, SQL, Lisp, Java®, Javascript®, HTML5, Ada, ASP (active server pages), Perl, Scala, Erlang, Ruby, Flash®, Visual Basic®, Lua, or Python®.
[00121] The wireless communications described in the present disclosure can be conducted in full or partial compliance with IEEE standard 802.11-2012, IEEE standard 802.16-2009, and/or IEEE standard 802.20-2008. In various implementations, IEEE 802.11-2012 may be supplemented by draft IEEE standard 802.11ac, draft IEEE standard 802.11ad, and/or draft IEEE standard 802.11ah.
[00122] A processor or module or ‘controller’ may be replaced with the term ‘circuit.’ The term ‘module’ may refer to, be part of, or include: an Application Specific Integrated Circuit (ASIC); a digital, analog, or mixed analog/digital discrete circuit; a digital, analog, or mixed analog/digital integrated circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor circuit (shared, dedicated, or group) that executes code; a memory circuit (shared, dedicated, or group) that stores code executed by the processor circuit; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip.
[00123] The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the invention, and all such modifications are intended to be included within the scope of the invention.
Claims
1. A method comprising: receiving registration images of a patient from a registration device; detecting at least one of head points and reference frame points from the images using facial recognition; generating registration data for the patient based on the head points and the reference frame points; segmenting a clinical image to generate clinical head points; comparing the registration data to the clinical image; and wherein comparing the registration data to the clinical head points is operable to register a physical space to an image space of the clinical image.
2. The method of claim 1 wherein generating the registration data comprises detecting the head points in the registration image and generating registration data within a first boundary of the reference frame and a second boundary of a predetermined head point.
3. The method of claim 2 further comprising determining the first boundary by generating the first boundary a first predetermined distance from the reference frame.
4. The method of claim 1 wherein generating the registration data comprises generating a nasion point and the reference frame point.
5. The method of claim 1 wherein generating the registration data comprises generating registration data from the registration device mounted on a movable arm.
6. The method of claim 1 wherein generating the registration data comprises generating a point cloud of registration points generated from data collected with the registration device.
7. The method of claim 1 wherein generating the registration data comprises generating a final point cloud of registration points formed from point clouds generated from a plurality of the registration images.
8. The method of claim 1 wherein generating the registration data comprises generating a final point cloud of registration points formed from point clouds generated from a plurality of the registration images that are stitched together.
9. The method of claim 1 wherein detecting head points comprises generating a nasion point and at least one of an eye point, a tip of nose point, a brow point or a reference frame.
10. The method of claim 1 wherein generating the registration data comprises generating registration data using a trained classifier.
11. The method of claim 1 wherein receiving registration images comprises receiving a video stream from the registration device.
12. The method of claim 1 further comprising receiving the clinical image as at least one of a computed tomography image or a magnetic resonance image.
13. A system comprising: a registration device generating registration images of a patient; a controller executing instructions: detecting at least one of head points and reference frame points from the images using facial recognition, generating registration data for the patient based on the head points and the reference frame points, segmenting a clinical image to generate clinical head points, and comparing the registration data to the clinical image; and wherein comparing the registration data to the clinical head points is operable to register a physical space to an image space of the clinical image.
14. The system of claim 13 wherein the controller generates the registration data by detecting the head points in the registration image and generating registration data within a first boundary of the reference frame and a second boundary of a predetermined head point.
15. The system of claim 14 wherein the controller determines the first boundary by generating the first boundary a first predetermined distance from the reference frame.
16. The system of claim 15 wherein the controller determines the second boundary by generating the second boundary a second predetermined distance from at least one of the head points.
17. The system of claim 13 wherein the controller generates the registration data by generating a nasion point and the reference frame point.
18. The system of claim 13 wherein the registration data comprises generating a nasion point.
19. The system of claim 13 wherein the registration data comprises a point cloud of registration points generated from data collected with the registration device.
20. The system of claim 13 wherein the registration data comprises a final point cloud of registration points based on point clouds generated from a plurality of registration images.
21. The system of claim 13 wherein the controller comprises a trained classifier generating the registration data including detecting at least one of the head points and the reference frame points from the images using facial recognition.
22. The system of claim 13 wherein the images are based on a video stream from the registration device.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202363516409P | 2023-07-28 | 2023-07-28 | |
US202363516413P | 2023-07-28 | 2023-07-28 | |
US63/516,413 | 2023-07-28 | ||
US63/516,409 | 2023-07-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2025027505A1 true WO2025027505A1 (en) | 2025-02-06 |
Family
ID=92543339
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2024/057327 WO2025027505A1 (en) | 2023-07-28 | 2024-07-29 | System and method of patient registration |
PCT/IB2024/057322 WO2025027502A1 (en) | 2023-07-28 | 2024-07-29 | System and method of patient registration |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2024/057322 WO2025027502A1 (en) | 2023-07-28 | 2024-07-29 | System and method of patient registration |
Country Status (1)
Country | Link |
---|---|
WO (2) | WO2025027505A1 (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5592939A (en) | 1995-06-14 | 1997-01-14 | Martinelli; Michael A. | Method and system for navigating a catheter probe |
US5913820A (en) | 1992-08-14 | 1999-06-22 | British Telecommunications Public Limited Company | Position location system |
US5983126A (en) | 1995-11-22 | 1999-11-09 | Medtronic, Inc. | Catheter location system and method |
US6474341B1 (en) | 1999-10-28 | 2002-11-05 | Surgical Navigation Technologies, Inc. | Surgical communication and power system |
US20050085720A1 (en) | 2003-10-17 | 2005-04-21 | Jascob Bradley A. | Method and apparatus for surgical navigation |
US6940941B2 (en) | 2002-02-15 | 2005-09-06 | Breakaway Imaging, Llc | Breakable gantry apparatus for multidimensional x-ray based imaging |
US7001045B2 (en) | 2002-06-11 | 2006-02-21 | Breakaway Imaging, Llc | Cantilevered gantry apparatus for x-ray imaging |
US7106825B2 (en) | 2002-08-21 | 2006-09-12 | Breakaway Imaging, Llc | Apparatus and method for reconstruction of volumetric images in a divergent scanning computed tomography system |
US7108421B2 (en) | 2002-03-19 | 2006-09-19 | Breakaway Imaging, Llc | Systems and methods for imaging large field-of-view objects |
US7188998B2 (en) | 2002-03-13 | 2007-03-13 | Breakaway Imaging, Llc | Systems and methods for quasi-simultaneous multi-planar x-ray imaging |
US20090177081A1 (en) * | 2005-01-13 | 2009-07-09 | Mazor Surgical Technologies, Ltd. | Image guided robotic system for keyhole neurosurgery |
US20230074362A1 (en) * | 2021-03-17 | 2023-03-09 | Medtronic Navigation, Inc. | Method and system for non-contact patient registration in image-guided surgery |
KR102533659B1 (en) * | 2022-02-28 | 2023-05-18 | 이마고웍스 주식회사 | Automated registration method of 3d facial scan data and 3d volumetric medical image data using deep learning and computer readable medium having program for performing the method |
2024
- 2024-07-29 WO PCT/IB2024/057327 patent/WO2025027505A1/en unknown
- 2024-07-29 WO PCT/IB2024/057322 patent/WO2025027502A1/en unknown
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5913820A (en) | 1992-08-14 | 1999-06-22 | British Telecommunications Public Limited Company | Position location system |
US5592939A (en) | 1995-06-14 | 1997-01-14 | Martinelli; Michael A. | Method and system for navigating a catheter probe |
US5983126A (en) | 1995-11-22 | 1999-11-09 | Medtronic, Inc. | Catheter location system and method |
US6474341B1 (en) | 1999-10-28 | 2002-11-05 | Surgical Navigation Technologies, Inc. | Surgical communication and power system |
US6940941B2 (en) | 2002-02-15 | 2005-09-06 | Breakaway Imaging, Llc | Breakable gantry apparatus for multidimensional x-ray based imaging |
US7188998B2 (en) | 2002-03-13 | 2007-03-13 | Breakaway Imaging, Llc | Systems and methods for quasi-simultaneous multi-planar x-ray imaging |
US7108421B2 (en) | 2002-03-19 | 2006-09-19 | Breakaway Imaging, Llc | Systems and methods for imaging large field-of-view objects |
US7001045B2 (en) | 2002-06-11 | 2006-02-21 | Breakaway Imaging, Llc | Cantilevered gantry apparatus for x-ray imaging |
US7106825B2 (en) | 2002-08-21 | 2006-09-12 | Breakaway Imaging, Llc | Apparatus and method for reconstruction of volumetric images in a divergent scanning computed tomography system |
US20050085720A1 (en) | 2003-10-17 | 2005-04-21 | Jascob Bradley A. | Method and apparatus for surgical navigation |
US7751865B2 (en) | 2003-10-17 | 2010-07-06 | Medtronic Navigation, Inc. | Method and apparatus for surgical navigation |
US20090177081A1 (en) * | 2005-01-13 | 2009-07-09 | Mazor Surgical Technologies, Ltd. | Image guided robotic system for keyhole neurosurgery |
US20230074362A1 (en) * | 2021-03-17 | 2023-03-09 | Medtronic Navigation, Inc. | Method and system for non-contact patient registration in image-guided surgery |
KR102533659B1 (en) * | 2022-02-28 | 2023-05-18 | 이마고웍스 주식회사 | Automated registration method of 3d facial scan data and 3d volumetric medical image data using deep learning and computer readable medium having program for performing the method |
Non-Patent Citations (2)
Title |
---|
CONDINO SARA ET AL: "Evaluation of a Wearable AR Platform for Guiding Complex Craniotomies in Neurosurgery", ANNALS OF BIOMEDICAL ENGINEERING, SPRINGER US, NEW YORK, vol. 49, no. 9, 23 July 2021 (2021-07-23), pages 2590 - 2605, XP037568318, ISSN: 0090-6964, [retrieved on 20210723], DOI: 10.1007/S10439-021-02834-8 * |
EGGERS ET AL: "Image-to-patient registration techniques in head surgery", INTERNATIONAL JOURNAL OF ORAL AND MAXILLOFACIAL SURGERY, COPENHAGEN, DK, vol. 35, no. 12, 15 November 2006 (2006-11-15), pages 1081 - 1095, XP005739717, ISSN: 0901-5027, DOI: 10.1016/J.IJOM.2006.09.015 * |
Also Published As
Publication number | Publication date |
---|---|
WO2025027502A1 (en) | 2025-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12048502B2 (en) | Surgery robot system and use method therefor | |
EP3007635B1 (en) | Computer-implemented technique for determining a coordinate transformation for surgical navigation | |
KR101848027B1 (en) | Surgical robot system for stereotactic surgery and method for controlling a stereotactic surgery robot | |
JP4822634B2 (en) | A method for obtaining coordinate transformation for guidance of an object | |
US12268506B2 (en) | System for neuronavigation registration and robotic trajectory guidance, and related methods and devices | |
JP2024095686A (en) | SYSTEM AND METHOD FOR PERFORMING SURGICAL PROCEDURE ON A PATIENT TARGET PART DEFINED BY A VIRTUAL OBJECT - Patent application | |
JP4836122B2 (en) | Surgery support apparatus, method and program | |
JP2022512420A (en) | Surgical system with a combination of sensor-based navigation and endoscopy | |
JP2019523664A (en) | System and method for identifying and tracking physical objects during robotic surgical procedures | |
US20080269588A1 (en) | Intraoperative Image Registration | |
US20080269602A1 (en) | Method And Apparatus For Performing A Navigated Procedure | |
CN108348295A (en) | Motor-driven full visual field adaptability microscope | |
CN112220557A (en) | Operation navigation and robot arm device for craniocerebral puncture and positioning method | |
EP3673854B1 (en) | Correcting medical scans | |
EP4072458A1 (en) | System and methods for planning and performing three-dimensional holographic interventional procedures | |
EP3643265B1 (en) | Loose mode for robot | |
KR101923927B1 (en) | Image registration system and method using subject-specific tracker | |
CN115475006A (en) | Techniques for determining the pose of tracked vertebrae | |
KR101895369B1 (en) | Surgical robot system for stereotactic surgery | |
WO2025027505A1 (en) | System and method of patient registration | |
US10028790B2 (en) | Wrong level surgery prevention | |
US20240340521A1 (en) | System and method of patient registration | |
KR20180100514A (en) | Surgical robot system for stereotactic surgery |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24762036; Country of ref document: EP; Kind code of ref document: A1 |