
bioengineering

Article
A Novel Registration Method for a Mixed Reality Navigation
System Based on a Laser Crosshair Simulator: A Technical Note
Ziyu Qi 1,2,*, Miriam H. A. Bopp 2,3,*, Christopher Nimsky 2,3, Xiaolei Chen 1, Xinghua Xu 1, Qun Wang 1,
Zhichao Gan 1,4, Shiyu Zhang 1,4, Jingyue Wang 1,4, Haitao Jin 1,4,5 and Jiashu Zhang 1,*

1 Department of Neurosurgery, First Medical Center of Chinese PLA General Hospital, Beijing 100853, China;
chxlei@mail.sysu.edu.cn (X.C.); dr_xxh@126.com (X.X.); wfwangqun@163.com (Q.W.);
15620946575@163.com (Z.G.); sy983246393@gmail.com (S.Z.); richardwang9410@163.com (J.W.);
18031275391@163.com (H.J.)
2 Department of Neurosurgery, University of Marburg, Baldingerstrasse, 35043 Marburg, Germany;
christopher.nimsky@uk-gm.de
3 Center for Mind, Brain and Behavior (CMBB), 35043 Marburg, Germany
4 Medical School of Chinese PLA, Beijing 100853, China
5 NCO School, Army Medical University, Shijiazhuang 050081, China
* Correspondence: qiz@uni-marburg.de (Z.Q.); bauermi@med.uni-marburg.de (M.H.A.B.);
shujiazhang@126.com (J.Z.)

Abstract: Mixed Reality Navigation (MRN) is pivotal in augmented reality-assisted intelligent neurosurgical interventions. However, existing MRN registration methods face challenges in concurrently achieving low user dependency, high accuracy, and clinical applicability. This study proposes a novel registration method based on a laser crosshair simulator, designed to replicate the scanner frame's position on the patient, and evaluates its feasibility and accuracy. The system autonomously calculates the transformation that maps coordinates from the tracking space to the reference image space. A mathematical model and workflow for registration were designed, and a Universal Windows Platform (UWP) application was developed on HoloLens-2. Finally, a head phantom was used to measure the system's target registration error (TRE). The proposed method was successfully implemented, obviating the need for user interactions with virtual objects during the registration process. Regarding accuracy, the average deviation was 3.7 ± 1.7 mm. This method shows encouraging results in efficiency and intuitiveness and marks a valuable advancement in low-cost, easy-to-use MRN systems. The potential for enhancing accuracy and adaptability in intervention procedures positions this approach as promising for improving surgical outcomes.

Keywords: mixed reality navigation; augmented reality; neurosurgical interventions; preoperative planning; registration method; laser; crosshair simulator; accuracy; target registration error; head phantom

Citation: Qi, Z.; Bopp, M.H.A.; Nimsky, C.; Chen, X.; Xu, X.; Wang, Q.; Gan, Z.; Zhang, S.; Wang, J.; Jin, H.; et al. A Novel Registration Method for a Mixed Reality Navigation System Based on a Laser Crosshair Simulator: A Technical Note. Bioengineering 2023, 10, 1290. https://doi.org/10.3390/bioengineering10111290

Academic Editors: Fabrizio Cutolo and Andrea Cataldo

Received: 14 October 2023; Accepted: 1 November 2023; Published: 7 November 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

In recent years, commercial navigation systems have become a critical paradigm in neurosurgery [1–3], enabling neurosurgeons to match preoperative images to the patient space, track the position of surgical instruments within the patient's body, control the surgical scope, and protect eloquent anatomical structures [4,5]. Pointer-based navigation, an early navigation standard, requires specialized navigation pointers as definable structures for tracking. In most instances, neurosurgeons need to switch surgical instruments with the pointer when navigation support is needed, thus interrupting surgical operations. In addition, neurosurgeons often repeatedly shift their attention between the navigation pointer in the surgical area and the nearby navigation monitor, causing distraction and fatigue [6–10]. Therefore, this paradigm has poor ergonomics.


Augmented Reality (AR) technology visualizes virtual information within real-world scenes and represents a significant technological breakthrough in neuronavigation, specifically microscope-based navigation [11–15]. This navigation paradigm
employs the geometric optical focus of the microscope as a virtual pointer tool and overlays
two-dimensional (2D) or three-dimensional (3D) virtual anatomical structures from the
image space to the actual surgical area; i.e., the microscope’s focal plane [3,13,14,16,17]. The
clinical benefits of integrating AR technology with navigation systems have been exten-
sively validated: it enhances surgeons’ comfort, reduces attention shifts, and heightens
physicians’ intuitive understanding of anatomical structures, which is especially beneficial
for less experienced surgeons [3,13,15–17]. Nonetheless, microscope-based navigation
comprises substantial hardware components, including infrared tracking cameras, navi-
gation workstations, integrated microscopes, and accessories, often associated with high
cost. Moreover, procuring AR and navigation support requires extra preoperative and
intraoperative procedural time. It necessitates that the entire team—surgeons, operating
room staff, and anesthetists—be sufficiently familiar with this paradigm [3,15–17].
The substantial, expensive hardware systems and complex workflows challenge the
widespread adoption of microscope-based navigation. Hence, there has been increasing
interest in smaller, low-cost, and easy-to-use alternatives for AR-guided interventions,
such as projector-based [18,19], smartphone-based [20–24], tablet-based [22,25–27], and
head-mounted display (HMD)-based AR [7,8,10,28]. These AR paradigms are widely used
as “low-cost navigation” in underdeveloped regions or healthcare facilities with limited
resources. However, projectors suffer from perspective effects and geometric distortion,
unable to provide depth information for virtual objects [18,19]. Once the user deviates from
the projector’s projecting direction, the parallax between the virtual model and the actual
anatomical structure confuses the AR experience, making projector AR more suitable for the
preoperative localization of superficial cranial lesions [18,19,29]. Smartphones and tablets
have sufficient computational resources for 3D rendering, model smoothing, and ambient
light simulation, enhancing users’ sense of depth and realism when perceiving virtual
objects [20,21,25,26]. Nonetheless, given the limited screen size, they provide a restricted
field of view. Moreover, user interactions with them depend on mouse or keyboard input,
which is incompatible with sterile surgical requirements [20,21,25,26]. Considering these
limitations, HMD-based AR may be a more valuable approach. Its advantages include
hands-free and portable operation (although weight and volume still need improvements),
a larger field of view, and a more immersive visualization environment. HMDs can be
categorized into video see-through and optical see-through paradigms based on whether
users can directly see the real world [30]. The latter offers a more realistic and immersive
user experience compared to the former, making optical see-through HMD-based AR the
most promising and appealing low-cost AR paradigm.
Mixed Reality (MR) technology, integrating real-world scenes with virtual space,
provides a new environment for users to interact with virtual content in the real world.
Although a clear distinction between MR and AR has not been universally agreed upon,
MR is considered a technology between virtual reality (VR) and AR in many publica-
tions [6–9,31–36]. This view has some validity because MR technology typically integrates
optical and physical sensors that digitalize real-world information into the virtual world,
making this information computable and interactive rather than merely superimposing
virtual and actual elements, as in AR. Therefore, MR is the term used in this paper. The
Microsoft Corporation launched the world’s first commercial stand-alone wireless MR
HMD, HoloLens, in 2016. The advanced Simultaneous Localization and Mapping (SLAM)
technology allows HoloLens to understand its position relative to the actual environment
and stably integrate 3D holograms into the real world. This technology lets virtual content
be directly displayed in the user’s visual field. In 2017, Incekara et al. implemented the
first proof-of-concept MR navigation (MRN) system in clinical settings [6]. Based on preop-
erative magnetic resonance imaging (MRI) data, they created 3D holograms of the patient’s
head and tumor. They manually aligned the holograms to the patient’s head to localize the
intracranial tumor. At that time, research mainly focused on the practical benefits of MRN
to patients and neurosurgeons and its applicability in various neurosurgical intervention
procedures [6,7,9,10,37].
Over the subsequent years, research publications on MRN in neurosurgery increased exponentially [30,38–43]. MRN offers neurosurgeons an im-
mersive 3D neuroanatomical interactive model at a lower cost and with an easier-to-use
system configuration than conventional pointer-based and microscope-based navigation
systems [6–10]. However, as studies delved deeper, issues arising from this new technology
became apparent, such as a highly user-dependent holographic registration process and
relatively higher spatial localization errors [6–8,10,31,44–47]. Therefore, it is generally ac-
knowledged that, despite its advantages, MRN is not yet capable of replacing conventional
navigation systems.
The benefits and safe application of MRN for neurosurgical interventions fundamen-
tally depend on the precision and accuracy with which virtual objects blend with real-world
scenarios. Inaccurate or ambiguous virtual content may only offer physicians a deceptive
sense of security, potentially posing patient risks. According to Peter et al. [34], the clinical
accuracy of a system should constitute a combination of registration precision (objective)
and user perception accuracy (subjective). User perception accuracy, being dependent on
individual visual-spatial abilities and prior experience, is beyond the scope of technological
improvement. Nonetheless, registration accuracy is primarily influenced by the initial
registration accuracy, suggesting that improvements in the technical aspects of the initial
registration process could enhance the overall accuracy. Beyond the fully manual regis-
tration methods reported in early MRN prototypes [7,8,32,48], several other registration
methods similar to those in commercial navigation systems have proven technically feasible.
These include point matching of anatomical landmarks, cranial fixations, affixed skin mark-
ers [9,10,35,36,44,49–53], and surface matching [33,54–59]. Nevertheless, due to marker
shifts or skin shifts during imaging relative to patient positioning in the OR, coupled with
users’ errors while acquiring points on the surface or from the markers with physical tools,
the reported MRN registration methods are almost entirely user-dependent [46,55,56]. This
has resulted in reported reference accuracies varying from 0 to 10 mm in clinical settings,
which are not entirely acceptable for high-precision neurosurgery.
In summary, existing MRN registration methods have yet to concurrently fulfill the
requirements of low user dependency, high accuracy, and clinical applicability. However, with
the increasing demand for neuroimaging in multiple situations, compact and portable scanners
(e.g., intraoperative Computed Tomography (CT) [1,60,61], cone beam CT (CBCT) [62–64],
portable MRI [65–67]) have been widely deployed in ORs and treatment rooms, making the
acquisition of 3D imaging no longer confined to radiology departments [1,3,60,61]. This
development has given rise to an automatic registration paradigm. This paradigm reliably
maps the acquired volumetric data (reference image) to the acquisition position by tracking
the scanner, thereby ensuring accurate navigation registration [1,60,61]. This paradigm has
been maturely applied in multimodal commercial navigation systems, demonstrating its
potential to reduce user dependency and increase accuracy. This paradigm might also be applied to MRN, providing a promising approach to overcoming its current limitations.
Therefore, a novel MRN registration method based on a laser crosshair simulator was developed in this study: the simulator replicates the image acquisition position on the patient using laser crosshairs, and the transformation between the tracking space and the reference image is calculated automatically.

2. Materials and Methods


This section delivers an in-depth description of the design and calibration princi-
ples underlying the crosshair simulator, followed by an explanation of the mathematical
model and operational process of the crosshair simulator-based registration. This section
concludes with an experimental design for assessing the accuracy and feasibility of the
proof-of-concept MRN system.

2.1. The Crosshair Simulator

2.1.1. Concept and Design of the Crosshair Simulator
In commercial navigation systems, the most classic and practical solution for automatic registration is to equip the scanner with reference markers and use a tracking system during image acquisition to convert the scanned reference images into a common coordinate system, such as the world coordinate system [1,60]. Nonetheless, this approach is challenging to apply directly to MRN. On the one hand, the optical tracking components are integrated into the HMDs, requiring users to stay away from the scanner during scanning, potentially interrupting optical tracking (see Figure 1A). On the other hand, movements of large objects in the room, e.g., CT scanner translation, can introduce significant localization errors in HMD's spatial mapping.

Figure 1. Mixed reality neuro-navigation (MRN) registration using scanner tracking (A) is challenging due to tracking signal interruption, as the user must stay away from the scanner during data acquisition. MRN registration using the proposed concept of a crosshair simulator (B) transferring the laser crosshair (red) location from the scanner to the simulator (cyan), enabling the projection of corresponding imaging data onto the patient's head (HMD = head mounted display).
To address this issue, the concept of a crosshair simulator is proposed (see Figure 1B). The crosshair simulator aims to transfer scanning parameters between different spatiotemporal domains to determine the acquisition position of the reference images (see Figures 1B and 2A), and essentially consists of a rack, two laser emitters, and an MR interface.
• The rack is an "L"-shaped basic frame for mounting other components such as the laser emitters. Users can flexibly adjust its position and orientation in space using a handle and then securely lock it in place with a mechanical arm (see Figure 2A).
• Two laser emitters (wavelength: 650 nm, power: 12 mW) are horizontally and vertically fixed on the arms of the rack. They project two sets of laser crosshairs inside the simulator, with the centerlines of the crosshairs located coplanar and perpendicular to each other. This configuration creates three orthogonal planes in space, forming the simulator coordinate system (SCS) (a geometric sketch of this construction follows the list). When an object is exposed to the laser, it will receive a crosshair projection on its top and side surfaces, analogous to the positioning crosshairs observed in CT or MRI scanners (see Figure 2B).
• The MR interface consists of a stainless-steel panel (size: 6 cm × 6 cm) printed with a visually recognizable target image. It is firmly fixed on the rack. The MR interface establishes the relationship between the tracking and virtual spaces. Once the target images are detected and recognized by the HMD, the virtual space is initialized with the geometric center of the target image as the origin. This process is implemented using the Vuforia Software Development Kit (SDK) (Version 10.14, PTC, Inc., Boston, MA, USA).
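As a purely illustrative sketch of how such an orthogonal-plane frame defines a pose (plain Python/numpy, assuming the unit normals of two of the laser planes and the crosshair intersection point have been measured; this is not part of the authors' system software):

import numpy as np

def scs_pose(origin, n_sagittal, n_coronal):
    # Build a right-handed 4x4 homogeneous pose for the simulator coordinate
    # system (SCS): two measured plane normals give two axes, and the third
    # (axial) follows from orthogonality.
    x = np.asarray(n_sagittal, float)
    x /= np.linalg.norm(x)
    y = np.asarray(n_coronal, float)
    y -= np.dot(y, x) * x            # re-orthogonalize against x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)               # axial plane normal, right-handed
    T = np.eye(4)
    T[:3, :3] = np.column_stack((x, y, z))
    T[:3, 3] = origin                # laser crosshair intersection point
    return T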

Figure 2. Structural and functional demonstration of the crosshair simulator. The crosshair simulator (A) exemplifies its ability to simulate scanner-generated laser crosshairs, forming two crosshair laser projections on a patient's head, identical to those observed in CT or MRI scanners (B). Figure 2B provided by ©Alamy. The image has been granted copyright authorization from the Alamy platform.
2.1.2. Coordinate Systems

The crosshair simulator includes three coordinate systems (see Figure 3):
• The simulator coordinate system (SCS) is defined from the laser crosshairs' geometric-optic relationships.
• The reference image coordinate system (RICS) is defined by the scanner during reference image acquisition, either in MRI or CT procedures. This coordinate system is established at the position where the gantry laser positioning lines are projected. In the case of MRI, the first "localizer scan" at the beginning of the scanning session establishes this coordinate system, while in CT procedures, a similar "scout scan" is used for the same purpose.
• The virtual coordinate system (VCS) is defined by recognition of the MR interface.
Figure 3. The three coordinate systems in the crosshair simulator (SCS = simulator coordinate system (red); RICS = reference image coordinate system (brown); VCS = virtual coordinate system (green); the transformations (colored arrows) from one to the other coordinate system are color-coded as gradients of the related coordinate systems) and the mathematical expression utilizing both physical and virtual calibration spheres to calibrate the intrinsic and extrinsic matrices.

The transformation T^S_V from the VCS to SCS reflects the fundamental purpose of the crosshair simulator, i.e., to establish a mapping relationship between the virtual space
defined by the image target on the MR interface and the physical space defined by laser crosshairs. It can be calculated as Equation (1):

T^S_V = T^S_R · T^R_V    (1)

The transformation T^S_R (intrinsic matrix) from RICS to SCS describes the inherent properties of the simulator for scanning parameter transmission. The simulator laser positioning is the inverse process of scanners utilizing laser positioning, as the scanner realizes the mapping of scanning parameters from physical to image space. In contrast, the crosshair simulator implements the inverse mapping. A calibrated T^S_R ensures that the three reference planes of the scanned image are aligned with the positions of the three laser locating planes of the scanner, thereby coinciding with the mechanical center and axes of the scanner (see Figures 1B, 3 and 4A–C).

The transformation T^R_V (extrinsic matrix) from the VCS to RICS defines the position and orientation of RICS in the VCS. With this transformation, the imported model in the virtual space can be mapped to the correct position in the RICS, enabling visualization through the HMD (see Figures 1B, 3 and 4D,E).

Figure 4. The calibration procedure of intrinsic and extrinsic matrices using a custom calibration sphere in the crosshair simulator. The structure of the calibration sphere is depicted in schematic form (A) and was used for calibrating the intrinsic matrix of the crosshair simulator (B). During intrinsic calibration, eight known orthogonal calibration arc combinations on the sphere can be used (C). The process of extrinsic calibration is shown, with the virtual calibration sphere representing RICS not aligning with the physical calibration sphere (D). After adjustment, the virtual calibration sphere perfectly aligns with the physical calibration sphere, signifying successful calibration (E).
Among calibrated crosshair simulators of different specifications, their external matrices may vary due to the spatial relationship between the image target's geometric center and the laser system's center. Still, their intrinsic matrices must be identical, as the principle of scanning parameter transmission remains the same.

2.1.3. Calibration

To ensure the reliability of the crosshair simulator, both the intrinsic and extrinsic matrices need calibration. Intrinsic calibration corrects the orthogonal relationship between the two laser emitters, allowing the simulator to generate crosshairs with coplanar and
mutually perpendicular centerlines, thereby iso-centralizing the SCS with the RICS. The
purpose of extrinsic calibration is to locate the simulator’s central position of RICS within
the VCS (see Figure 3).
Although RICS can be vividly understood as three orthogonal planes, materializing
them into a calibration model might hinder the visualization of laser projection due to
occlusion, making it inappropriate for the crosshair simulator’s calibration. To address the
issue, the present study introduces the concept of calibration spheres, which intuitively and
visually express the geometric properties of RICS and solve the problem of light obstruction
through spherical projection (see Figures 3 and 4A).
Calibration spheres include both physical and virtual calibration spheres. The physical
calibration sphere is made of solid, durable plastic with a radius of 60 mm. The scale lines
on the calibration sphere include three bold, great circle arcs (GCA) of different colors (red,
blue, black) and three dotted small circle arcs (SCA) (see Figure 4A). The intersections
of the sphere define the three GCAs with three orthogonal great-circle sections passing
through the center of the sphere, analogous to the earth's equator, 0° + 180° meridians, and 90° E + 90° W meridians in geography. The three great-circle sections can be regarded as
the three reference planes of the RICS. The three SCAs are defined by intersections with
three equal small circle sections parallel to the corresponding colored great-circle sections.
It is easily inferred from solid geometry that the three SCAs are tangent to each other on
the sphere’s surface. Clearly, the planes containing arcs of different colors are necessarily
perpendicular, and any combination of red-blue-black planes is mutually orthogonal.
Hence, the three GCAs and three SCAs can form eight orthogonal combinations (see
Figure 4C). Without loss of generality, the three GCAs define the unique primary calibration
position (PCP) of the physical calibration sphere. In contrast, combinations containing
SCAs define seven secondary calibration positions (SCPs) of the physical calibration sphere.
The virtual calibration sphere is a simplified form of the physical one, existing as a
hologram in virtual space. It retains the three GCAs distinguished by red, green, and blue,
strengthening the user’s understanding of this spatial relationship through some auxiliary
lines within the red great-circle section. Since the SCAs are omitted, the virtual calibration
sphere only contains the PCP.
The calibration process begins with intrinsic calibration, achieved through a physical
calibration sphere. The simulator’s lasers can produce three projection arcs on the sphere’s
surface. If the centerlines of the two sets of crosshairs are coplanar and mutually perpen-
dicular, all three projected arcs should perfectly align with the marked arcs at one PCP and
seven SCPs. Conversely, lasers that fail to maintain an orthogonal relationship will alter
the curvature radii of some projected arcs, resulting in imperfect alignment between the
projected and marked arcs across all calibration positions. During the intrinsic calibration
process, the user first secures the calibration sphere at the PCP and makes fine adjustments
to the lasers via screws to align the projected and marked arcs. Then the seven SCPs verify
the calibration (see Figures 3 and 4A–C). If satisfactory, the calibration sphere is returned to
the PCP for extrinsic calibration.
The extrinsic calibration involves loading the virtual calibration sphere on the devel-
oped MR platform’s “calibration” panel (see Section 2.2.2.). The user can achieve precise
translation or rotation adjustments to the virtual calibration sphere through the panel’s
sliders, ultimately perfectly aligning it with the physical calibration sphere secured at the
PCP (see Figures 3 and 4D,E, and Supplementary Video S1).
Intrinsic and extrinsic calibrations need only be performed once, as the transformation matrices can be saved for further use in each subsequent registration.
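Because the two matrices are fixed for a given simulator, persisting them is straightforward; the following is a minimal numpy sketch with a hypothetical file name, not the authors' implementation:

import numpy as np

def save_calibration(T_S_R, T_R_V, path="simulator_calibration.npz"):
    # T_S_R: intrinsic matrix (RICS -> SCS); T_R_V: extrinsic matrix (VCS -> RICS)
    np.savez(path, intrinsic=T_S_R, extrinsic=T_R_V)

def load_calibration(path="simulator_calibration.npz"):
    data = np.load(path)
    return data["intrinsic"], data["extrinsic"]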

2.1.4. Mathematical Model for Crosshair Simulator-Based Registration


The mathematical model for crosshair simulator-based registration is shown in Figure 5. The key to crosshair simulator-based registration is to find the transformation T^Sc_W between the scanner coordinate system and the world coordinate system defined by the HMD.
Figure 5. The mathematical model for crosshair simulator-based registration with coordinate systems (i.e., the world (blue), scanner (purple), RICS (brown), SCS (red), VCS (green), and HoloLens-2 (light blue) coordinate systems) and the related transformations between those (gradient color-coded).
As mentioned, the two coordinate systems are in different spatiotemporal domains, so scanning parameters must be transferred through the crosshair simulator. In this way, T^Sc_W can be calculated using the product of T^Sc_S and T^S_W as Equation (2):

T^Sc_W = T^Sc_S · T^S_W    (2)

Thereby, T^Sc_S represents the transformation from SCS to the scanner coordinate system, determined by aligning the projected crosshair with the laser positioning lines marked on the patient's head during image acquisition. T^S_W represents the transformation from the world coordinate system to the SCS, which can be calculated as the product of T^S_V and T^V_W as Equation (3):

T^S_W = T^S_V · T^V_W    (3)

T^S_V of the VCS to the SCS is determined during the calibration of the crosshair simulator. T^V_W represents the transformation from the world coordinate system to the VCS, which can be calculated as Equation (4):

T^V_W = T^V_H · T^H_W    (4)

where T^V_H is provided by the HMD's detection and tracking of the image target and T^H_W is determined through the SLAM algorithm. By combining the above Equations (1)–(4), Equation (5) can be obtained:

T^Sc_W = T^Sc_S · T^S_W = T^Sc_S · (T^S_V · T^V_W) = T^Sc_S · [(T^S_R · T^R_V) · (T^V_H · T^H_W)]    (5)

Using T^Sc_W as shown in Equation (5), the planned holographic model (based on the reference image or another image already registered to the reference image) can be transformed from its original position to the patient's surgical site, enabling MRN.
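To make the chain concrete, a minimal numpy sketch of Equations (1)–(5) is given below. It is illustrative only (the actual system runs as a Unity/UWP application on the HL-2), and all matrices are assumed to be 4 × 4 homogeneous transformations:

import numpy as np

def world_to_scanner(T_Sc_S, T_S_R, T_R_V, T_V_H, T_H_W):
    # T_Sc_S: SCS -> scanner, from aligning the projected crosshairs with the
    #         positioning lines marked on the patient's head
    # T_S_R, T_R_V: intrinsic and extrinsic calibration matrices
    # T_V_H: image-target pose from detection and tracking (HMD -> VCS)
    # T_H_W: HMD pose from SLAM (world -> HMD)
    T_S_V = T_S_R @ T_R_V            # Equation (1)
    T_V_W = T_V_H @ T_H_W            # Equation (4)
    T_S_W = T_S_V @ T_V_W            # Equation (3)
    return T_Sc_S @ T_S_W            # Equations (2) and (5)

def transform_hologram(T, vertices):
    # Apply a 4x4 transformation to an (N, 3) array of hologram vertices.
    homogeneous = np.c_[vertices, np.ones(len(vertices))]
    return (homogeneous @ T.T)[:, :3]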
2.2. The Components of the MR Platform

This subsection describes the hardware and software components of the MR surgical platform deployed on the HMD.
2.2.1. MR HMD

The overarching goal of the MR HMD is to understand the geometric transformation from the device to the environment and to display the holograms of virtual objects to users, effectively enabling the conversion and integration of virtual and actual entities. This study realized this goal using the commercially available Microsoft HoloLens-2 (HL-2) (Microsoft, Redmond, WA, USA) as the hardware component.
The HL-2 is an untethered, portable stereoscopic optical head-mounted computer boasting a small, lightweight independent computing unit (weighing 579 g) that operates without reliance on additional hosts or tracking hardware. It retails at approximately 1% of the cost of a standard commercial navigation system. Equipped with an integrated high-definition RGB camera for photography, four visible light cameras (VLC) positioned at different angles, a depth camera, and an inertial measurement unit (IMU), the HL-2 continually understands its external environment and pose (position and orientation) based on the flow of real-world information from the physical sensors and cameras. This mechanism enables the determination of the real-time geometric transformation between the HL-2 and the world coordinate system, embodying the principle of SLAM employed by the HL-2. Further, through matrix multiplication, any object within the world coordinate system with a known or computable pose can establish a geometric relationship with the HL-2, culminating in the registration of holograms. The HL-2 offers a color display with a 43° × 29° field of view (FoV). By utilizing advanced waveguide imaging technology, the optical focus of virtual objects is fixed on a specific plane by the HL-2, thereby presenting them at their anticipated locations on the display screen. In essence, the integration of conversion and fusion between the virtual and actual on the HL-2 allows the operator to experience a stable, aligned position of the virtual object in the actual scene despite changes in viewing angles, thus offering a realistic visual experience.

2.2.2. MR Platform Development

The MR platform is developed using Unity (Version 2021.3.4f1, Unity, San Francisco, CA, USA). The detection and tracking algorithm for the identification images is implemented based on the Vuforia SDK (Version 10.15). The interactive user interface uses the Mixed Reality Toolkit (MRTK) SDK (Version 2.8.3, Microsoft). Programming used C# scripts in Visual Studio (Version 16.11.26, Microsoft, 2019). The scripts handle the basic logic of the program and provide user-friendly voice and gesture commands. Finally, the project is packaged and deployed to HL-2 as a Universal Windows Platform (UWP) application.
2.3. Practical Workflow of MRN System

This subsection outlines the practical workflow of the MRN system, as illustrated in Figure 6. The principal steps include (1) laser projection marking, (2) image segmentation and hologram generation, (3) deployment of the crosshair simulator, and (4) hologram registration and updating.

Figure 6. Practical workflow chart for the proposed MRN system compared with a conventional neuronavigation system (OR = operating room).
2.3.1. Image Acquisition and Laser Projection Marking


At the onset of the image acquisition, the user marks the laser positioning lines
projected by the CT/MRI scanner on the patient’s skin. These marked lines will guide the
correct deployment of the crosshair simulator in subsequent processes. Following this, a
3D imaging scan is conducted, and the obtained imaging data (i.e., the reference image)
is exported from the workstation in Digital Imaging and Communications in Medicine
(DICOM) format.
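As a hedged example of this data-handling step (the directory name is hypothetical, and any DICOM-capable toolkit would serve), the exported series can be read back into a single 3D volume with SimpleITK:

import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
file_names = reader.GetGDCMSeriesFileNames("exported_dicom/")  # reference image series
reader.SetFileNames(file_names)
volume = reader.Execute()
print(volume.GetSize(), volume.GetSpacing())  # voxel counts and spacing in mm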

2.3.2. Image Segmentation and Hologram Generation


The obtained 3D imaging data is imported into the open-source software 3D Slicer
(Version 5.1.0). If other preoperative 3D imaging data are available for the patient, they
are imported and fused with the reference images used for the registration process. Subse-
quently, surgical planning is undertaken, manually or semi-automatically, segmenting all
surgery-relevant structures of interest, such as the skull, lesions, vessels, customized anno-
tations, etc. Within the “Segment Editor” module of 3D Slicer, users can manually extract
masks using essential tools like the paintbrush and eraser or semi-automatically obtain
masks with an extended toolkit featuring intensity thresholds, islands, scissors, hollowing,
smoothing, holes-filling, and logical operations. These techniques aim to minimize creation
time and enhance segmentation quality. Lastly, the segmented structures are reconstructed
into 3D virtual objects and exported in HL-2-compatible “.obj” format, so the holograms
for MRN visualization are generated.
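Outside the interactive Segment Editor, the same threshold-to-surface-to-".obj" idea can be sketched with plain VTK in Python; the file names and Hounsfield threshold below are illustrative assumptions, not the authors' exact workflow:

import vtk

reader = vtk.vtkNIFTIImageReader()           # assumes the DICOM data were converted to NIfTI
reader.SetFileName("head_ct.nii.gz")

surface = vtk.vtkMarchingCubes()             # iso-surface extraction
surface.SetInputConnection(reader.GetOutputPort())
surface.SetValue(0, 300.0)                   # ~300 HU: rough bone threshold (assumption)

smoother = vtk.vtkWindowedSincPolyDataFilter()   # mesh smoothing
smoother.SetInputConnection(surface.GetOutputPort())
smoother.SetNumberOfIterations(15)

writer = vtk.vtkOBJWriter()                  # HL-2-compatible ".obj" export
writer.SetInputConnection(smoother.GetOutputPort())
writer.SetFileName("skull.obj")
writer.Write()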

2.3.3. Crosshair Simulator Deployment


The deployment of the crosshair simulator starts with the fixation of the patient’s head.
The user activates the laser emitters and adjusts the simulator’s position until the projected
lines perfectly align with the laser positioning lines previously marked on the skin of the
patient's head (see Figures 6 and 7E). After that, the simulator's pose is locked to maintain its relative position to the patient's head.

Figure 7. An illustrative case demonstrates the crosshair simulator-based registration process and its accuracy measurement method. CT imaging data, including visible fiducial markers attached to the scalp of a patient presenting with a right basal ganglia hematoma (A). The CT scanner's laser crosshair projection lines on orthogonal reference planes were recreated using 3D Slicer (B). A 1:1 scale 3D printed model was generated with laser projection lines marked in red (C). Using 3D Slicer, a set of holograms for validation, including the hematoma in red, a puncture path in green, fiducial markers in yellow, and the two quadrants of the scalp divided by the reference plane in cyan, was created (D). Manual adjustment of the crosshair simulator showing a perfect match of the laser crosshairs and the marked laser positioning lines on the head model (E). Successful hologram registration perfectly aligned the holographic image with the 3D-printed head model (F). Coordinates of six fiducial points in the RICS, as shown in the green box, were selected for accuracy measurement using 3D Slicer software (G). Following MRN registration, the user positioned the virtual probe, consisting of a white line handle and a white spherical tip, on the perceived real-world fiducial points, as shown in the red box (H).

2.3.4. Hologram Registration and Update


Then, the user comfortably wears the HL-2 to prevent unexpected displacement. The
MR platform utilizes Vuforia’s proprietary feature detection algorithm and the known
image targets on the MR interface to achieve real-time tracking of the crosshair simulator’s
position, enabling visualization of RICS. Subsequently, the holographic model is imported
into RICS through gesture commands, initiating the initial registration for MRN.
Afterward, the user can choose to retain or update the registration result. This can
be achieved by toggling the “Freeze” and “Unfreeze” states in the MR platform. “Freeze”
locks the holographic model to the spatial anchor in the spatial mapping through SLAM
of the HL-2, providing a more stable but static holographic perception even if the image
target temporarily loses its tracking (see Figures 5 and 7F). On the other hand, “Unfreeze”
reactivates tracking of the image targets, enabling hologram updates in case of any changes
in the patient’s head position in relation to the world coordinate system.

2.4. Experimental Design for Proof-of-Concept


2.4.1. Image Data Source
To evaluate the feasibility of the proposed registration method, a 3D-printed head
phantom based on a patient’s CT data was used (see Figure 7A,C). The imaging data was
obtained from a 63-year-old male patient who underwent treatment at the Chinese PLA
General Hospital in January 2021. The patient had a right-sided basal ganglia hematoma,
and six CT-visible fiducial markers were attached to the scalp before the scan (Figure 7A).
The CT scans were performed using a 128-slice CT scanner (Siemens SOMATOM, Forch-
heim, Germany) with the following parameters: tube voltage 120 kV, window width 120,
window level 40, matrix size 128 × 128, FOV 251 × 251 mm², and slice thickness 0.625 mm, resulting in a voxel size of 1.96 × 1.96 × 0.625 mm³.
Before data acquisition, written informed consent was obtained from the patient’s
authorized representative to use the pseudonymized imaging data for research purposes.
Since no invasive procedures on any patient were involved in the study, no ethical review
procedure was required.

2.4.2. Head Phantom Creation


A 3D head model was created through semi-automatic segmentation using open-
source software 3D Slicer (for further details, see Supplementary Material S1). The scan
reference plane positions provided by the imaging data were used to restore the projection
of the laser positioning crosshairs, which were then added to the model (see Figure 7B,C).
The model was saved in “.stl” format and 3D printed using a commercial 3D printer, A5S
(Shenzhen Aurora Technology Co., Ltd., Shenzhen, China), with the following parameters:
nozzle temperature: 210 °C, platform temperature: 50 °C, material: polylactic acid, resolu-
tion: 0.3 mm, fill level: 10%, to create a 1:1 scale head phantom for evaluation purposes.
2.4.3. Creation of Holograms for Validation


To fully validate the accuracy and reliability of the MRN system, a set of holograms
was created using the 3D Slicer software for the experiment (see Figure 7D, for more de-
tails, see Supplementary Material S2). This included (1) the hematoma, (2) a puncture
path, (3) fiducial markers, and (4) two quadrants of the scalp divided by the reference
plane. The hematoma and puncture path were used to test the MRN system’s capability
in handling and rendering simple or complex models. This is because, in practical appli-
cations, the number of triangular meshes composing a hologram can range from tens to
thousands, providing information on geometric details and surface contours. The fiducial
markers were used to quantitatively measure the TRE at specific points, as described in
Section 2.4.5. The scalp quadrants provided the user with an intuitive impression of the
overall alignment quality.

2.4.4. MRN Registration and Holograms Deployment


Before deploying the holograms, the HL-2 was calibrated for the user’s interpupillary
distance (IPD) to ensure an appropriate immersive visual experience. Then, the user’s
planned holograms were registered to the fixed head phantom through the MR platform,
as described above in Section 2.3.4 (See Figure 7E,F).

2.4.5. Accuracy Evaluation


Once the hologram deployment was completed, the TRE was immediately analyzed to
characterize the initial registration accuracy of the system. The TRE represents the deviation
between the holographic representation of the fiducial markers in the virtual space and the
corresponding fiducial markers in the physical space (not used for the registration process) after performing the registration [38], and was calculated as the root mean square error
(RMSE) of point pairs in three principal axes directions in virtual or physical space.
Six fiducial markers {A, B, C, D, E, F} attached to the scalp were selected as the known reference points P_i for measurement, as they were not involved in the registration process (see Figure 7G).
For each reference point, the user carefully and accurately placed the tip of the virtual
probe on the perceived real-world marker point using their previous stereoscopic visual
experience on the MR platform (see Figure 7H). The platform immediately reported the
three-dimensional coordinates Q_i of the probe tip in the RICS to the user's panel. Then, the three-dimensional Euclidean distance ||P_i Q_i||_2 between P_i and Q_i in the RICS was computed as Equation (6):

||P_i Q_i||_2 = √((P_ix − Q_ix)^2 + (P_iy − Q_iy)^2 + (P_iz − Q_iz)^2)    (6)

Finally, three registrations were performed by a single user (Z.Q.). After each registra-
tion, the system switched to the “Freeze” state, and all six landmarks were sequentially
targeted with the virtual probe. The PV camera then captured the panel display of the
three-dimensional coordinate list, which was considered the result of one measurement
session. Subsequently, after updating the holograms through the “Unfreeze” command,
the system switched back to the “Freeze” state to conduct the next measurement session.
Each registration included three rounds of measurement; thus, a total of 6 × 3 × 3 = 54 points was acquired and used for TRE analysis.
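In numpy terms, Equation (6) and its summary statistics over the 54 point pairs reduce to the following sketch (placeholder arrays stand in for the measured coordinates):

import numpy as np

P = np.zeros((54, 3))   # known fiducial coordinates in the RICS (placeholder)
Q = np.zeros((54, 3))   # virtual-probe tip readings in the RICS (placeholder)

tre = np.linalg.norm(P - Q, axis=1)   # Equation (6) for every point pair
print(f"TRE: {tre.mean():.1f} +/- {tre.std():.1f} mm, "
      f"range {tre.min():.1f}-{tre.max():.1f} mm")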
To better visualize the TRE, the error distribution across the entire head was ex-
trapolated for each measurement based on the deviations at the six reference points
(see Figure 8C). This was achieved by determining the optimal transformation T^Q_P from {P_A, P_B, P_C, P_D, P_E, P_F} to {Q_A, Q_B, Q_C, Q_D, Q_E, Q_F} using the least squares method.
The original head model (P) was then transformed by this optimal transformation to pro-
duce the registered head model (Q). Finally, the “Model to Model Distance” module
computed the absolute distance point by point between P and Q. The calculated distances were then mapped to Q as scalar values using color mapping.
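The optimal transformation can be computed with the standard SVD-based (Kabsch) least-squares solution; the sketch below is a plausible reading of this step, not necessarily the authors' exact code:

import numpy as np

def best_rigid_transform(P, Q):
    # Least-squares rotation R and translation t with R @ P_i + t ~= Q_i,
    # for (N, 3) arrays of corresponding points.
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # guard against reflections
    t = Qc - R @ Pc
    return R, t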

Figure 8. Results of accuracy measurement. A color gradient scatter plot demonstrates the deviation across all measured points (A). Overlapping polygons colored in red, cyan, and blue depict deviations for three different registrations on the original model (Legend: R = registration, M = measurement, and the numbers denote the respective sessions; e.g., 'R1M1' corresponds to the first registration followed by the first measurement) (B). The measured points extrapolated the full-head error distribution for nine sessions from R1M1 to R3M3 (C). The histograms present the distribution of deviations and their components along the X, Y, and Z axes (D). Box plots compare inter-group deviations grouped by fiducial points (Markers A, B, C, D, E, F) and deviations in the X, Y, and Z components (E). The whiskers represent the minimum and maximum values within 1.5 times the interquartile range (IQR) from the first (Q1) and third (Q3) quartiles. Any data points beyond this range, which are considered outliers, are marked with red crosses (+).

2.4.6. Statistical Analysis

Statistical analysis was conducted using one-way analysis of variance (ANOVA) to test the differences between the TREs of the three registrations and the six fiducial markers. The statistical significance level was set at p < 0.05. All statistical analyses were performed using MATLAB software (version R2022a, MathWorks, Apple Hill Campus, Natick, MA, USA).

3. Results
3.1. Workflow Analysis
The registration procedure was successfully implemented. Setting up the crosshair
simulator position took about 2 to 3 min. Importing the holograms to the MR platform took
about 1 to 2 min. For a detailed view of the user experience, Supplementary Video S1 provides an immersive holographic recording from the user's perspective.

3.2. Accuracy Analysis


The average TRE across all points and registrations was 3.7 ± 1.7 mm, ranging between
1.2 mm and 8.9 mm (see Figure 8A–D). Overall, 81.5% of the measured points exhibited a TRE below 5 mm (see Figure 8A), a cut-off value established based on a previous study that developed and tested a fiducial-based MRN system for accuracy assessment [9]. There was no significant difference among the TREs of the three registrations (4.1 ± 1.9 mm,
3.3 ± 1.4 mm, and 3.6 ± 1.7 mm, p = 0.755), showing a high reproducibility of registration
accuracy, nor was there a significant difference between the TREs of the six fiducial markers
across all three registrations (3.5 ± 1.6 mm, 3.4 ± 1.6 mm, 4.0 ± 1.0 mm, 4.2 ± 2.2 mm,
3.2 ± 1.2 mm, and 3.7 ± 2.3 mm, p = 0.992) (see Table 1, Figure 8E).

Table 1. Accuracy data for all registrations and markers.

Group TRE * [mm] Min [mm] Max [mm]


Registration 1 4.1 ± 1.9 1.2 8.9
Registration 2 3.3 ± 1.4 1.4 6.9
Registration 3 3.6 ± 1.7 1.2 7.9
Marker A 3.5 ± 1.6 2.0 6.8
Marker B 3.4 ± 1.6 1.6 6.3
Marker C 4.0 ± 1.0 2.6 5.4
Marker D 4.2 ± 2.2 1.2 7.9
Marker E 3.2 ± 1.2 1.4 4.8
Marker F 3.7 ± 2.3 1.2 8.9
Total 3.7 ± 1.7 1.2 8.9
* Mean ± Standard deviation.

4. Discussion
This study presents and assesses a novel MRN registration method utilizing a laser
crosshair simulator to mitigate prevalent challenges concerning user dependency, cost-
effectiveness, and accuracy in neurosurgical interventions. The developed system simulates
the scanner frame’s position on the patient and autonomously computes the transformation,
mapping coordinates from the tracking space to the reference image space. Preliminary
evaluations using a head phantom indicated promising results, setting a foundation for
future improvements and potential applications in clinical settings.
The pivotal role of MRN technology in neurosurgical interventions lies in its ability to
offer clinicians an immersive environment. Deep-seated anatomical structures and planned
surgical interventions can be visualized within this environment through the patient’s surface.
Consequently, much research has confirmed the benefits of MRN, such as portability, low cost, ease of use, free-hand interaction, intuitive understanding, and improved ergonomics
compared to pointer-based and microscope-based conventional navigation systems [6–9,44,48].
The former requires neurosurgeons to continuously switch between surgical instruments and
the pointer for navigation support, interrupting surgical operations and leading to potential
distractions and fatigue. While offering advantages like superimposing virtual anatomical
structures onto the surgical area, the latter comes with high costs and requires substantial hardware components. On the other hand, however, concerns about the accuracy and reliability of MRN have frequently been raised [9,10,30,37,39,41–43,49,50].
Ensuring a compelling holographic visualization experience for users is crucial, primar-
ily achieved through the accurate overlay of virtual content onto the surgical field [30,34,38].
Therefore, in most publications concerning MRN-assisted surgical interventions, the em-
phasis was placed on registration, tracking, and technical challenges encountered.
The prevalent strategy for facilitating MRN support compels users to manually align
the virtual objects with the patient’s physical counterparts [6–8,30,32,45,48]. From the user’s
perspective, the virtual objects’ position, orientation, and scale are manually adjusted to
align with the actual ones. Subsequently, through the HL-2's inherent spatial map, the
virtual objects are anchored to the patient’s actual location. This process negates the need
for additional software or algorithmic resources. Nonetheless, this approach only yields
statically registered holograms, necessitating a re-registration every time a spatial relation
shifts between the patient’s head and the reference spatial map [9,47]. Moreover, the steep
learning curve associated with manual registration implies it is both time-consuming and
less dependable [6–9,43].
Landmark-based registration uses uniquely identifiable natural or artificial landmarks
for paired-point matching. These markers can be pasted or anchored on the patient’s scalp
or custom pointers which function in a similar way to the tracking pointer utilized in
conventional navigation systems [9,10,44,49,53]. It is quicker to implement than manual
alignment [9,38]. However, shifting of the markers prior to registration or wear and tear of the pointers over time introduce fiducial localization errors (FLE) that impact the final registration accuracy. Notably, this method's robustness depends on several factors, including potential skin shift during imaging, the patient's positioning in the OR, and the accuracy of targeting the markers with a physical pointer [9,10].
Within the context of MRN, markerless registration, typically surface-based, distin-
guishes itself from manual or fiducial-based registration. Through computer vision (CV)
algorithms, they automatically acquire partial surface information from the patient and
correlate it with surface data from corresponding positions in the reference image, thus
eliminating the time-consuming nature of complicated operations and the potential hazards
of contact [33,54–58]. Nonetheless, the challenges of achieving robust and precise registra-
tion are magnified by the roughness of spatial mapping, as well as by the lack of distinctive
feature points (e.g., in a prone registration session) and sensitivity to noise (e.g., geometric
distortions in the original image, notably in an MRI image). Additionally, real-time tracking
and rendering are necessary to enhance the signal-to-noise ratio of the captured surface
data, demanding considerable computational resources [54–56]. Therefore, users may need
to reduce the visualization overhead, for example, by lowering the frame rate or decreasing the surface sampling rate [57]. Holographic Remoting, a feature introduced by Microsoft in 2020, enables the HL-2 to offload the rendering process to a remote computer
or server and stream the rendered content back to the device for display. While this feature
allows for leveraging more robust computing resources and alleviates rendering pressure
on the HL-2, it also introduces complexity by making the system dependent on external
computing resources, potentially increasing costs.
Although a vast amount of research has evaluated the accuracy of MRN systems
based on various registration methods across various clinical interventions, novel
registration methods that overcome these challenges still need to be developed. Therefore,
this paper proposes the concept of simulator-based registration rather than focusing on
specific neurosurgical intervention procedures.
Simulator-based registration offers a more straightforward and confidence-enhancing
approach. During the registration process, the user's task is merely to physically align two laser cross-projections, which is easier to achieve than touchless virtual object handling and eliminates the issue of pointer wear. There is no need to segment or predefine additional virtual objects for registration purposes (e.g., selecting virtual marker points or defining regions of interest), as the scanned reference planes are already prepared, independent of the user, in the reference image DICOM. These three reference planes are
mutually orthogonal, imbuing the registration process with globally averaged character-
istics. Furthermore, the system is easy to assemble and configure. On the hardware side,
the production of the crosshair simulator is simple and comes with a low manufacturing
cost. On the software side, the involved tracking and rendering do not require significant
computing resources.
In the novel MRN system proposed in this study, the fusion of the crosshair simulator
with the MR Platform offers valuable navigational advantages (see Table 2), which can be
delineated in three key aspects:
• From a technical perspective, the crosshair simulator provides surgeons with an in-
tuitive reference for physical positioning, while the MR Platform furnishes a visual
reference for anatomical structures. This combination ensures reliable physical posi-
tioning and aids in a deeper comprehension of the surgical area.
• Regarding visual tracking, the crosshair simulator furnishes a stable CV tracking
reference in physical space, whereas the MR Platform ensures visual stability through
spatial anchors as the user moves. The spatial anchors signify holographic visualiza-
tion optimization when surgeons need to relocate or change angles during surgery.
• In the context of practical workflow, the crosshair simulator can be rapidly deployed
during surgical preparation, followed by the activation of the MR Platform to provide
clear visual references and planning for surgeons. In the event of technical failures in
either system, the crosshair simulator can act as a backup physical reference location,
while the MR Platform can concurrently optimize CV tracks. Hence, this fusion
enhances efficiency, reliability, and robustness.

Table 2. Complementarity and compatibility analysis of the crosshair simulator and MR Platform.

Aspect | The Crosshair Simulator | The MR Platform | Complementarity | Compatibility
Technical principle | Provides physical positioning | Offers 3D visualization and virtual interaction | Combines benefits of physical and 3D visualization | Calibration process ensures synchronization and consistency between the crosshair simulator and MR
Visual tracking | Fixed visual tracking reference | Spatial anchors stabilize the MR view | Physical location backup and optimized CV tracking | Seamless transition between the two tracking modes
Practical workflow | Rapid physical positioning with low user dependency | Provides 3D visual references once activated | Quick re-registration; remedial static guidance | Efficient and intuitive registration, simplifying interaction

It is worth noting that the fusion of the crosshair simulator and the MR Platform follows a
structured approach, commonly referred to as “hybrid combination” in the mechatronics
field. This concept, initially introduced by Zhang et al. [68], adheres to the principles of
complementarity and compatibility. In essence, the complementarity principle implies
that each technology compensates for the deficiencies of the other, while the compatibility
principle ensures seamless integration of the two technologies [68]. More significantly,
this study introduces a novel registration method centered around the development and
utilization of the crosshair simulator. The key innovation in this study revolves around
applying the crosshair simulator to establish an identical coordinate origin as that of the CT
or MRI. This foundational technique serves as the bedrock of the entire system, facilitating
seamless integration and precise alignment, effectively addressing a critical challenge in
MRN. While prior MRN studies were acknowledged, the specific contribution of this
study primarily focuses on the comprehensive exploration and validation of the crosshair
simulator’s pivotal role in achieving this coordination.
The crosshair simulator utilizes laser-based positioning. Using lasers for preoperative or intraoperative localization is not a new technique; examples include obtaining patient surface data [5,57,69,70], indicating planned puncture approaches [71,72], and guiding correct positioning for radiotherapy [45,73]. These applications, however, typically exploit only the point information of laser projections. However, the principle employed in this study entirely
differs from previous applications, as it leverages both point and normal vector information
in the projection transformation of the laser crosshair on 3D surfaces.
In the presented approach, a pair of laser crosshairs was used, where the projection of
a single crosshair on a 3D curved surface can constrain two translation degrees of freedom
(DOFs) (excluding the normal vector direction) and three rotational DOFs for the simulator.
Another crosshair’s projection direction is perpendicular to the first, compensating for the
missing translational DOF and providing redundant constraints to ensure reliability and
accuracy. Additionally, the rigidly fixed relationship between the laser emitters constitutes
a constraint on the scale factor. Therefore, through a complete constraint of six DOFs plus
scale, a one-to-one mapping between the crosshair simulator and the projected lines on
the patient’s head is achieved, enabling the simulator to reproduce its position during the
acquisition of reference images by the scanner. The calibration of the simulator’s intrinsic
and extrinsic parameters using calibration spheres is similarly based on this principle.
Although the proposed method shares some similarities with the standard optical
navigation paradigm employing surface matching techniques based on a non-contact laser
pointer, such as exploring the patient’s surface structure data through laser projection,
it also has key differences. In the data acquisition stage of standard optical navigation,
the user must manipulate a laser pointer (e.g., Z-Touch) to 'capture' a relatively small number (around 100–200) of random points from the patient's craniofacial area,
allowing for interpolation of the local shape of the surface. Nonetheless, in the proposed
framework, the key lies in how the laser crosshair lines ‘twist’ on the surface. Once these
projection lines perfectly match the reference lines, it effectively determines the position
of the reference plane in image space. In this process, the curvature of the surface plays a
critical role. Theoretically, the curvature of 3D surfaces may affect the efficiency of DOFs
constraints. It was found that regions with smaller facial curvature radii, such as the nasal
and zygomatic regions, have more significant surface curvature and higher variations in
normal vectors. Even slight deviations in the laser projection angle from the ideal position
on the 3D surface can cause significant deformation in the crosshair projection, leading
to a mismatch with the reference lines (see Figure 9B). Compared to flatter and smoother regions with larger curvature radii, such as the forehead, the non-flat regions therefore have higher DOF constraint efficiency and are more valuable for registration (see Figure 9A,B).
Although the term “accuracy” of MRN systems is not consistently defined, the TRE is a
commonly used metric for evaluating navigation accuracy at different stages of the surgical
intervention, ranging from initial registration [38] to registration quality in the late course
of surgery. While other issues might arise later in the surgical procedure, such as non-linear
deformations caused by brain-shift, the importance of initial registration should not be
overlooked. The accuracy of preliminary registration will directly impact the accuracy and
reliability of subsequent steps. TRE is defined as the distance between specifically selected
points in virtual space and their corresponding points in physical space [34]. In this study,
the TRE of the MRN system based on the crosshair simulator was measured in 3D space,
with an average of 3.7 ± 1.7 mm and a range between 1.2 mm and 8.9 mm. Therefore, its accuracy may not yet be sufficient for neurosurgical microsurgeries requiring high accuracy, but it is acceptable for non-microsurgical procedures (e.g., extra-ventricular drainage (EVD)) and the macroscopic parts of microsurgeries (e.g., craniotomy planning). The noted accuracy is
also comparable to that in fiducial-based registration. The TRE as presented by the system
represents the deviation at the surface level. Although using markers like fiducials or bone
screws allows for the acquisition of points both externally and within the images to calculate
the Euclidean distance, this measure still does not reflect the actual target level since the
target (e.g., tumor) is inaccessible at that stage. Recognizing this limitation, which is also
observed in other studies, alternative non-invasive methods might be explored in future research, for instance, using non-invasive markers or imaging techniques to indirectly estimate the TRE at the deeper target level. However, it is acknowledged that directly
assessing the target, especially if it is a tumor, remains challenging at this stage. A literature
review on 3D TRE of MRN systems implemented on standalone HMDs is presented in
Table 3 [7,8,32,33,35,36,44,48,50–52,56,58]. However, it must be emphasized that these
studies cannot directly compare TRE due to their different purposes or measurement
methods [38].

Table 3. Literature Review of 3D TRE measured in Standalone HMD-based MRN Systems.

Reference | Registration Method | Object | Measurement Method * | Accuracy # [mm]
Li et al., 2018 [7] | Manual | Patient | Indirect | 4.34 ± 1.63
Li et al., 2023 [8] | Manual | Patient | Indirect | 5.46 ± 2.22
Gibby et al., 2019 [48] | Manual | Phantom | Indirect | 2.50 ± 0.44
McJunkin et al., 2019 [32] | Manual | Phantom | Direct | 5.76 ± 0.54
Zhou et al., 2022 [36] | Fiducial | Phantom and Patient | Direct | Phantom: 1.65; Patient: 1.94
Gibby et al., 2021 [44] | Fiducial | Phantom | Indirect | 3.62 ± 1.71
Gsaxner et al., 2021 [51] | Fiducial | Phantom | Direct | 1.70 ± 0.81
Martin-Gomez et al., 2023 [52] | Fiducial | Phantom | Direct | 3.64 ± 1.47
Zhou et al., 2023 [35] | Fiducial | Phantom | Direct | 1.74 ± 0.38
Eom et al., 2022 [50] | Fiducial | Phantom | Direct | 3.12 ± 2.53
Akulauskas et al., 2023 [49] | Fiducial | Phantom | Direct | Stationary: 3.32 ± 0.02; Dynamic: 4.77 ± 0.97
Von Haxthausen et al., 2021 [56] | Surface | Phantom | Direct | 14.0
Pepe et al., 2019 [33] | Surface | Phantom | Direct | X: 3.3 ± 2.3; Y: −4.5 ± 2.9; Z: −9.3 ± 6.1
Liebmann et al., 2019 [58] | Surface | Phantom | Indirect | 2.77 ± 1.46

* Two types of measurement methods: direct and indirect. Direct: the distance between the real target and the perceived virtual target is directly measured by a caliper or a tracked probe linked to a reliable optical tracking system (e.g., conventional navigation). Indirect: TRE is characterized by the distance between the annotated target and the tip of a catheter or needle measured on the post-operative image. # Mean ± Standard deviation.

Previous work has categorized the measurement methods of 3D TRE into two types:
direct and indirect. Calipers or commercial navigation probes are typically used to directly
read the error in the former way [32,33,35,36,50–52,56,58]. It is worth mentioning that
a recent study reported a more precise and efficient measurement by cleverly utilizing
customized probes within the MRN system [49]. Nonetheless, the accessibility of the mea-
surement tools could lead to an underestimation of TRE, as virtual targets may be located
beneath the patient or phantom. In indirect measurements, users usually place a puncture
needle or other markers at the position of the virtual target to mark its location [7,8,44,48,58].
Then, the position is re-registered onto the original image through “post-operative” imag-
ing to calculate the re-registration error as TRE. Nevertheless, this measurement method
may introduce changes to the original structure due to needle insertion (e.g., the shrink of
the ventricle after EVD), potentially affecting the measurement’s repeatability and reliabil-
ity [7,8]. Therefore, in this study, a direct and non-invasive approach was adopted, in which the point corresponding to the actual marker was manually acquired in the virtual space and the distance to the corresponding virtual target point was measured. Placing the virtual
probe at the same marker point multiple times in each registration helps control random
errors caused by the lack of tactile feedback. However, a potential system error may be
introduced because a 3D-printed head phantom was used instead of an actual patient’s
head. Although using 3D-printed phantoms as ground truth is a common practice for
preclinical validation of new systems’ feasibility, it may neglect various aspects that could
occur during an actual acquisition. This includes skin shift when the patient's head is fixed in the head clamp, deformations caused by intubation or the insertion of a nasogastric tube, unknown changes in muscle tension leading to skin deformation, and the effects of head positioning using headphones or foam pads for MRI. Additionally, it neglects the geometric distortions of the original imaging itself, such as those caused by resolution and slice thickness [74].

Figure 9. Comparison of laser crosshair projection on areas with a large curvature radius (A) and a small curvature radius (B). The projection simulation was conducted in MATLAB R2022a, and to simplify calculations, two parabolic surfaces with different apertures and curvatures were plotted. Assuming a slight angular disparity between the simulator laser (depicted as the red line) and the scanner laser (depicted as the green line) during the deployment of the crosshair simulator, rather than being perfectly coaxial, the projection of the simulator's crosshair on the patient's skin (depicted as the red curve) will distort at the reference lines (depicted as the green curve), resulting in an imperfect match. This mismatch is more pronounced in areas with a smaller curvature radius, aiding users in promptly detecting registration errors and adjusting the simulator's deployment. Hence, these regions are more critical for the value of registration.

Nonetheless, the presented registration method does have some limitations. Firstly,
restrictions on registration and tracking arise from software and hardware. The current
system introduces a dedicated Vuforia SDK for real-time tracking through the PV camera
of HL-2. While plane target deployment to the crosshair simulator is straightforward,
as many studies have shown, the user’s perspective would affect the tracking quality of
planar image targets [45,53,75]. When users observe from positions with large incident
angles, the HL-2 tracking becomes unstable, leading to slight jitter or even loss of tracking.
This impacts the accuracy of initial registration and affects the stability of the holographic
content after registration. To overcome or limit this effect, the holograms were observed from angles perpendicular to the marker image, which yielded the optimal visualization results. As for maintaining the holographic content after registration, a "Freeze" mode was
developed that utilizes anchors in the HL-2's spatial mapping to temporarily maintain the
hologram’s position, replacing real-time image target tracking. This prevents inappropriate
updates to the holographic content when users are in unfavorable angles. However, as
reported in previous literature, a holographic drift was also observed in the “Freeze” mode.
This drift may be due to the built-in sensors' hardware limitations and the spatial mapping coarseness of the HL-2, leading to an accumulated estimation error in the SLAM over time. Nevertheless, loop closure detection can eliminate this drift when switching to the
“Unfreeze” mode, which re-enables real-time tracking of the image target. Nonetheless,
whether the “Freeze/Unfreeze” switching can be reasonably triggered depends on the
user monitoring whether the virtual content has deviated from the registered position.
A potential improvement strategy could involve replacing plane targets with cylindrical
surface targets to reduce user reliance on monitoring. In this scenario, the PV camera can detect the image target perpendicular to it at any angle, allowing for stable tracking
and positioning.
Second, the potential challenges of using this novel registration method for surgery
in the prone position should be considered. In most cases, preoperative CT or MRI scans
are performed in the supine position, making the marking of projection lines for surgery
in the prone position impractical. The importance of considering the surgical position in
the preoperative imaging process is highlighted in recent studies. Furuse et al. evaluated
the clinical accuracy of navigation systems [76]. They emphasized that the surface-based
registration method was only recommended for the supine position due to skin distortion
frequently observed in the lateral region. In contrast, Dho et al. demonstrated the superior
accuracy of neuronavigational localization with prone-position MRI during prone-position
surgery for posterior fossa (PF) lesions [77]. These findings suggest that acquiring preoper-
ative data in the prone position, or using intraoperative imaging where a prone scan can be
obtained, can improve registration accuracy. This would make registration based on the
crosshair simulator feasible. Therefore, future research should broaden the applicability
of the crosshair simulator for registering prone patients after an intra-operative scan or
consider acquiring preoperative imaging in the prone position to align with the surgical
position for more precise localization.
Thirdly, the calibration process of the crosshair simulator remains user-dependent.
On the one hand, intrinsic calibration relies on the user’s assessment of whether the laser
projection lines perfectly match the calibration arcs. Based on this, the coplanarity and
verticality of the two laser emitters are adjusted. On the other hand, extrinsic calibration
necessitates the user to manually align the virtual calibration sphere with the physical
calibration sphere through slider adjustments on the MR platform. Errors generated by
the user during both processes can affect calibration accuracy, subsequently impacting
the overall performance of the MRN system. Future research endeavors should explore
methods to eliminate this dependency. One potential improvement strategy for intrinsic
calibration is to equip the physical calibration sphere with a self-changeable robot calibra-
tion module [78–80], effectively transforming it into a self-reconfigurable robot calibration
sphere. Like the “self-docking” method reported by Wu et al. [80], this can be achieved by
mounting laser sensors or cameras on the physical calibration sphere, allowing it to au-
tonomously locate the target module on the rack and laser emitters, thus self-reconfiguring
and establishing the PCP or SCP relationships. Regarding extrinsic calibration, equipping
the physical calibration sphere with CV targets could enable automated geometric calibra-
tion. However, the challenge lies in the psychophysical aspect, where an aligned geometric
position of the virtual calibration sphere does not guarantee a correct position within the
user’s perceptual system due to the perspective difference between the HL-2’s PV camera
and the user’s eyeballs. Lastly, this registration method only partially eliminates user
dependencies. Although the marker projection lines were printed with the phantom in
the experiments to simplify the procedure, the process becomes more complex in a clinical
environment. During MRI or CT reference image acquisition, users may need to manually
draw the marker lines on the patient’s skin at the beginning of the acquisition session.
Therefore, readers may question whether these marking lines remain robust and valid, especially in MRI data collection involving multiple localization procedures. In fact, an MRI scan begins with a 'localizer scan,' defining the RICS at the position where
the gantry laser positioning lines are projected. Subsequent localizers (defining the ROI
for different modality scanning sequences) are set within this coordinate system without
altering the system itself. Thus, any tilt or movement in later scans will not affect the
validity of the marking lines. If the patient’s head position changes between different scans,
images can be merged or fused using open-source software like 3D Slicer, ensuring that
the marker lines remain valid. Although this might sound complex, the fusion of image
series is a standard procedure in multi-modal navigation and can be easily implemented
using 3D Slicer. So, the marking lines can ensure robustness and validity. Moreover,
another reasonable concern is whether making markings on the skin (such as drawing lines) could introduce geometric distortions. While evidence from adjacent disciplines
supports the routine use of therapeutic room lasers and skin marker lines for pre-treatment
positioning [45,73], the specific impact of manual markings on the accuracy of registration
based on the crosshair simulator still needs to be determined in the specific context of neurosurgery. Additional challenges that must be considered include
calibrating the scanner’s tilted gantry, concerns about additional radiation, and challenges
in users’ holographic perception. These aspects warrant further investigation and remain
subjects for future research.
Besides those rather general limitations of the presented approach, there are also study-
specific limitations. First, this proof-of-concept study only encompassed an experiment involving a single phantom and a single user, lacking the reliability afforded by multiple testers and broader statistical analysis. Furthermore, the current research framework does not enroll
human participants, hindering a comprehensive assessment of its performance in clinical
settings. To further explore its clinical adaptability, systematic testing on human volunteers
is planned. For real patients, ensuring the stable fixation of the crosshair simulator to the
head is paramount. Consideration can be given to using a rigid headframe connected to
the patient’s bed or operating table. Medical tape or other fixation devices are also viable
alternatives for ensuring the stability of head positioning during the surgical procedure.
Given patient comfort and safety, these fixation solutions should be easy to rapidly install
and dismantle. These strategies aim to enhance positional accuracy and reduce errors
arising from head movement.
Despite the limitations, the novel registration method demonstrates promising prospects
and merits further development. Compared to previous techniques, using laser crosshair
alignment offers a more intuitive and efficient approach to registration, reducing the
complexities associated with virtual object handling. The simplicity in hardware and
software configuration, combined with the potential to enhance accuracy and adaptability
in intervention procedures, positions this method as a valuable advancement in low-cost,
easy-to-use MRN systems, potentially offering improvements in surgical outcomes.

5. Conclusions
This study presented a novel registration method for an MRN system based on a laser
crosshair simulator. It provided initial evidence of its feasibility as a low user-dependent,
cost-effective, and relatively reliable approach. The method showed encouraging results
in efficiency and intuitiveness compared to previous techniques. Its simplicity in both
hardware and software configuration, combined with the potential for enhancing accuracy
and adaptability in intervention procedures, marks this method as a valuable advancement.
Although refinements in accuracy are still possible, the current study lays the groundwork
for improvements in low-cost, easy-to-use MRN systems, positioning this approach as
a promising avenue for enhancing surgical outcomes. Future research may continue to
build upon these strengths while evaluating their utility and effectiveness in broader
clinical settings.

Supplementary Materials: The following supporting information can be downloaded at: https://
www.mdpi.com/article/10.3390/bioengineering10111290/s1, Supplementary Material S1: Protocol:
Preparation of a 3D Printed Skull Model with Laser Crosshair Projection Using 3D Slicer Software.
Supplementary Material S2: Protocol: Preparation of Holograms for Validation. Supplementary
Video S1: MR platform: Holograms registration based on laser crosshair simulator, available online
at https://youtu.be/2FSBHFEGIgg (Accessed date: 12 August 2023).
Author Contributions: Conceptualization, Z.Q., M.H.A.B. and J.Z.; methodology, Z.Q., M.H.A.B.,
C.N., X.C., X.X., Q.W. and J.Z.; software, Z.Q., X.X., Q.W., Z.G., S.Z., H.J., J.W. and X.C.; validation, Z.Q.
and J.Z.; formal analysis, Z.Q. and J.Z.; investigation, Z.Q.; resources, X.X.; data curation, Z.Q. and
J.Z.; writing—original draft preparation, Z.Q.; writing—review and editing, M.H.A.B.; visualization,
Z.Q.; supervision, M.H.A.B., C.N., X.C. and J.Z.; project administration, M.H.A.B., C.N., X.C. and J.Z.;
funding acquisition, X.X., X.C. and J.Z. All authors have read and agreed to the published version of
the manuscript.
Funding: This research received no external funding.


Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Written informed consent has been obtained from the patient to
publish this paper.
Data Availability Statement: The data presented in this study are available upon reasonable request
from the corresponding authors.
Acknowledgments: We would like to express our sincere appreciation to Hui Zhang for her invalu-
able assistance during this research.
Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design
of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or
in the decision to publish the results.

Abbreviations
The following abbreviations are used in this manuscript:
2D Two-Dimensional
3D Three-Dimensional
AR Augmented Reality
CBCT Cone Beam Computed Tomography
CT Computed Tomography
CV Computer vision
DICOM Digital Imaging and Communications in Medicine
DOF Degrees of Freedom
EVD Extra-Ventricular Drainage
FLE Fiducial Localization Errors
FoV Field of View
GCA Great Circle Arc
HMD Head-Mounted Display
HL-2 HoloLens-2
IMU Inertial Measurement Unit
IPD Interpupillary Distance
MR Mixed Reality
MRI Magnetic Resonance Imaging
MRN Mixed Reality Navigation
MRTK Mixed Reality Toolkit
OR Operating Room
PV Photos-Videos
PCP Primary Calibration Position
RICS Reference Image Coordinate System
RMSE Root Mean Square Error
SCA Small Circle Arc
SCS Simulator Coordinate System
SCP Secondary Calibration Position
SDK Software Development Kit
SLAM Synchronous Localization and Mapping
TRE Target Registration Error
VCS Virtual Coordinate System
VLC Visible Light Cameras
VR Virtual Reality
UWP Universal Windows Platform

References
1. Carl, B.; Bopp, M.; Saß, B.; Pojskic, M.; Gjorgjevski, M.; Voellger, B.; Nimsky, C. Reliable navigation registration in cranial and
spine surgery based on intraoperative computed tomography. Neurosurg. Focus 2019, 47, E11. [CrossRef]
2. Watanabe, Y.; Fujii, M.; Hayashi, Y.; Kimura, M.; Murai, Y.; Hata, M.; Sugiura, A.; Tsuzaka, M.; Wakabayashi, T. Evaluation of
errors influencing accuracy in image-guided neurosurgery. Radiol. Phys. Technol. 2009, 2, 120–125. [CrossRef] [PubMed]
3. Bopp, M.H.A.; Corr, F.; Saß, B.; Pojskic, M.; Kemmling, A.; Nimsky, C. Augmented Reality to Compensate for Navigation
Inaccuracies. Sensors 2022, 22, 9591. [CrossRef] [PubMed]
4. Kantelhardt, S.R.; Gutenberg, A.; Neulen, A.; Keric, N.; Renovanz, M.; Giese, A. Video-assisted navigation for adjustment of
image-guidance accuracy to slight brain shift. Oper. Neurosurg. 2015, 11, 504–511. [CrossRef] [PubMed]
5. Stieglitz, L.H.; Fichtner, J.; Andres, R.; Schucht, P.; Krähenbühl, A.-K.; Raabe, A.; Beck, J. The silent loss of neuronavigation
accuracy: A systematic retrospective analysis of factors influencing the mismatch of frameless stereotactic systems in cranial
neurosurgery. Neurosurgery 2013, 72, 796–807. [CrossRef] [PubMed]
6. Incekara, F.; Smits, M.; Dirven, C.; Vincent, A. Clinical Feasibility of a Wearable Mixed-Reality Device in Neurosurgery. World
Neurosurg. 2018, 118, e422–e427. [CrossRef]
7. Li, Y.; Chen, X.; Wang, N.; Zhang, W.; Li, D.; Zhang, L.; Qu, X.; Cheng, W.; Xu, Y.; Chen, W.; et al. A wearable mixed-reality
holographic computer for guiding external ventricular drain insertion at the bedside. J. Neurosurg. 2018, 131, 1599–1606.
[CrossRef]
8. Li, Y.; Zhang, W.; Wang, N. Wearable mixed-reality holographic guidance for catheter-based basal ganglia hemorrhage treatment.
Interdiscip. Neurosurg. 2023, 34, 101821. [CrossRef]
9. Qi, Z.; Li, Y.; Xu, X.; Zhang, J.; Li, F.; Gan, Z.; Xiong, R.; Wang, Q.; Zhang, S.; Chen, X. Holographic mixed-reality neuronavigation
with a head-mounted device: Technical feasibility and clinical application. Neurosurg. Focus 2021, 51, E22. [CrossRef]
10. van Doormaal, T.P.C.; van Doormaal, J.A.M.; Mensink, T. Clinical Accuracy of Holographic Navigation Using Point-Based
Registration on Augmented-Reality Glasses. Oper. Neurosurg. 2019, 17, 588–593. [CrossRef]
11. Meola, A.; Cutolo, F.; Carbone, M.; Cagnazzo, F.; Ferrari, M.; Ferrari, V. Augmented reality in neurosurgery: A systematic review.
Neurosurg. Rev. 2017, 40, 537–548. [CrossRef] [PubMed]
12. Kiya, N.; Dureza, C.; Fukushima, T.; Maroon, J.C. Computer Navigational Microscope for Minimally Invasive Neurosurgery.
Minim. Invasive Neurosurg. 1997, 40, 110–115. [CrossRef] [PubMed]
13. Léger, É.; Drouin, S.; Collins, D.L.; Popa, T.; Kersten-Oertel, M. Quantifying attention shifts in augmented reality image-guided
neurosurgery. Healthc. Technol. Lett. 2017, 4, 188–192. [CrossRef] [PubMed]
14. Drouin, S.; Kochanowska, A.; Kersten-Oertel, M.; Gerard, I.J.; Zelmann, R.; De Nigris, D.; Bériault, S.; Arbel, T.; Sirhan, D.; Sadikot,
A.F.; et al. IBIS: An OR ready open-source platform for image-guided neurosurgery. Int. J. Comput. Assist. Radiol. Surg. 2017, 12,
363–378. [CrossRef] [PubMed]
15. De Mauro, A.; Raczkowsky, J.; Halatsch, M.E.; Wörn, H. Mixed Reality Neurosurgical Microscope for Training and Intra-operative
Purposes. In Virtual and Mixed Reality; Springer: Berlin/Heidelberg, Germany, 2009; pp. 542–549.
16. Pojskić, M.; Bopp, M.H.A.; Saß, B.; Carl, B.; Nimsky, C. Microscope-Based Augmented Reality with Intraoperative Computed
Tomography-Based Navigation for Resection of Skull Base Meningiomas in Consecutive Series of 39 Patients. Cancers 2022, 14,
2302. [CrossRef]
17. Bopp, M.H.A.; Saß, B.; Pojskić, M.; Corr, F.; Grimm, D.; Kemmling, A.; Nimsky, C. Use of Neuronavigation and Augmented
Reality in Transsphenoidal Pituitary Adenoma Surgery. J. Clin. Med. 2022, 11, 5590. [CrossRef]
18. Mahvash, M.; Besharati Tabrizi, L. A novel augmented reality system of image projection for image-guided neurosurgery. Acta
Neurochir. 2013, 155, 943–947. [CrossRef]
19. Besharati Tabrizi, L.; Mahvash, M. Augmented reality–guided neurosurgery: Accuracy and intraoperative application of an
image projection technique. J. Neurosurg. JNS 2015, 123, 206–211. [CrossRef]
20. Yavas, G.; Caliskan, K.E.; Cagli, M.S. Three-dimensional–printed marker–based augmented reality neuronavigation: A new
neuronavigation technique. Neurosurg. Focus 2021, 51, E20. [CrossRef]
21. Shu, X.-j.; Wang, Y.; Xin, H.; Zhang, Z.-z.; Xue, Z.; Wang, F.-y.; Xu, B.-n. Real-time augmented reality application in presurgical
planning and lesion scalp localization by a smartphone. Acta Neurochir. 2022, 164, 1069–1078. [CrossRef]
22. Alves, M.O.; Dantas, D.O. Mobile Augmented Reality for Craniotomy Planning. In Proceedings of the 2021 IEEE Symposium on
Computers and Communications (ISCC), Athens, Greece, 5–8 September 2021; pp. 1–6.
23. de Almeida, A.G.C.; Fernandes de Oliveira Santos, B.; Oliveira, J.L.M. A Neuronavigation System Using a Mobile Augmented
Reality Solution. World Neurosurg. 2022, 167, e1261–e1267. [CrossRef] [PubMed]
24. Cenzato, M.; Fratianni, A.; Stefini, R. Using a Smartphone as an Exoscope Where an Operating Microscope is not Available. World
Neurosurg. 2019, 132, 114–117. [CrossRef]
25. Deng, W.; Li, F.; Wang, M.; Song, Z. Easy-to-Use Augmented Reality Neuronavigation Using a Wireless Tablet PC. Stereotact.
Funct. Neurosurg. 2013, 92, 17–24. [CrossRef] [PubMed]
26. Satoh, M.; Nakajima, T.; Yamaguchi, T.; Watanabe, E.; Kawai, K. Application of Augmented Reality to Stereotactic Biopsy. Neurol.
Med.-Chir. 2019, 59, 444–447. [CrossRef] [PubMed]
27. Chiou, S.-Y.; Zhang, Z.-Y.; Liu, H.-L.; Yan, J.-L.; Wei, K.-C.; Chen, P.-Y. Augmented Reality Surgical Navigation System for External
Ventricular Drain. Healthcare 2022, 10, 1815. [CrossRef] [PubMed]
28. Abe, Y.; Sato, S.; Kato, K.; Hyakumachi, T.; Yanagibashi, Y.; Ito, M.; Abumi, K. A novel 3D guidance system using augmented
reality for percutaneous vertebroplasty: Technical note. J. Neurosurg. Spine SPI 2013, 19, 492–501. [CrossRef] [PubMed]
29. Ferrari, V.; Cutolo, F. Letter to the Editor: Augmented reality–guided neurosurgery. J. Neurosurg. JNS 2016, 125, 235–237.
[CrossRef] [PubMed]
30. Gsaxner, C.; Li, J.; Pepe, A.; Jin, Y.; Kleesiek, J.; Schmalstieg, D.; Egger, J. The HoloLens in medicine: A systematic review and
taxonomy. Med. Image Anal. 2023, 85, 102757. [CrossRef]
31. Hayasaka, T.; Kawano, K.; Onodera, Y.; Suzuki, H.; Nakane, M.; Kanoto, M.; Kawamae, K. Comparison of accuracy between
augmented reality/mixed reality techniques and conventional techniques for epidural anesthesia using a practice phantom
model kit. BMC Anesthesiol. 2023, 23, 171. [CrossRef]
32. McJunkin, J.L.; Jiramongkolchai, P.; Chung, W.; Southworth, M.; Durakovic, N.; Buchman, C.A.; Silva, J.R. Development of a
Mixed Reality Platform for Lateral Skull Base Anatomy. Otol. Neurotol. Off. Publ. Am. Otol. Soc. Am. Neurotol. Soc. Eur. Acad. Otol.
Neurotol. 2018, 39, e1137–e1142. [CrossRef]
33. Pepe, A.; Trotta, G.F.; Mohr-Ziak, P.; Gsaxner, C.; Wallner, J.; Bevilacqua, V.; Egger, J. A Marker-Less Registration Approach for
Mixed Reality-Aided Maxillofacial Surgery: A Pilot Evaluation. J. Digit. Imaging 2019, 32, 1008–1018. [CrossRef] [PubMed]
34. Peters, T.M.; Linte, C.A.; Yaniv, Z.; Williams, J. Mixed and Augmented Reality in Medicine; CRC Press: Boca Raton, FL, USA, 2018.
35. Zhou, Z.; Yang, Z.; Jiang, S.; Zhuo, J.; Li, Y.; Zhu, T.; Ma, S.; Zhang, J. Validation of a surgical navigation system for hypertensive
intracerebral hemorrhage based on mixed reality using an automatic registration method. Virtual Real. 2023, 27, 2059–2071.
[CrossRef]
36. Zhou, Z.; Yang, Z.; Jiang, S.; Zhuo, J.; Zhu, T.; Ma, S. Surgical Navigation System for Hypertensive Intracerebral Hemorrhage
Based on Mixed Reality. J. Digit. Imaging 2022, 35, 1530–1543. [CrossRef] [PubMed]
37. Gharios, M.; El-Hajj, V.G.; Frisk, H.; Ohlsson, M.; Omar, A.; Edström, E.; Elmi-Terander, A. The use of hybrid operating rooms in
neurosurgery, advantages, disadvantages, and future perspectives: A systematic review. Acta Neurochir. 2023, 165, 2343–2358.
[CrossRef] [PubMed]
38. Fick, T.; van Doormaal, J.A.M.; Hoving, E.W.; Willems, P.W.A.; van Doormaal, T.P.C. Current Accuracy of Augmented Reality
Neuronavigation Systems: Systematic Review and Meta-Analysis. World Neurosurg. 2021, 146, 179–188. [CrossRef] [PubMed]
39. Fick, T.; Meulstee, J.W.; Köllen, M.H.; Van Doormaal, J.A.M.; Van Doormaal, T.P.C.; Hoving, E.W. Comparing the influence
of mixed reality, a 3D viewer, and MRI on the spatial understanding of brain tumours. Front. Virtual Real. 2023, 4, 1214520.
[CrossRef]
40. Colombo, E.; Bektas, D.; Regli, L.; van Doormaal, T. Case report: Impact of mixed reality on anatomical understanding and
surgical planning in a complex fourth ventricular tumor extending to the lamina quadrigemina. Front. Surg. 2023, 10, 1227473.
[CrossRef]
41. Colombo, E.; Esposito, G.; Regli, L.; Fierstra, J.; Seboek, M.; Germans, M.; van Doormaal, T. Mixed Reality applied to surgical
planning and customization of Carotid Endarterectomies. Brain Spine 2023, 3, 102030. [CrossRef]
42. Colombo, E.; Regli, L.; Esposito, G.; Germans, M.; Fierstra, J.; Serra, C.; van Doormaal, T. Impact of mixed reality on surgical
planning: A single center usability study with 119 subsequent cases. Brain Spine 2023, 3, 102325. [CrossRef]
43. Jean, W.C.; Piper, K.; Felbaum, D.R.; Saez-Alegre, M. The Inaugural “Century” of Mixed Reality in Cranial Surgery: Virtual Reality
Rehearsal/Augmented Reality Guidance and Its Learning Curve in the First 100-Case, Single-Surgeon Series. Oper. Neurosurg.
2023. [CrossRef]
44. Gibby, W.; Cvetko, S.; Gibby, A.; Gibby, C.; Sorensen, K.; Andrews, E.G.; Maroon, J.; Parr, R. The application of augmented
reality-based navigation for accurate target acquisition of deep brain sites: Advances in neurosurgical guidance. J. Neurosurg.
2021, 137, 489–495. [CrossRef] [PubMed]
45. Li, C.; Lu, Z.; He, M.; Sui, J.; Lin, T.; Xie, K.; Sun, J.; Ni, X. Augmented reality-guided positioning system for radiotherapy patients.
J. Appl. Clin. Med. Phys. 2022, 23, e13516. [CrossRef]
46. Qi, Z.; Zhang, S.; Xu, X.; Chen, X. Augmented reality–assisted navigation for deep target acquisition: Is it reliable? J. Neurosurg.
2022, 138, 1169–1170. [PubMed]
47. Fick, T.; van Doormaal, J.A.M.; Hoving, E.W.; Regli, L.; van Doormaal, T.P.C. Holographic patient tracking after bed movement
for augmented reality neuronavigation using a head-mounted display. Acta Neurochir. 2021, 163, 879–884. [CrossRef]
48. Gibby, J.T.; Swenson, S.A.; Cvetko, S.; Rao, R.; Javan, R. Head-mounted display augmented reality to guide pedicle screw
placement utilizing computed tomography. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 525–535. [CrossRef] [PubMed]
49. Akulauskas, M.; Butkus, K.; Rutkūnas, V.; Blažauskas, T.; Jegelevičius, D. Implementation of Augmented Reality in Dental
Surgery Using HoloLens 2: An In Vitro Study and Accuracy Assessment. Appl. Sci. 2023, 13, 8315. [CrossRef]
50. Eom, S.; Sykes, D.; Rahimpour, S.; Gorlatova, M. NeuroLens: Augmented Reality-based Contextual Guidance through Surgical
Tool Tracking in Neurosurgery. In Proceedings of the 2022 IEEE International Symposium on Mixed and Augmented Reality
(ISMAR), Singapore, 17–21 October 2022; pp. 355–364. [CrossRef]
51. Gsaxner, C.; Li, J.; Pepe, A.; Schmalstieg, D.; Egger, J. Inside-Out Instrument Tracking for Surgical Navigation in Augmented
Reality. In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, Osaka, Japan, 8–10 December
2021; p. 4.
52. Martin-Gomez, A.; Li, H.; Song, T.; Yang, S.; Wang, G.; Ding, H.; Navab, N.; Zhao, Z.; Armand, M. STTAR: Surgical Tool Tracking
using Off-the-Shelf Augmented Reality Head-Mounted Displays. IEEE Trans. Vis. Comput. Graph. 2023,
arXiv:2208.08880. [CrossRef] [PubMed]
53. Schneider, M.; Kunz, C.; Pal’a, A.; Wirtz, C.R.; Mathis-Ullrich, F.; Hlaváč, M. Augmented reality–assisted ventriculostomy.
Neurosurg. Focus FOC 2021, 50, E16. [CrossRef]
54. Chien, J.-C.; Tsai, Y.-R.; Wu, C.-T.; Lee, J.-D. HoloLens-Based AR System with a Robust Point Set Registration Algorithm. Sensors
2019, 19, 3555. [CrossRef]
55. Dewitz, B.; Bibo, R.; Moazemi, S.; Kalkhoff, S.; Recker, S.; Liebrecht, A.; Lichtenberg, A.; Geiger, C.; Steinicke, F.; Aubin, H.
Real-time 3D scans of cardiac surgery using a single optical-see-through head-mounted display in a mobile setup. Front. Virtual
Real. 2022, 3, 949360. [CrossRef]
56. Haxthausen, F.v.; Chen, Y.; Ernst, F. Superimposing holograms on real world objects using HoloLens 2 and its depth camera. Curr.
Dir. Biomed. Eng. 2021, 7, 111–115. [CrossRef]
57. Li, W.; Fan, J.; Li, S.; Tian, Z.; Zheng, Z.; Ai, D.; Song, H.; Yang, J. Calibrating 3D Scanner in the Coordinate System of Optical
Tracker for Image-To-Patient Registration. Front. Neurorobot. 2021, 15, 636772. [CrossRef] [PubMed]
58. Liebmann, F.; Roner, S.; von Atzigen, M.; Scaramuzza, D.; Sutter, R.; Snedeker, J.; Farshad, M.; Fürnstahl, P. Pedicle screw
navigation using surface digitization on the Microsoft HoloLens. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1157–1165.
[CrossRef] [PubMed]
59. Meulstee, J.W.; Nijsink, J.; Schreurs, R.; Verhamme, L.M.; Xi, T.; Delye, H.H.; Borstlap, W.A.; Maal, T.J. Toward holographic-guided
surgery. Surg. Innov. 2019, 26, 86–94. [CrossRef]
60. Carl, B.; Bopp, M.; Saß, B.; Nimsky, C. Intraoperative computed tomography as reliable navigation registration device in 200
cranial procedures. Acta Neurochir. 2018, 160, 1681–1689. [CrossRef]
61. Saß, B.; Pojskic, M.; Bopp, M.; Nimsky, C.; Carl, B. Comparing fiducial-based and intraoperative computed tomography-based
registration for frameless stereotactic brain biopsy. Stereotact. Funct. Neurosurg. 2021, 99, 79–89. [CrossRef]
62. Burström, G.; Persson, O.; Edström, E.; Elmi-Terander, A. Augmented reality navigation in spine surgery: A systematic review.
Acta Neurochir. 2021, 163, 843–852. [CrossRef]
63. Elmi-Terander, A.; Nachabe, R.; Skulason, H.; Pedersen, K.; Söderman, M.; Racadio, J.; Babic, D.; Gerdhem, P.; Edström, E.
Feasibility and Accuracy of Thoracolumbar Minimally Invasive Pedicle Screw Placement with Augmented Reality Navigation
Technology. Spine 2018, 43, 1018–1023. [CrossRef]
64. Skyrman, S.; Lai, M.; Edström, E.; Burström, G.; Förander, P.; Homan, R.; Kor, F.; Holthuizen, R.; Hendriks, B.H.W.; Persson, O.;
et al. Augmented reality navigation for cranial biopsy and external ventricular drain insertion. Neurosurg. Focus 2021, 51, E7.
[CrossRef]
65. Wald, L.L.; McDaniel, P.C.; Witzel, T.; Stockmann, J.P.; Cooley, C.Z. Low-cost and portable MRI. J. Magn. Reson. Imaging 2020, 52,
686–696. [CrossRef]
66. Cooley, C.Z.; McDaniel, P.C.; Stockmann, J.P.; Srinivas, S.A.; Cauley, S.F.; Śliwiak, M.; Sappo, C.R.; Vaughn, C.F.; Guerin, B.; Rosen,
M.S. A portable scanner for magnetic resonance imaging of the brain. Nat. Biomed. Eng. 2021, 5, 229–239. [CrossRef] [PubMed]
67. Basser, P. Detection of stroke by portable, low-field MRI: A milestone in medical imaging. Sci. Adv. 2022, 8, eabp9307. [CrossRef]
[PubMed]
68. Zhang, W.J.; Ouyang, P.R.; Sun, Z.H. A novel hybridization design principle for intelligent mechatronics systems. Abstr. Int. Conf.
Adv. Mechatron. Evol. Fusion IT Mechatron. ICAM 2010, 5, 67–74. [CrossRef]
69. Schlaier, J.; Warnat, J.; Brawanski, A. Registration accuracy and practicability of laser-directed surface matching. Comput. Aided
Surg. 2002, 7, 284–290. [CrossRef] [PubMed]
70. Paraskevopoulos, D.; Unterberg, A.; Metzner, R.; Dreyhaupt, J.; Eggers, G.; Wirtz, C.R. Comparative study of application accuracy
of two frameless neuronavigation systems: Experimental error assessment quantifying registration methods and clinically
influencing factors. Neurosurg. Rev. 2011, 34, 217–228. [CrossRef]
71. Krombach, G.; Schmitz-Rode, T.; Wein, B.; Meyer, J.; Wildberger, J.; Brabant, K.; Günther, R. Potential of a new laser target system
for percutaneous CT-guided nerve blocks. Neuroradiology 2000, 42, 838–841. [CrossRef]
72. Moser, C.; Becker, J.; Deli, M.; Busch, M.; Boehme, M.; Groenemeyer, D.H. A novel Laser Navigation System reduces radiation
exposure and improves accuracy and workflow of CT-guided spinal interventions: A prospective, randomized, controlled,
clinical trial in comparison to conventional freehand puncture. Eur. J. Radiol. 2013, 82, 627–632. [CrossRef]
73. Zhang, G.; Liu, X.; Wang, L.; Zhu, J.; Yu, J. Development and feasibility evaluation of an AR-assisted radiotherapy positioning
system. Front. Oncol. 2022, 12, 921607. [CrossRef]
74. Poggi, S.; Pallotta, S.; Russo, S.; Gallina, P.; Torresin, A.; Bucciolini, M. Neuronavigation accuracy dependence on CT and MR
imaging parameters: A phantom-based study. Phys. Med. Biol. 2003, 48, 2199. [CrossRef]
75. Frantz, T.; Jansen, B.; Duerinck, J.; Vandemeulebroucke, J. Augmenting Microsoft’s HoloLens with vuforia tracking for neuronavi-
gation. Healthc. Technol. Lett. 2018, 5, 221–225. [CrossRef]
76. Furuse, M.; Ikeda, N.; Kawabata, S.; Park, Y.; Takeuchi, K.; Fukumura, M.; Tsuji, Y.; Kimura, S.; Kanemitsu, T.; Yagi, R. Influence
of surgical position and registration methods on clinical accuracy of navigation systems in brain tumor surgery. Sci. Rep. 2023, 13,
2644. [CrossRef] [PubMed]
77. Dho, Y.-S.; Kim, Y.J.; Kim, K.G.; Hwang, S.H.; Kim, K.H.; Kim, J.W.; Kim, Y.H.; Choi, S.H.; Park, C.-K. Positional effect of
preoperative neuronavigational magnetic resonance image on accuracy of posterior fossa lesion localization. J. Neurosurg. 2019,
133, 546–555. [CrossRef]
78. Bi, Z.M.; Lin, Y.; Zhang, W.J. The general architecture of adaptive robotic systems for manufacturing applications. Robot.
Comput.-Integr. Manuf. 2010, 26, 461–470. [CrossRef]
79. Ouyang, P.R.; Zhang, W.J.; Gupta, M.M. An adaptive switching learning control method for trajectory tracking of robot
manipulators. Mechatronics 2006, 16, 51–61. [CrossRef]
80. Wu, J.Q.; Yuan, C.W.; Yin, R.X.; Sun, W.; Zhang, W.J. A Novel Self-Docking and Undocking Approach for Self-Changeable
Robots. In Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference
(ITNEC), Chongqing, China, 12–14 June 2020; pp. 689–693.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
