Search Results (355)

Search Parameters:
Keywords = head-mounted display

18 pages, 11425 KiB  
Article
SmartVR Pointer: Using Smartphones and Gaze Orientation for Selection and Navigation in Virtual Reality
by Brianna McDonald, Qingyu Zhang, Aiur Nanzatov, Lourdes Peña-Castillo and Oscar Meruvia-Pastor
Sensors 2024, 24(16), 5168; https://doi.org/10.3390/s24165168 - 10 Aug 2024
Viewed by 229
Abstract
Some of the barriers preventing virtual reality (VR) from being widely adopted are the cost and unfamiliarity of VR systems. Here, we propose that in many cases, the specialized controllers shipped with most VR head-mounted displays can be replaced by a regular smartphone, cutting the cost of the system, and allowing users to interact in VR using a device they are already familiar with. To achieve this, we developed SmartVR Pointer, an approach that uses smartphones to replace the specialized controllers for two essential operations in VR: selection and navigation by teleporting. In SmartVR Pointer, a camera mounted on the head-mounted display (HMD) is tilted downwards so that it points to where the user will naturally be holding their phone in front of them. SmartVR Pointer supports three selection modalities: tracker based, gaze based, and combined/hybrid. In the tracker-based SmartVR Pointer selection, we use image-based tracking to track a QR code displayed on the phone screen and then map the phone’s position to a pointer shown within the field of view of the camera in the virtual environment. In the gaze-based selection modality, the user controls the pointer using their gaze and taps on the phone for selection. The combined technique is a hybrid between gaze-based interaction in VR and tracker-based Augmented Reality. It allows the user to control a VR pointer that looks and behaves like a mouse pointer by moving their smartphone to select objects within the virtual environment, and to interact with the selected objects using the smartphone’s touch screen. The touchscreen is used for selection and dragging. The SmartVR Pointer is simple and requires no calibration and no complex hardware assembly or disassembly. We demonstrate successful interactive applications of SmartVR Pointer in a VR environment with a demo where the user navigates in the virtual environment using teleportation points on the floor and then solves a Tetris-style key-and-lock challenge. Full article
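As a rough illustration of the tracker-based modality described in the abstract, the sketch below maps a detected QR-code centre from camera-image coordinates onto a head-fixed virtual canvas and builds the selection ray. It is a minimal Python sketch under assumed conventions (normalized image coordinates, a canvas roughly 1.5 m in front of the viewer); names such as marker_to_pointer_uv and canvas_size_m are illustrative and not taken from the paper.

```python
import numpy as np

def marker_to_pointer_uv(marker_center_px, image_size_px):
    """Normalize the tracked QR-code centre from pixel coordinates to [0, 1] UV."""
    u = marker_center_px[0] / image_size_px[0]
    v = marker_center_px[1] / image_size_px[1]
    return np.clip(np.array([u, v]), 0.0, 1.0)

def pointer_ray(uv, cam_pos, cam_forward, cam_right, cam_up,
                canvas_dist_m=1.5, canvas_size_m=(1.2, 0.8)):
    """Cast a ray from the VR camera through the pointer position on a
    head-fixed virtual canvas placed canvas_dist_m in front of the viewer."""
    # Map UV (0..1) to offsets on the canvas, centred on the view axis.
    dx = (uv[0] - 0.5) * canvas_size_m[0]
    dy = (0.5 - uv[1]) * canvas_size_m[1]          # image y grows downward
    target = (cam_pos + cam_forward * canvas_dist_m
              + cam_right * dx + cam_up * dy)
    direction = target - cam_pos
    return cam_pos, direction / np.linalg.norm(direction)

# Example: marker detected near the right edge of a 640x480 camera frame.
origin, direction = pointer_ray(
    marker_to_pointer_uv((600, 200), (640, 480)),
    cam_pos=np.zeros(3),
    cam_forward=np.array([0.0, 0.0, 1.0]),
    cam_right=np.array([1.0, 0.0, 0.0]),
    cam_up=np.array([0.0, 1.0, 0.0]),
)
print(origin, direction)  # ray used to pick the first object it intersects
```

In a Unity-style implementation the resulting ray would be handed to the engine's physics ray cast to highlight and select the hit object; the sketch only shows the geometric mapping.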
Figures:
Figure 1: Illustration of SmartVR Pointer applications in a game-style scenario. On the left, the navigation task within the Viking Village: footprints are shown in bright green when they are available for selection and turn purple to indicate they will be used for teleporting there upon clicking. On the right, the Tetris-style task: the Tetris shapes are shown in bright colors, and the rotation regions are shown in green with black circles. The user drags and drops shapes into the appropriate location and orientation, which is highlighted with the darker shade.
Figure 2: Setup for SmartVR Pointer with the camera mounted on top of the HTC Vive Cosmos Elite HMD.
Figure 3: Comparison of the baseline condition (left) with the SmartVR Pointer (right), with the camera mounted on top of the HTC Vive Pro headset.
Figure 4: Setup for SmartVR Pointer with the camera mounted at the bottom of the Meta Quest 2 HMD.
Figure 5: Using the Vuforia Engine to display a red cube on top of the tracked QR code image shown on the smartphone.
Figure 6: Teleporting application enabled by this technique: the user selects footprint-shaped teleport points to navigate around the VR environment by placing the pointer on one of the footprints.
Figure 7: Illustration of the ray-casting mechanism. (a) The view the player sees while wearing the HMD, with a pointer highlighting the Tetris block in the bottom left for selection. (b) The developer view, showing a black ray cast from the player towards the pointer and through the Tetris block.
Figure 8: Screenshot of the Tetris-like task shown in Unity from a third-person perspective, illustrating the semi-transparent VR canvas (the rectangle on the right side, darkened here for emphasis). From the user's perspective (the white camera in the middle), the panel appears fixed in front of the viewer.
Figure 9: Teleporting task completion times. (Top) Distribution of completion time per condition; the horizontal line inside each box indicates the median completion time, and the box height indicates the interquartile range (IQR). (Bottom) Pairwise mean completion-time difference between conditions; the gray dotted line marks a difference of zero between the mean levels of the compared conditions.
Figure 10: Tetris-like task completion times. (Top) Distribution of completion time per condition; the horizontal line inside each box indicates the median, and the box height indicates the IQR. (Bottom) Pairwise mean completion-time difference between conditions; the gray dotted line marks a difference of zero.
Figure 11: Click success rate. (Top) Distribution of click success rate per condition; the horizontal line inside each box indicates the median, and the box height indicates the IQR. (Bottom) Pairwise mean click-success-rate difference between conditions; the gray dotted line marks a difference of zero.
Figure 12: Participants' scores for perceived ease of use per condition. The horizontal line inside each box indicates the median, and the box height indicates the IQR. For the ranking, 1 is "least preferred" and 7 is "most preferred".
Figure 13: Participants' scores for condition preference. The horizontal line inside each box indicates the median, and the box height indicates the IQR. For the ranking, 1 is "least preferred" and 7 is "most preferred".
Figure 14: Participants' agreement with the statement that one of the three SmartVR Pointer modes was easier to use for the teleporting task than the VR controller. The horizontal line inside each box indicates the median, and the box height indicates the IQR. For the rating, 1 is "do not agree at all" and 7 is "completely agree".
Figure 15: Participants' agreement with the statement that one of the three SmartVR Pointer modes was easier to use for the Tetris-like task than the VR controller. The horizontal line inside each box indicates the median, and the box height indicates the IQR. For the rating, 1 is "do not agree at all" and 7 is "completely agree".
20 pages, 4390 KiB  
Article
Explainable Artificial Intelligence Approach for Improving Head-Mounted Fault Display Systems
by Abdelaziz Bouzidi, Lala Rajaoarisoa and Luka Claeys
Future Internet 2024, 16(8), 282; https://doi.org/10.3390/fi16080282 - 6 Aug 2024
Viewed by 343
Abstract
To fully harness the potential of wind turbine systems and meet high power demands while maintaining top-notch power quality, wind farm managers run their systems 24 h a day/7 days a week. However, due to the system’s large size and the complex interactions of its many components operating at high power, frequent critical failures occur. As a result, it has become increasingly important to implement predictive maintenance to ensure the continued performance of these systems. This paper introduces an innovative approach to developing a head-mounted fault display system that integrates predictive capabilities, including deep learning long short-term memory neural networks model integration, with anomaly explanations for efficient predictive maintenance tasks. Then, a 3D virtual model, created from sampled and recorded data coupled with the deep neural diagnoser model, is designed. To generate a transparent and understandable explanation of the anomaly, we propose a novel methodology to identify a possible subset of characteristic variables for accurately describing the behavior of a group of components. Depending on the presence and risk level of an anomaly, the parameter concerned is displayed in a piece of specific information. The system then provides human operators with quick, accurate insights into anomalies and their potential causes, enabling them to take appropriate action. By applying this methodology to a wind farm dataset provided by Energias De Portugal, we aim to support maintenance managers in making informed decisions about inspection, replacement, and repair tasks. Full article
(This article belongs to the Special Issue Artificial Intelligence-Enabled Internet of Things (IoT))
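The abstract's anomaly-explanation step, identifying a subset of characteristic variables that best describes a component group's behaviour, could in its simplest form be a per-feature deviation ranking over the diagnoser's residuals. The Python sketch below shows that idea only; the z-score threshold, window length, and SCADA-style feature names are assumptions, not the authors' method or data.

```python
import numpy as np

def explain_anomaly(residuals, feature_names, top_k=3, z_thresh=3.0):
    """Rank features by how far their prediction residuals deviate from
    normal behaviour, keeping only those above a z-score threshold."""
    mu = residuals.mean(axis=0)                 # per-feature mean residual over the window
    sigma = residuals.std(axis=0) + 1e-9
    z = np.abs((residuals[-1] - mu) / sigma)    # deviation of the latest sample
    ranked = np.argsort(z)[::-1]
    return [(feature_names[i], float(z[i])) for i in ranked[:top_k] if z[i] > z_thresh]

# Illustrative use: residuals from a diagnoser over a sliding window,
# with hypothetical SCADA-style feature names.
rng = np.random.default_rng(0)
window = rng.normal(0, 1, size=(200, 4))
window[-1, 2] = 9.0                             # simulate a generator-bearing temperature drift
features = ["gearbox_oil_temp", "rotor_speed", "gen_bearing_temp", "transformer_temp"]
print(explain_anomaly(window, features))        # e.g. [("gen_bearing_temp", ~9.0)]
```

The returned feature names and deviation scores are the kind of information a head-mounted display could surface next to the affected 3D component, in the spirit of the interface described above.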
Figures:
Figure 1: Real-time head-mounted fault system architecture [27].
Figure 2: Example of the human–machine interface view on both the computer and the headset systems.
Figure 3: Basic diagram of the long short-term memory (LSTM) neural network [35].
Figure 4: Diagnoser model architecture.
Figure 5: Standard simplified diagram of an LSTM cell.
Figure 6: Example of the relationship between a group of features and a group of components.
Figure 7: Wind turbine systems with their different components.
Figure 8: Detected anomalies before the first reported malfunction of turbine T07 (generator bearing).
Figure 9: Detected anomalies before the third reported malfunction of turbine T07 (hydraulic group).
Figure 10: Detection and explanation of a significant rise in the generator bearings' temperature.
Figure 11: Detection and explanation of a rise in the transformer's temperature.
Figure 12: Correlations between each feature and instances of system malfunction. (a) For each component. (b) Zoomed display for the generator bearing component.
Figure 13: Wind farm monitoring clients' views: (a) HMFD interface, (b) PC interface.
Figure 14: (a) Detected warning view in the generator component. (b) Warning anomaly view in the wind farm. (c) Critical anomaly view in the wind farm.
14 pages, 1740 KiB  
Article
A Vestibular Training to Reduce Dizziness
by Heiko Hecht, Carla Aulenbacher, Laurin Helmbold, Henrik Eichhorn and Christoph von Castell
Appl. Sci. 2024, 14(16), 6870; https://doi.org/10.3390/app14166870 - 6 Aug 2024
Viewed by 252
Abstract
Many situations can induce dizziness in healthy participants, be it when riding a carrousel or when making head movements while wearing a head-mounted display. Everybody—maybe with the exception of vestibular loss patients—is prone to dizziness, albeit to widely varying degrees. Some people get dizzy after a single rotation around the body axis, while others can perform multiple pirouettes without the slightest symptoms. We have developed a form of vestibular habituation training with the purpose of reducing proneness to dizziness. The training consists of a short (8 min) exercise routine which is moderate enough that it can easily be integrated into a daily routine. Twenty volunteers performed the training over the course of two weeks. We measured subjective dizziness before and after each daily session. We also performed several vestibular tests before (pre-test) and after (post-test) the two-week training period. They included exposure to a rotating and pitching visual environment while standing upright, as well as a physical rotation that was abruptly stopped. The results show that the dizziness induced during a given daily session decreased over the course of the two weeks. The dizziness induced by the rotating visual stimulus was significantly less after completion of the training period compared with the initial pre-test. Also, postural stability and post-rotatory spinning sensations had improved when comparing the post-test with the pre-test. We conclude that a short regular vestibular training can significantly improve proneness to dizziness. Full article
Figures:
Figure 1: Experimental set-up, showing the visual stimulation task on the right and the vestibular stimulation task on the left.
Figure 2: Timeline of the experiment.
Figure 3: Mean path length of the center of pressure for visual yaw rotation with and without pitch, for the pre-test and post-test. Error bars indicate ±1 SEM of the 20 participants in the condition without pitch (black line) and the 18 participants in the condition with pitch (gray line).
Figure 4: Mean FMS-D scores obtained before, during, and after visual yaw rotation, as measured during the pre- and post-test. The left panel corresponds to pure visual yaw rotation, the right panel to the added pitch oscillation. Dizziness ratings were obtained just before (0 s), in the middle of (30 s), and right after 60 s of visual stimulation. Error bars indicate ±1 SEM of the 20 participants in the condition without pitch (left panel) and the 19 participants in the condition with pitch (right panel).
Figure 5: Mean error scores of the one-legged and Romberg stances obtained with the balance error scoring system (BESS) during the pre-test and post-test. Error bars indicate ±1 SEM of the 20 participants in each condition.
Figure 6: Mean FMS-D ratings for vestibular yaw rotation before (0 s) and after (30 s) of exposure (left panel), and mean duration of post-rotatory dizziness (right panel), at pre-test and post-test. Error bars indicate ±1 SEM of the 20 participants in each condition.
Figure 7: Individual FMS-D difference scores before and after training sessions, plotted for the three stages of the 14-day training. Points indicate individual difference scores; the line connects the averages. The early stage comprises the first two training sessions, the middle stage a varying number of intermediate sessions, and the late stage the last two training sessions. Error bars indicate ±1 SEM of the 20 participants.
16 pages, 7880 KiB  
Communication
Multimodal Drumming Education Tool in Mixed Reality
by James Pinkl, Julián Villegas and Michael Cohen
Multimodal Technol. Interact. 2024, 8(8), 70; https://doi.org/10.3390/mti8080070 - 5 Aug 2024
Viewed by 643
Abstract
First-person VR- and MR-based Action Observation research has thus far yielded both positive and negative findings in studies observing such tools’ potential to teach motor skills. Teaching drumming, particularly polyrhythms, is a challenging motor skill to learn and has remained largely unexplored in the field of Action Observation. In this contribution, a multimodal tool designed to teach rudimental and polyrhythmic drumming was developed and tested in a 20-subject study. The tool presented subjects with a first-person MR perspective via a head-mounted display to provide users with visual exposure to both virtual content and their physical surroundings simultaneously. When compared against a control group practicing via video demonstrations, results showed increased rhythmic accuracy across four exercises. Specifically, a difference of 239 ms (z-ratio = 3.520, p < 0.001) was found between the timing errors of subjects who practiced with our multimodal mixed reality development compared to subjects who practiced with video, demonstrating the potential of such affordances. This research contributes to ongoing work in the fields of Action Observation and Mixed Reality, providing evidence that Action Observation techniques can be an effective practice method for drumming. Full article
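The study's accuracy measure is the timing error of performed hits relative to the target rhythm. The sketch below shows one plausible way to compute absolute timing errors by matching each hit to the nearest reference onset; the 3:2 polyrhythm grid and hit times are made up for illustration and are not the authors' scoring code.

```python
import numpy as np

def absolute_timing_errors(hit_times_s, reference_onsets_s):
    """Match each performed hit to the nearest reference onset and return
    the absolute timing error of each hit in milliseconds."""
    hits = np.asarray(hit_times_s)[:, None]
    refs = np.asarray(reference_onsets_s)[None, :]
    return np.abs(hits - refs).min(axis=1) * 1000.0

# Illustrative 3:2 polyrhythm over one 2-second cycle: three evenly spaced
# onsets against two, plus the downbeat of the next cycle.
cycle = 2.0
reference = np.unique(np.concatenate([np.linspace(0, cycle, 4),
                                      np.linspace(0, cycle, 3)]))
performed = [0.02, 0.70, 0.98, 1.31, 1.95]      # hypothetical hit times (s)
errors_ms = absolute_timing_errors(performed, reference)
print(errors_ms.round(1), "mean:", errors_ms.mean().round(1), "ms")
```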
Figures:
Figure 1: The four rhythmic exercises used in the experiment. (a) Western notation for eighth-note doubles and paradiddles (left and right panels, respectively), two examples of rudiments. (b) Western notation for 3:2 and 3:4 polyrhythms (left and right panels, respectively).
Figure 2: System schematic.
Figure 3: Users in the experimental group practiced while simultaneously grasping both the drumstick and the Meta Quest controller.
Figure 4: Procedure for subjective testing.
Figure 5: The first phase of all sections of the pilot study was a third-person (overhead view) video demonstration of the corresponding exercise.
Figure 6: The two practice modalities: Video and MR. (a) Subjects in the Video group used a first-person video demonstration of the corresponding exercise for each rhythm's practice session. (b) Subjects in the MR group used a demonstration based on virtual objects and programmed animations for each rhythm's practice session.
Figure 7: Difference in the maximum number of consecutive correct hits between trial 2 and trial 1 for each group.
Figure 8: Effect of Block and Trial on the absolute timing error.
Figure 9: Effect of Block and Group on the absolute timing error.
Figure 10: Effect of Trial and Group on the absolute timing error.
9 pages, 2319 KiB  
Article
Augmented Reality Improved Knowledge and Efficiency of Root Canal Anatomy Learning: A Comparative Study
by Fahd Alsalleeh, Katsushi Okazaki, Sarah Alkahtany, Fatemah Alrwais, Mohammad Bendahmash and Ra’ed Al Sadhan
Appl. Sci. 2024, 14(15), 6813; https://doi.org/10.3390/app14156813 - 4 Aug 2024
Viewed by 672
Abstract
Teaching root canal anatomy has traditionally been reliant on static methods, but recent studies have explored the potential of advanced technologies like augmented reality (AR) to enhance learning and address the limitations of traditional training methods, such as the requirement for spatial imagination and the inability to simulate clinical scenarios fully. This study evaluated the potential of AR as a tool for teaching root canal anatomy in preclinical training in endodontics for predoctoral dental students. Six cone beam computed tomography (CBCT) images of teeth were selected. Board-certified endodontist and radiologist recorded the tooth type and classification of root canals. Then, STereoLithography (STL) files of the same images were imported into a virtual reality (VR) application and viewed through a VR head-mounted display. Forty-three third-year dental students were asked questions about root canal anatomy based on the CBCT images, and then, after the AR model. The time to respond to each question and feedback was recorded. Student responses were paired, and the difference between CBCT and AR scores was examined using a paired-sample t-test and set to p = 0.05. Students demonstrated a significant improvement in their ability to answer questions about root canal anatomy after utilizing the AR model (p < 0.05). Female participants demonstrated significantly higher AR scores compared to male participants. However, gender did not significantly influence overall test scores. Furthermore, students required significantly less time to answer questions after using the AR model (M = 4.09, SD = 3.55) compared to the CBCT method (M = 15.21, SD = 8.01) (p < 0.05). This indicates that AR may improve learning efficiency alongside comprehension. In a positive feedback survey, 93% of students reported that the AR simulation led to a better understanding of root canal anatomy than traditional CBCT interpretation. While this study highlights the potential of AR in learning root canal anatomy, further research is needed to explore its long-term impact and efficacy in clinical settings. Full article
(This article belongs to the Special Issue Virtual/Augmented Reality and Its Applications)
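The comparison described above is a paired-sample t-test on each student's CBCT and AR scores at p = 0.05. A minimal sketch of that analysis with SciPy follows; the score vectors are synthetic and only show the shape of the computation, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-student scores (the same 43 students answer under both conditions).
rng = np.random.default_rng(1)
cbct_scores = rng.integers(1, 5, size=43).astype(float)
ar_scores = np.clip(cbct_scores + rng.normal(1.0, 0.8, size=43), 0, 6)

t_stat, p_value = stats.ttest_rel(ar_scores, cbct_scores)  # paired-sample t-test
print(f"mean difference = {np.mean(ar_scores - cbct_scores):.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: AR scores differ significantly from CBCT scores.")
```

The same pairing logic applies to the response-time comparison reported in the abstract, with per-student times substituted for scores.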
Figures:
Figure 1: Generation and optimization of a virtual 3D model from DICOM data for AR visualization. A 3D virtual model was created from CBCT scans using the open-source tools 3D Slicer and Blender: 3D Slicer analyzed internal structures and generated the 3D data, and Blender refined the model's quality. The model was exported as segmented STL files into the Holoeyes XR application (version 2.6) for visualization and interaction, and a Meta Quest 2 VR headset enabled immersive exploration of the model. Key software: 3D Slicer (data generation), Blender (optimization), Holoeyes XR application (visualization), Meta Quest 2 headset (interaction).
Figure 2: Screen captures illustrating the virtual learning environment (a), where students collaborate with a virtual instructor to explore root canal system complexities (b). Student performance is assessed through timed tasks and subsequent knowledge-based questions (c,d).
Figure 3: Mean scores of the AR and CBCT assessments of the six teeth studied. The difference was 3.95 units (statistically significant, p < 0.05).
Figure 4: Examination scores for females and males in the CBCT and AR assessments.
22 pages, 3408 KiB  
Article
Microservices-Based Resource Provisioning for Multi-User Cloud VR in Edge Networks
by Ho-Jin Choi, Nobuyoshi Komuro and Won-Suk Kim
Electronics 2024, 13(15), 3077; https://doi.org/10.3390/electronics13153077 - 3 Aug 2024
Viewed by 335
Abstract
Cloud virtual reality (VR) is attracting attention in terms of its lightweight head-mounted display (HMD), providing telepresence and mobility. However, it is still in the research stages due to motion-to-photon (MTP) latency, the need for high-speed network infrastructure, and large-scale traffic processing problems. These problems are expected to be partially solved through edge computing, but the limited computing resource capacity of the infrastructure presents new challenges. In particular, in order to efficiently provide multi-user content such as remote meetings on edge devices, resource provisioning is needed that considers the application’s traffic patterns and computing resource requirements at the same time. In this study, we present a microservice architecture (MSA)-based application to provide multi-user cloud VR in edge computing and propose a scheme for planning an efficient service deployment considering the characteristics of each service. The proposed scheme not only guarantees the MTP latency threshold for all users but also aims to reduce networking and computing resource waste. The proposed scheme was evaluated by simulating various scenarios, and the results were compared to several studies. It was confirmed that the proposed scheme represents better performance metrics than the comparison schemes in most cases from the perspectives of networking, computing, and MTP latency. Full article
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)
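The provisioning goal stated in the abstract, keeping every user within the motion-to-photon (MTP) budget while limiting wasted networking and computing resources, can be illustrated with a simple greedy placement that checks both constraints per service. The sketch below is not the paper's scheme; node capacities, per-user delays, processing times, and the 20 ms budget are assumed values.

```python
def place_services(services, nodes, user_delay_ms, mtp_budget_ms=20.0):
    """Greedily assign each microservice to an edge node that satisfies the
    MTP latency budget for all of its users and has the most spare capacity."""
    placement = {}
    for svc in services:
        candidates = []
        for node, spec in nodes.items():
            if spec["free_cpu"] < svc["cpu"]:
                continue  # not enough computing resources on this node
            worst = max(user_delay_ms[(u, node)] for u in svc["users"])
            if worst + svc["proc_ms"] <= mtp_budget_ms:
                candidates.append((spec["free_cpu"], node))
        if not candidates:
            raise RuntimeError(f"no feasible node for {svc['name']}")
        _, best = max(candidates)           # prefer the node with the most headroom
        nodes[best]["free_cpu"] -= svc["cpu"]
        placement[svc["name"]] = best
    return placement

# Illustrative inputs: two edge nodes, two users, two services of a VR session.
nodes = {"edge-A": {"free_cpu": 8}, "edge-B": {"free_cpu": 4}}
user_delay_ms = {("u1", "edge-A"): 4, ("u1", "edge-B"): 9,
                 ("u2", "edge-A"): 7, ("u2", "edge-B"): 3}
services = [{"name": "render", "cpu": 6, "proc_ms": 10, "users": ["u1", "u2"]},
            {"name": "state-sync", "cpu": 2, "proc_ms": 2, "users": ["u1", "u2"]}]
print(place_services(services, nodes, user_delay_ms))
```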
Figures:
Figure 1: Entire process from motion input to playback in cloud VR.
Figure 2: Conceptual diagram of MSA-based cloud VR applications at the network edge.
Figure 3: Configuration and operation of the MSA in multi-user cloud VR applications.
Figure 4: Changes in key network metrics as the number of users per application increases. (a) Traffic load per user; (b) standard deviation of computing resource usage per node; (c) network distance per user.
Figure 5: Changes in key network metrics based on the resource usage types of applications. (a) Traffic load per user; (b) standard deviation of computing resource usage per node; (c) network distance per user.
Figure 6: Changes in key network metrics as computing resource capacity increases. (a) Traffic load per user; (b) standard deviation of computing resource usage per node; (c) network distance per user.
Figure 7: Changes in key network metrics based on client locality. (a) Traffic load per user; (b) standard deviation of computing resource usage per node; (c) network distance per user.
Figure 8: Changes in key network metrics based on computing resource locality. (a) Traffic load per user; (b) standard deviation of computing resource usage per node; (c) network distance per user.
12 pages, 4881 KiB  
Article
Virtual Reality Head-Mounted Display (HMD) and Preoperative Patient-Specific Simulation: Impact on Decision-Making in Pediatric Urology: Preliminary Data
by Giulia Lanfranchi, Sara Costanzo, Giorgio Giuseppe Orlando Selvaggio, Cristina Gallotta, Paolo Milani, Francesco Rizzetto, Alessia Musitelli, Maurizio Vertemati, Tommaso Santaniello, Alessandro Campari, Irene Paraboschi, Anna Camporesi, Michela Marinaro, Valeria Calcaterra, Ugo Maria Pierucci and Gloria Pelizzo
Diagnostics 2024, 14(15), 1647; https://doi.org/10.3390/diagnostics14151647 - 30 Jul 2024
Viewed by 375
Abstract
Aim of the Study: To assess how virtual reality (VR) patient-specific simulations can support decision-making processes and improve care in pediatric urology, ultimately improving patient outcomes. Patients and Methods: Children diagnosed with urological conditions necessitating complex procedures were retrospectively reviewed and enrolled in the study. Patient-specific VR simulations were developed with medical imaging specialists and VR technology experts. Routine CT images were utilized to create a VR environment using advanced software platforms. The accuracy and fidelity of the VR simulations was validated through a multi-step process. This involved comparing the virtual anatomical models to the original medical imaging data and conducting feedback sessions with pediatric urology experts to assess VR simulations’ realism and clinical relevance. Results: A total of six pediatric patients were reviewed. The median age of the participants was 5.5 years (IQR: 3.5–8.5 years), with an equal distribution of males and females across both groups. A minimally invasive laparoscopic approach was performed for adrenal lesions (n = 3), Wilms’ tumor (n = 1), bilateral nephroblastomatosis (n = 1), and abdominal trauma in complex vascular and renal malformation (ptotic and hypoplastic kidney) (n = 1). Key benefits included enhanced visualization of the segmental arteries and the deep vascularization of the kidney and adrenal glands in all cases. The high depth perception and precision in the orientation of the arteries and veins to the parenchyma changed the intraoperative decision-making process in five patients. Preoperative VR patient-specific simulation did not offer accuracy in studying the pelvic and calyceal anatomy. Conclusions: VR patient-specific simulations represent an empowering tool in pediatric urology. By leveraging the immersive capabilities of VR technology, preoperative planning and intraoperative navigation can greatly impact surgical decision-making. As we continue to advance in medical simulation, VR holds promise in educational programs to include even surgical treatment of more complex urogenital malformations. Full article
(This article belongs to the Special Issue Diagnosis and Prognosis of Urological Diseases)
Figures:
Figure 1: Virtual reality (VR) (a) and coronal computed tomography (CT) (b) images of a 4-year-old girl with a left adrenal neuroblastoma (3.5 × 2.5 × 4 cm in diameter; * in a, arrow in b) in contact with the upper third of the left kidney and the spleen; the images show the adrenal vein and the impressions of the adrenal mass on the surrounding organs.
Figure 2: Virtual reality (VR) (a) and coronal computed tomography (CT) (b,c) images of a 7-year-old boy with a right adrenal myelolipoma (* in a, arrow in b) measuring 38 × 25 × 25 mm (AP × LL × CC), with heterogeneous density and fatty components. The images depict vascular details: bilateral accessory renal arteries, which on the right course antero-superiorly to the main renal artery towards the upper pole, while on the left they run postero-superiorly, parallel to the main renal artery.
Figure 3: Virtual reality (VR) (a) and preoperative computed tomography (CT) (b) images of a 3-year-old girl with bilateral nephroblastomatosis (*). Both kidneys are in place and enlarged (right kidney 8 cm, left kidney 9 cm), with the parenchymal structure altered by multiple solid lesions; the largest are located at the upper pole of the right kidney (rounded, approximately 4 cm in diameter) and at the lower pole of the left kidney (approximately 4.5 cm in diameter). The lesions all have a nearly rounded morphology and defined margins, albeit without a clear capsular structure.
Figure 4: Virtual reality (VR) (a) and preoperative computed tomography (CT) (b) images of a 3-month-old boy with a right-side Wilms' tumor (*). The figures show a solid lesion in the middle-lower third of the right kidney (30 × 38 × 30 mm; LL × AP × CC), heterogeneously hypodense compared with the normal renal parenchyma, which posterior-inferior-laterally reaches the renal profile and causes rotation of the renal axis, with a posteriorized pelvis. The mass reaches the hilar region. A single renal artery is recognizable bilaterally. Two right renal venous branches are recognizable up to near their drainage into the inferior vena cava, the cranial one with a larger caliber. The left kidney is normal.
Figure 5: Virtual reality (VR) (a) and coronal computed tomography (CT) (b–d) images of a 10-year-old girl with a symptomatic right ptotic dysmorphic kidney. The figures show a ptotic right kidney, located inferior-medially and rotated, with normal parenchymal thickness. No urinary tract dilatation is noted. Three right renal arteries are present (arrows): one originates from the proximal aorta, one from the distal aorta, and one from the proximal common iliac artery; on the left, two contiguous renal arteries are depicted, and the left renal vein runs posterior to the aorta.
12 pages, 4028 KiB  
Article
The Perceptions of University Students as to the Benefits and Barriers to Using Immersive Virtual Reality in Learning to Work with Individuals with Developmental Disabilities
by Nicole Luke, Avery Keith, Nicole Bajcar, Brittney Sureshkumar and Oluwakemi Adebayo
Educ. Sci. 2024, 14(8), 812; https://doi.org/10.3390/educsci14080812 - 25 Jul 2024
Viewed by 339
Abstract
The aim of this study is to understand the experiences of university students who took part in a pilot program for an experiential learning opportunity in immersive virtual reality (iVR). Experiential learning opportunities are essential for students who will be expected to apply their knowledge in a professional setting. Head-mounted display devices were distributed to university students and individuals with developmental disabilities at a partnering community organization. The university students met community partners in a virtual world and interacted with them to learn about their partners’ self-selected goals related to communication and job skills. A mixed methods analysis of survey responses and journal entries was conducted. Students reported an overall positive experience with iVR and indicated an interest in pursuing future opportunities to include iVR in their learning. Full article
Figures:
Figure 1: A wireless VR headset.
Figure 2: A scene from the ENGAGE platform with avatars.
Figure 3: Participants' survey results for liking iVR.
Figure 4: Tone variable analysis for participants' journal entries. Note: tone scores above 50 are considered positive.
Figure 5: Emotion word use analysis for participants' journal entries.
Figure 6: Affect word use analysis for participants' journal entries.
Figure 7: Time word use analysis for participants' journal entries.
11 pages, 5434 KiB  
Article
An Innovative Device Based on Human-Machine Interface (HMI) for Powered Wheelchair Control for Neurodegenerative Disease: A Proof-of-Concept
by Arrigo Palumbo, Nicola Ielpo, Barbara Calabrese, Remo Garropoli, Vera Gramigna, Antonio Ammendolia and Nicola Marotta
Sensors 2024, 24(15), 4774; https://doi.org/10.3390/s24154774 - 23 Jul 2024
Viewed by 406
Abstract
In the global context, advancements in technology and science have rendered virtual, augmented, and mixed-reality technologies capable of transforming clinical care and medical environments by offering enhanced features and improved healthcare services. This paper aims to present a mixed reality-based system to control a robotic wheelchair for people with limited mobility. The test group comprised 11 healthy subjects (six male, five female, mean age 35.2 ± 11.7 years). A novel platform that integrates a smart wheelchair and an eye-tracking-enabled head-mounted display was proposed to reduce the cognitive requirements needed for wheelchair movement and control. The approach’s effectiveness was demonstrated by evaluating our system in realistic scenarios. The demonstration of the proposed AR head-mounted display user interface for controlling a smart wheelchair and the results provided in this paper could highlight the potential of the HoloLens 2-based innovative solutions and bring focus to emerging research topics, such as remote control, cognitive rehabilitation, the implementation of patient autonomy with severe disabilities, and telemedicine. Full article
(This article belongs to the Special Issue Computational Intelligence Based-Brain-Body Machine Interface)
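One plausible building block of an eye-tracking HMD interface like the one described is dwell-based selection, where a sustained gaze on a virtual button is converted into a wheelchair velocity command. The sketch below illustrates that pattern only; the button set, dwell time, and command tuples are hypothetical and not the authors' implementation.

```python
import time

# Hypothetical (linear m/s, angular rad/s) commands attached to virtual buttons.
COMMANDS = {"forward": (0.5, 0.0), "left": (0.2, 0.6), "right": (0.2, -0.6), "stop": (0.0, 0.0)}

class DwellSelector:
    """Turn a sustained gaze on the same virtual button into a command."""
    def __init__(self, dwell_s=1.0):
        self.dwell_s = dwell_s
        self.current = None
        self.since = None

    def update(self, gazed_button, now=None):
        now = time.monotonic() if now is None else now
        if gazed_button != self.current:
            self.current, self.since = gazed_button, now   # gaze moved: restart the dwell timer
            return None
        if gazed_button is not None and now - self.since >= self.dwell_s:
            self.since = now                                # re-arm so the command repeats
            return COMMANDS.get(gazed_button)
        return None

# Illustrative use with a simulated gaze stream (button hit-tested from eye rays).
selector = DwellSelector(dwell_s=1.0)
stream = [("forward", 0.0), ("forward", 0.6), ("forward", 1.1), ("stop", 1.3), ("stop", 2.4)]
for button, t in stream:
    cmd = selector.update(button, now=t)
    if cmd:
        print(f"t={t:.1f}s send velocity command {cmd} for '{button}'")
```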
Figures:
Figure 1: The prototype system architecture.
Figure 2: The HoloLens 2 device.
Figure 3: Using the virtual keyboard via eye tracking (view from the HoloLens 2).
Figure 4: The circuit of Yousefi et al. [10].
Figure 5: Outdoor test course used for the training.
29 pages, 1631 KiB  
Systematic Review
Extended Reality-Based Head-Mounted Displays for Surgical Education: A Ten-Year Systematic Review
by Ziyu Qi, Felix Corr, Dustin Grimm, Christopher Nimsky and Miriam H. A. Bopp
Bioengineering 2024, 11(8), 741; https://doi.org/10.3390/bioengineering11080741 - 23 Jul 2024
Viewed by 608
Abstract
Surgical education demands extensive knowledge and skill acquisition within limited time frames, often limited by reduced training opportunities and high-pressure environments. This review evaluates the effectiveness of extended reality-based head-mounted display (ExR-HMD) technology in surgical education, examining its impact on educational outcomes and exploring its strengths and limitations. Data from PubMed, Cochrane Library, Web of Science, ScienceDirect, Scopus, ACM Digital Library, IEEE Xplore, WorldCat, and Google Scholar (Year: 2014–2024) were synthesized. After screening, 32 studies comparing ExR-HMD and traditional surgical training methods for medical students or residents were identified. Quality and bias were assessed using the Medical Education Research Study Quality Instrument, Newcastle–Ottawa Scale-Education, and Cochrane Risk of Bias Tools. Results indicate that ExR-HMD offers benefits such as increased immersion, spatial awareness, and interaction and supports motor skill acquisition theory and constructivist educational theories. However, challenges such as system fidelity, operational inconvenience, and physical discomfort were noted. Nearly half the studies reported outcomes comparable or superior to traditional methods, emphasizing the importance of social interaction. Limitations include study heterogeneity and English-only publications. ExR-HMD shows promise but needs educational theory integration and social interaction. Future research should address technical and economic barriers to global accessibility. Full article
Figures:
Figure 1: Flow diagram of PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses), produced via the online tools developed by Haddaway et al. [73].
Figure 2: Risk of bias assessment for individual studies (randomized parallel design) and summary using the ROB-2 tool [44,47–51,53–72,74–76].
Figure 3: Risk of bias assessment for randomized crossover studies using the ROB-2 tool [45,46].
Figure 4: Risk of bias assessment for a non-randomized study using the ROBINS-I tool [52].
Figure 5: MERSQI and NOS-E scores of the included studies [44–72,74–76].
Figure 6: A taxonomy of ExR-assisted teaching.
Figure 7: Educational outcomes of surgical training using ExR-HMDs.
Figure 8: Trainee, educator, and stakeholder perspectives on the advantages and disadvantages of using ExR-HMDs.
18 pages, 12761 KiB  
Article
Robot-Assisted Augmented Reality (AR)-Guided Surgical Navigation for Periacetabular Osteotomy
by Haoyan Ding, Wenyuan Sun and Guoyan Zheng
Sensors 2024, 24(14), 4754; https://doi.org/10.3390/s24144754 - 22 Jul 2024
Cited by 1 | Viewed by 688
Abstract
Periacetabular osteotomy (PAO) is an effective approach for the surgical treatment of developmental dysplasia of the hip (DDH). However, due to the complex anatomical structure around the hip joint and the limited field of view (FoV) during the surgery, it is challenging for surgeons to perform a PAO surgery. To solve this challenge, we propose a robot-assisted, augmented reality (AR)-guided surgical navigation system for PAO. The system mainly consists of a robot arm, an optical tracker, and a Microsoft HoloLens 2 headset, which is a state-of-the-art (SOTA) optical see-through (OST) head-mounted display (HMD). For AR guidance, we propose an optical marker-based AR registration method to estimate a transformation from the optical tracker coordinate system (COS) to the virtual space COS such that the virtual models can be superimposed on the corresponding physical counterparts. Furthermore, to guide the osteotomy, the developed system automatically aligns a bone saw with osteotomy planes planned in preoperative images. Then, it provides surgeons with not only virtual constraints to restrict movement of the bone saw but also AR guidance for visual feedback without sight diversion, leading to higher surgical accuracy and improved surgical safety. Comprehensive experiments were conducted to evaluate both the AR registration accuracy and osteotomy accuracy of the developed navigation system. The proposed AR registration method achieved an average mean absolute distance error (mADE) of 1.96 ± 0.43 mm. The robotic system achieved an average center translation error of 0.96 ± 0.23 mm, an average maximum distance of 1.31 ± 0.20 mm, and an average angular deviation of 3.77 ± 0.85°. Experimental results demonstrated both the AR registration accuracy and the osteotomy accuracy of the developed system. Full article
(This article belongs to the Special Issue Augmented Reality-Based Navigation System for Healthcare)
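The AR registration step estimates a rigid transformation from the optical tracker coordinate system to the virtual space, and accuracy is reported as a mean absolute distance error (mADE) over validation points. The sketch below uses the standard SVD-based (Kabsch) point-set fit as a stand-in for the paper's marker-based estimation; the point sets, noise level, and units are synthetic.

```python
import numpy as np

def fit_rigid_transform(src_pts, dst_pts):
    """Least-squares rigid transform (R, t) mapping src_pts to dst_pts (Kabsch/SVD)."""
    src_c, dst_c = src_pts.mean(0), dst_pts.mean(0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def mean_absolute_distance_error(R, t, pts_tracker, pts_virtual):
    """mADE between transformed tracker-space points and their virtual-space references."""
    mapped = pts_tracker @ R.T + t
    return float(np.linalg.norm(mapped - pts_virtual, axis=1).mean())

# Synthetic example: a known transform plus ~1 mm of digitization noise.
rng = np.random.default_rng(42)
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([100.0, -40.0, 25.0])                  # millimetres
tracker_pts = rng.uniform(-150, 150, size=(8, 3))        # eight validation points
virtual_pts = tracker_pts @ R_true.T + t_true + rng.normal(0, 1.0, (8, 3))

R, t = fit_rigid_transform(tracker_pts, virtual_pts)
print("mADE [mm]:", round(mean_absolute_distance_error(R, t, tracker_pts, virtual_pts), 2))
```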
Figures:
Figure 1: An overview of the proposed robot-assisted AR-guided surgical navigation system for PAO.
Figure 2: A schematic illustration of preoperative planning, where o_ost^CT, x_ost^CT, y_ost^CT, and z_ost^CT are calculated from a_11^CT, a_12^CT, a_21^CT, and a_22^CT.
Figure 3: A schematic illustration of bone saw calibration. (a) Digitizing the four corner points using a trackable pointer; (b) calculating o_saw^M, x_saw^M, y_saw^M, and z_saw^M from b_11^M, b_12^M, b_21^M, and b_22^M.
Figure 4: The proposed AR registration. (a) Virtual models of the optical marker are loaded in the virtual space, each with a unique pose. (b) The optical marker attached to the robot flange is aligned with each virtual model.
Figure 5: AR guidance during the PAO procedure. (a) The proposed AR navigation system not only provides visualization of the virtual models but also displays the pose parameters of the bone saw relative to the osteotomy plane. (b) Definitions of the pose parameters.
Figure 6: Experimental setup for the evaluation of AR registration accuracy. Eight validation points were defined in the virtual space; after performing AR registration, a trackable pointer was used to digitize the validation points, acquiring their coordinates in the optical tracker COS O_T. The mADE of the validation points was calculated as the evaluation metric.
Figure 7: Evaluation of the osteotomy accuracy. (a) Extraction of the upper plane and the lower plane in the postoperative image. (b) A schematic illustration of how the center translation error d_c, the maximum distance d_m, and the angular deviation θ are defined.
Figure 8: Visualization of the alignment between the virtual model (yellow) and the pelvis phantom (white) using different methods [18,19,21]. Misalignment is highlighted with red arrows. Compared with the other methods, the proposed method achieved the most accurate AR registration.
Figure 9: Visualization of the experimental osteotomy results, where actual and planned osteotomy planes are shown in orange and yellow, respectively.
Figure 10: AR guidance during the osteotomy procedure: (a) AR display when the bone saw is outside the osteotomy area, with the pose parameters displayed in red; (b) AR display when the bone saw is on the planned plane and inside the osteotomy area, with the pose parameters shown in green.
12 pages, 780 KiB  
Article
Predicting the Arousal and Valence Values of Emotional States Using Learned, Predesigned, and Deep Visual Features
by Itaf Omar Joudeh, Ana-Maria Cretu and Stéphane Bouchard
Sensors 2024, 24(13), 4398; https://doi.org/10.3390/s24134398 - 7 Jul 2024
Viewed by 563
Abstract
The cognitive state of a person can be categorized using the circumplex model of emotional states, a continuous model of two dimensions: arousal and valence. The purpose of this research is to select a machine learning model(s) to be integrated into a virtual reality (VR) system that runs cognitive remediation exercises for people with mental health disorders. As such, the prediction of emotional states is essential to customize treatments for those individuals. We exploit the Remote Collaborative and Affective Interactions (RECOLA) database to predict arousal and valence values using machine learning techniques. RECOLA includes audio, video, and physiological recordings of interactions between human participants. To allow learners to focus on the most relevant data, features are extracted from raw data. Such features can be predesigned, learned, or extracted implicitly using deep learners. Our previous work on video recordings focused on predesigned and learned visual features. In this paper, we extend our work onto deep visual features. Our deep visual features are extracted using the MobileNet-v2 convolutional neural network (CNN) that we previously trained on RECOLA’s video frames of full/half faces. As the final purpose of our work is to integrate our solution into a practical VR application using head-mounted displays, we experimented with half faces as a proof of concept. The extracted deep features were then used to predict arousal and valence values via optimizable ensemble regression. We also fused the extracted visual features with the predesigned visual features and predicted arousal and valence values using the combined feature set. In an attempt to enhance our prediction performance, we further fused the predictions of the optimizable ensemble model with the predictions of the MobileNet-v2 model. After decision fusion, we achieved a root mean squared error (RMSE) of 0.1140, a Pearson’s correlation coefficient (PCC) of 0.8000, and a concordance correlation coefficient (CCC) of 0.7868 on arousal predictions. We achieved an RMSE of 0.0790, a PCC of 0.7904, and a CCC of 0.7645 on valence predictions. Full article
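The reported metrics, RMSE, Pearson's correlation coefficient (PCC), and the concordance correlation coefficient (CCC), together with decision fusion of the two prediction streams, can be written compactly as below. The fusion shown is a plain average, which may differ from the paper's exact rule, and the label and prediction arrays are synthetic.

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def pcc(y_true, y_pred):
    return float(np.corrcoef(y_true, y_pred)[0, 1])

def ccc(y_true, y_pred):
    """Concordance correlation coefficient:
    2*cov / (var_true + var_pred + (mean_true - mean_pred)^2)."""
    cov = np.mean((y_true - y_true.mean()) * (y_pred - y_pred.mean()))
    return float(2 * cov / (y_true.var() + y_pred.var()
                            + (y_true.mean() - y_pred.mean()) ** 2))

# Synthetic arousal labels and two prediction streams (ensemble and CNN).
rng = np.random.default_rng(7)
labels = np.tanh(rng.normal(0, 0.4, 500))
pred_ensemble = labels + rng.normal(0, 0.15, 500)
pred_cnn = labels + rng.normal(0, 0.20, 500)
fused = (pred_ensemble + pred_cnn) / 2.0          # simple decision-level fusion

for name, p in [("ensemble", pred_ensemble), ("cnn", pred_cnn), ("fused", fused)]:
    print(f"{name:8s} RMSE={rmse(labels, p):.4f}  PCC={pcc(labels, p):.4f}  CCC={ccc(labels, p):.4f}")
```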
Show Figures

Figure 1
<p>Overview of our visual data methodology.</p>
Full article ">Figure 2
<p>Predicted versus actual plots of fused (<b>a</b>) arousal, and (<b>b</b>) valence predictions from an optimizable ensemble trained on combined visual features and MobileNet-v2 trained on video frames of full faces (green) or half faces (blue). The red dashed line represents perfect predictions.</p>
Full article ">
19 pages, 12908 KiB  
Article
Integration of 3D Gaussian Splatting and Neural Radiance Fields in Virtual Reality Fire Fighting
by Haojie Lian, Kangle Liu, Ruochen Cao, Ziheng Fei, Xin Wen and Leilei Chen
Remote Sens. 2024, 16(13), 2448; https://doi.org/10.3390/rs16132448 - 3 Jul 2024
Viewed by 819
Abstract
Neural radiance fields (NeRFs) and 3D Gaussian splatting have emerged as promising 3D reconstruction techniques recently. However, their application in virtual reality (VR), particularly in firefighting training, remains underexplored. We present an innovative VR firefighting simulation system based on 3D Gaussian Splatting technology. [...] Read more.
Neural radiance fields (NeRFs) and 3D Gaussian splatting have recently emerged as promising 3D reconstruction techniques. However, their application in virtual reality (VR), particularly in firefighting training, remains underexplored. We present an innovative VR firefighting simulation system based on 3D Gaussian Splatting technology. Leveraging these techniques, we successfully reconstruct realistic physical environments. By integrating the Unity3D game engine with head-mounted displays (HMDs), we created and presented immersive virtual fire scenes. Our system incorporates NeRF technology to generate highly realistic models of firefighting equipment. Users can freely navigate and interact with fire within the virtual fire scenarios, enhancing immersion and engagement. Moreover, by utilizing the Photon PUN2 networking framework, our system enables multi-user collaboration on firefighting tasks, improving training effectiveness and fostering teamwork and communication skills. Through experiments and surveys, it is demonstrated that the proposed VR framework enhances user experience and holds promise for improving the effectiveness of firefighting training. Full article
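As background on the rendering step behind NeRFs (Figure 1 of this article), the sketch below composites sampled colors and densities along a single camera ray using the standard volume-rendering weights. The sample values are illustrative assumptions; this is not the system's actual renderer.

# Minimal NumPy sketch of NeRF-style volume rendering along one ray.
import numpy as np

def composite_ray(densities, colors, deltas):
    """densities: (N,) sigma per sample; colors: (N, 3) RGB per sample;
    deltas: (N,) spacing between samples. Returns the rendered RGB."""
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

densities = np.array([0.1, 0.8, 2.0, 0.3])
colors = np.array([[0.9, 0.2, 0.1], [0.8, 0.3, 0.1],
                   [0.7, 0.4, 0.2], [0.2, 0.2, 0.2]])
deltas = np.full(4, 0.25)
print(composite_ray(densities, colors, deltas))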
Show Figures

Figure 1
<p>The process of Neural Radiance Fields (NeRFs) [<a href="#B7-remotesensing-16-02448" class="html-bibr">7</a>].</p>
Full article ">Figure 2
<p>Rendering Process for 3D Gaussian Splatting [<a href="#B6-remotesensing-16-02448" class="html-bibr">6</a>].</p>
Full article ">Figure 3
<p>System Architecture Diagram. The primary focus is on the integration of 3D Gaussian splatting and instant-ngp algorithms within Unity3D, as well as the utilization of VR devices for immersive interaction within the virtual scene.</p>
Full article ">Figure 4
<p>Importing a trained PLY file into Unity3D. (<b>a</b>) Creation of 3D Gaussian Assets. (<b>b</b>) Scene post-import.</p>
Full article ">Figure 5
<p>Cleaning the imported scene. (<b>a</b>) Ellipse pruner. (<b>b</b>) Cube pruner.</p>
Full article ">Figure 6
<p>Adding Box Collider and “Teleportation Area” script.</p>
Full article ">Figure 7
<p>Importing physically simulated smoke into Unity3D. (<b>a</b>) COMSOL simulation results. (<b>b</b>) Imported smoke.</p>
Full article ">Figure 8
<p>Incorporating fire into the scene by adding a collider and a control script to the fire object.</p>
Full article ">Figure 9
<p>COLMAP generates the pre-training images. (<b>a</b>) The camera captures images from various angles around the model. (<b>b</b>) A set of 125 PNG images is generated.</p>
Full article ">Figure 10
<p>NeRF generated using instant-ngp. (<b>a</b>) Creation of the fire extinguisher model. (<b>b</b>) The PLY file format.</p>
Full article ">Figure 11
<p>Generate texture maps.</p>
Full article ">Figure 12
<p>Incorporating fire suppression equipment into Unity3D.</p>
Full article ">Figure 13
<p>Align the spray particles' positions relative to the fire extinguisher.</p>
Full article ">Figure 14
<p>Create a “Network Player” prefab. (<b>a</b>) This prefab serves as a representation of the user, and upon launching the application on various devices, an instance of this prefab is instantiated within the scene. (<b>b</b>) We have incorporated scripts into the prefab to facilitate functionalities such as scene navigation and fire extinguisher control.</p>
Full article ">Figure 15
<p>Adding Photon Voice Components.</p>
Full article ">
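The pipeline above repeatedly imports trained PLY exports into Unity3D (Figures 4 and 10). A quick way to inspect such a file before import is sketched below; it uses the third-party plyfile package, and the file name is a placeholder, not a path from the article.

# Hedged sketch: listing the elements and per-vertex properties stored in a
# trained PLY export (e.g. a Gaussian splat scene or an instant-ngp mesh).
from plyfile import PlyData

ply = PlyData.read("trained_scene.ply")  # placeholder path
for element in ply.elements:
    # Typical 3D Gaussian splat exports store positions, spherical-harmonic
    # color coefficients, scales, rotations, and opacities per vertex.
    names = [prop.name for prop in element.properties]
    print(element.name, element.count, names)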
10 pages, 4132 KiB  
Article
Tear Film Break-Up Time before and after Watching a VR Video: Comparison between Naked Eyes and Contact Lens Wearers
by Hyunjin Kim, Minji Gil and Hyungoo Kang
Electronics 2024, 13(13), 2448; https://doi.org/10.3390/electronics13132448 - 22 Jun 2024
Viewed by 502
Abstract
The impact of viewing VR videos using a head-mounted display (HMD) on tear film dynamics is examined by comparing the viewing experience of individuals using their naked eyes with that of viewers wearing contact lenses. While the impact of VR on eye dryness [...] Read more.
The impact of viewing VR videos using a head-mounted display (HMD) on tear film dynamics is examined by comparing the viewing experience of individuals using their naked eyes with that of viewers wearing contact lenses. While the impact of VR on eye dryness has been studied, there is limited research on the risks for contact lens wearers. This study aims to investigate eye dryness associated with VR use in individuals wearing soft contact lenses. Seventeen adults in their 20s (7 male, 10 female) with uncorrected visual acuity of 0.8+ participated. The non-invasive tear film break-up time (NIBUT) was assessed before and after a 20 min VR video session under two conditions: with and without soft contact lenses. The results indicated a decrease in the initial tear film break-up time and an increase in the average tear film break-up time when viewing with naked eyes, whereas viewing with contact lenses led to decreases in both parameters, with statistically significant changes observed. Although the alteration in the tear film break-up time was insignificant during VR video viewing with naked eyes, the tear film stability of individuals wearing soft contact lenses tended to decrease. Caution is advised when using soft contact lenses during VR video sessions to mitigate potential eye dryness. Full article
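The statistically significant changes reported above were assessed with a paired t-test at p < 0.05 (see Figure 7). A minimal sketch of that comparison is shown below with made-up NIBUT values, since the study's raw measurements are not reproduced here.

# Sketch of a paired before/after NIBUT comparison (illustrative data only).
import numpy as np
from scipy import stats

nibut_before = np.array([9.8, 11.2, 8.5, 10.1, 12.0, 9.3, 10.7])  # seconds
nibut_after = np.array([8.1, 10.0, 7.2, 9.4, 10.5, 8.0, 9.6])     # seconds

t_stat, p_value = stats.ttest_rel(nibut_before, nibut_after)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant = {p_value < 0.05}")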
(This article belongs to the Special Issue Applications of Virtual, Augmented and Mixed Reality)
Show Figures

Figure 1
<p>Measurement of tear film break-up time using Idra.</p>
Full article ">Figure 2
<p>Subject watching VR.</p>
Full article ">Figure 3
<p>Experimental procedure.</p>
Full article ">Figure 4
<p>Initial tear film break-up time before and after watching the VR video with the naked eye.</p>
Full article ">Figure 5
<p>Average tear film break-up time before and after watching the VR video with the naked eye.</p>
Full article ">Figure 6
<p>Initial tear film break-up time before watching the VR video with the naked eye compared with after watching the VR video while wearing contact lenses.</p>
Full article ">Figure 7
<p>Average tear film break-up time before watching the VR video with the naked eye compared with after watching the VR video while wearing contact lenses. * Statistically significant difference from the baseline value (<span class="html-italic">p</span> &lt; 0.05, paired <span class="html-italic">t</span>-test).</p>
Full article ">Figure 8
<p>Initial tear film break-up time before watching the VR video with the naked eye: comparison between the first and second days.</p>
Full article ">Figure 9
<p>Average tear film break-up time before watching the VR video with the naked eye: comparison between the first and second days.</p>
Full article ">
21 pages, 4869 KiB  
Article
Assessment of User Preferences for In-Car Display Combinations during Non-Driving Tasks: An Experimental Study Using a Virtual Reality Head-Mounted Display Prototype
by Liang Li, Chacon Quintero Juan Carlos, Zijiang Yang and Kenta Ono
World Electr. Veh. J. 2024, 15(6), 264; https://doi.org/10.3390/wevj15060264 - 17 Jun 2024
Viewed by 554
Abstract
The goal of vehicular automation is to enhance driver comfort by reducing the necessity for active engagement in driving. This allows for the performance of non-driving-related tasks (NDRTs), with attention shifted away from the driving process. Despite this, there exists a discrepancy between [...] Read more.
The goal of vehicular automation is to enhance driver comfort by reducing the necessity for active engagement in driving. This allows for the performance of non-driving-related tasks (NDRTs), with attention shifted away from the driving process. Despite this, there exists a discrepancy between current in-vehicle display configurations and the escalating demands of NDRTs. This study investigates drivers’ preferences for in-vehicle display configurations within highly automated driving contexts. Utilizing virtual reality head-mounted displays (VR-HMDs) to simulate autonomous driving scenarios, this research employs Unity 3D Shape for developing sophisticated head movement tracking software. This setup facilitates the creation of virtual driving environments and the gathering of data on visual attention distribution. Employing an orthogonal experiment, this study methodically analyses and categorizes the primary components of in-vehicle display configurations to determine their correlation with visual immersion metrics. Additionally, this study incorporates subjective questionnaires to ascertain the most immersive display configurations and to identify key factors impacting user experience. Statistical analysis reveals that a combination of Portrait displays with Windshield Head-Up Displays (W-HUDs) is favored under highly automated driving conditions, providing increased immersion during NDRTs. This finding underscores the importance of tailoring in-vehicle display configurations to individual needs to avoid distractions and enhance user engagement. Full article
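For the orthogonal-experiment analysis described above, a range (level-mean) analysis is a common way to rank factors by their effect on the response. The sketch below illustrates it with invented configurations and immersion scores; the factor levels and numbers are assumptions, not the study's data.

# Hedged sketch of a range analysis over an orthogonal experiment on display
# configurations; factor labels follow Figure 4 (A: center console,
# B: dashboard, C: HUD), all values are hypothetical.
import pandas as pd

runs = pd.DataFrame({
    "A_console":   ["portrait", "landscape", "portrait", "landscape"],
    "B_dashboard": ["full", "minimal", "minimal", "full"],
    "C_hud":       ["W-HUD", "none", "W-HUD", "none"],
    "immersion":   [4.6, 3.2, 4.1, 3.5],  # hypothetical immersion scores
})

for factor in ["A_console", "B_dashboard", "C_hud"]:
    level_means = runs.groupby(factor)["immersion"].mean()
    print(factor, dict(level_means.round(2)),
          "range =", round(level_means.max() - level_means.min(), 2))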
Show Figures

Figure 1
<p>Virtual driving environment. (<b>a</b>) Map and environment of the virtual driving experience: commuter driving scene sections and functions. (<b>b</b>) Integration into the Unity 3D virtual driving platform and the experiment environment.</p>
Full article ">Figure 2
<p>A sample of the interface item.</p>
Full article ">Figure 3
<p>The principle of generating visual attention mapping through head movements using VR-HMD.</p>
Full article ">Figure 4
<p>Attention distribution data for in-vehicle display combinations: (<b>a</b>) Factor A, center console display; (<b>b</b>) Factor B, dashboard; and (<b>c</b>) Factor C, HUD.</p>
Full article ">Figure 5
<p>Trends of factors in in-vehicle display configurations.</p>
Full article ">Figure 6
<p>Heatmap of in-vehicle display configurations. (<b>a</b>) Configuration 1. (<b>b</b>) Configuration 2. (<b>c</b>) Configuration 3. (<b>d</b>) Configuration 4.</p>
Full article ">Figure A1
<p>Overview of mainstream in-vehicle display configuration types by current automakers.</p>
Full article ">Figure A2
<p>Prototypes of interface interaction for four configurations.</p>
Full article ">