
Advanced Virtual, Augmented, and Mixed Reality: Immersive Applications and Innovative Techniques

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 November 2024 | Viewed by 15538

Special Issue Editor


Guest Editor
Computer Science Department, Missouri University of Science and Technology, Rolla, MO 65409, USA
Interests: virtual reality; augmented reality; computer graphics; robotics; machine learning; data mining; qualitative spatial reasoning

Special Issue Information

Dear Colleagues,

AR/VR has grown beyond its science fiction roots to become a powerful visualization medium built on immersive software and hardware. It provides a more immersive experience than traditional vignette-based approaches, evoking immediate perception and comprehension. The COVID-19 pandemic accelerated the demand for such research and immersive tools: Zoom, for instance, became a prime tool for remote learning, conferencing, and collaboration, while VR combined with remote sensing devices allows products and projects to be inspected, diagnosed, and maintained remotely. The technology has not yet matured, largely because of equipment costs, but this has not deterred content developers, and research to lower these barriers continues.

Research interest in virtual reality, particularly for education, is growing rapidly across many areas, leading to innovative applications of use to both researchers and the general public. These span industries such as automotive, healthcare, psychology, education and training, tourism, manufacturing, civil engineering, commerce (advertising and retail sales), the military, architecture, and research and development. VR is closely tied to perception, immersion, and visualization, and innovators in each industry continue to explore ways to tap its potential in both abstract and real environments.

In 2023, the trend is expected to center on exploring and exploiting AR as fully as possible. AR is increasingly being adopted in the automotive industry, healthcare, marketing, engineering, and education, and demand for professionals proficient in virtual reality (VR) and augmented reality (AR) is growing rapidly. In light of these innovations, AR and VR will soon be integral to society. MDPI welcomes original papers on all areas of virtual and augmented reality applications in the natural and social sciences.

Dr. Chaman Sabharwal
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • virtual reality
  • augmented reality
  • digital
  • immersion
  • AR and AI
  • 3D graphics
  • animation
  • emerging
  • interaction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

27 pages, 20444 KiB  
Article
Investigating User Experience of an Immersive Virtual Reality Simulation Based on a Gesture-Based User Interface
by Teemu H. Laine and Hae Jung Suk
Appl. Sci. 2024, 14(11), 4935; https://doi.org/10.3390/app14114935 - 6 Jun 2024
Viewed by 1007
Abstract
The affordability of equipment and availability of development tools have made immersive virtual reality (VR) popular across research fields. Gesture-based user interface has emerged as an alternative method to handheld controllers to interact with the virtual world using hand gestures. Moreover, a common goal for many VR applications is to elicit a sense of presence in users. Previous research has identified many factors that facilitate the evocation of presence in users of immersive VR applications. We investigated the user experience of Four Seasons, an immersive virtual reality simulation where the user interacts with a natural environment and animals with their hands using a gesture-based user interface (UI). We conducted a mixed-method user experience evaluation with 21 Korean adults (14 males, 7 females) who played Four Seasons. The participants filled in a questionnaire and answered interview questions regarding presence and experience with the gesture-based UI. The questionnaire results indicated high ratings for presence and gesture-based UI, with some issues related to the realism of interaction and lack of sensory feedback. By analyzing the interview responses, we identified 23 potential presence factors and proposed a classification for organizing presence factors based on the internal–external and dynamic–static dimensions. Finally, we derived a set of design principles based on the potential presence factors and demonstrated their usefulness for the heuristic evaluation of existing gesture-based immersive VR experiences. The results of this study can be used for designing and evaluating presence-evoking gesture-based VR experiences. Full article
Figures
  • Figure 1: Overview of this study.
  • Figure 2: The setup for gesture-based interaction using HTC VIVE and Leap Motion. Users interact with fireflies (top) and a butterfly (bottom).
  • Figure 3: Experimental procedure.
  • Figure 4: Means and standard deviations of the statements related to presence. The scale ranges from strongly disagree (1) to strongly agree (5).
  • Figure 5: Means and standard deviations of the statements related to the gesture-based UI in Four Seasons. The scale ranges from strongly disagree (1) to strongly agree (5).
  • Figure 6: The classification of the potential presence factors identified in this study.
  • Figure 7: Sample screens from the Pillow immersive VR and mixed reality experience: (A) the Stargazer, (B) the Fisherman, (C) the Meditator, and (D) the Storyteller.
18 pages, 7366 KiB  
Article
Realistic Texture Mapping of 3D Medical Models Using RGBD Camera for Mixed Reality Applications
by Cosimo Aliani, Alberto Morelli, Eva Rossi, Sara Lombardi, Vincenzo Yuto Civale, Vittoria Sardini, Flavio Verdino and Leonardo Bocchi
Appl. Sci. 2024, 14(10), 4133; https://doi.org/10.3390/app14104133 - 13 May 2024
Cited by 1 | Viewed by 758
Abstract
Augmented and mixed reality in the medical field is becoming increasingly important. The creation and visualization of digital models similar to reality could be a great help to increase the user experience during augmented or mixed reality activities like surgical planning and educational, training and testing phases of medical students. This study introduces a technique for enhancing a 3D digital model reconstructed from cone-beam computed tomography images with its real coloured texture using an Intel D435 RGBD camera. This method is based on iteratively projecting the two models onto a 2D plane, identifying their contours and then minimizing the distance between them. Finally, the coloured digital models were displayed in mixed reality through a Microsoft HoloLens 2 and an application to interact with them using hand gestures was developed. The registration error between the two 3D models evaluated using 30,000 random points indicates values of: 1.1 ± 1.3 mm on the x-axis, 0.7 ± 0.8 mm on the y-axis, and 0.9 ± 1.2 mm on the z-axis. This result was achieved in three iterations, starting from an average registration error on the three axes of 1.4 mm to reach 0.9 mm. The heatmap created to visualize the spatial distribution of the error shows how it is uniformly distributed over the surface of the pointcloud obtained with the RGBD camera, except for some areas of the nose and ears where the registration error tends to increase. The obtained results indicate that the proposed methodology seems effective. In addition, since the used RGBD camera is inexpensive, future approaches based on the simultaneous use of multiple cameras could further improve the results. Finally, the augmented reality visualization of the obtained result is innovative and could provide support in all those cases where the visualization of three-dimensional medical models is necessary. Full article
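The registration step the abstract outlines (project the two models onto a 2D plane, extract their contours, then minimize the distance between them) can be illustrated with a small sketch. The code below is not the authors' implementation: it assumes two binary silhouette masks already projected onto a common plane, builds a distance map of the reference contour with OpenCV, and brute-force searches in-plane shifts of the other contour. It captures the spirit of the minimization, not the iterative multi-orientation procedure or the 1 mm stopping criterion used in the paper.

# Illustrative sketch only: contour-to-contour distance minimization between
# two pre-projected binary silhouette masks (e.g., DICOM-based vs. RGBD-based).
import cv2
import numpy as np

def contour_points(mask):
    """(N, 2) pixel coordinates (x, y) of the outer contour of a binary mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return np.vstack([c.reshape(-1, 2) for c in contours])

def best_shift(ref_mask, mov_mask, search=20):
    """Brute-force in-plane shift of mov_mask that minimises the mean
    distance from its contour to the contour of ref_mask."""
    # Distance map of the reference contour: every pixel stores the distance
    # (in pixels) to the nearest reference contour point.
    ref = np.zeros_like(ref_mask, dtype=np.uint8)
    pts_ref = contour_points(ref_mask)
    ref[pts_ref[:, 1], pts_ref[:, 0]] = 1
    dist_map = cv2.distanceTransform(1 - ref, cv2.DIST_L2, 5)

    pts_mov = contour_points(mov_mask)
    h, w = ref_mask.shape
    best = (0, 0, np.inf)
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            p = pts_mov + (dx, dy)
            p[:, 0] = np.clip(p[:, 0], 0, w - 1)
            p[:, 1] = np.clip(p[:, 1], 0, h - 1)
            score = dist_map[p[:, 1], p[:, 0]].mean()
            if score < best[2]:
                best = (dx, dy, score)
    return best  # (dx, dy, mean contour distance in pixels)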
Figures
  • Figure 1: Human head mimicking phantom.
  • Figure 2: Conceptual diagram summarizing the main adopted steps.
  • Figure 3: Application of Otsu's method for outer contour detection: the left image shows the original DICOM image, while the right one shows the result obtained by applying Otsu's thresholding method.
  • Figure 4: Pointcloud obtained from the DICOM image processing.
  • Figure 5: ChArUco board used as a fiducial marker for RGBD camera pose estimation. The side length of each square on the board is 15 mm, while the side length of each ArUco marker is 10 mm.
  • Figure 6: Reference system of the ChArUco board.
  • Figure 7: Pointcloud obtained after the registration and segmentation processes of the RGBD pointclouds.
  • Figure 8: Result of the initial BB-based registration process. The bounding boxes of the two registered pointclouds are represented in white.
  • Figure 9: Highlights of the contours-based minimization algorithm. (a) RGBD pointcloud contour c_RGB after its projection onto the xy plane defined by the rotation vector R_1 = (0, 0, 0); (b) DICOM-based pointcloud contour c_DICOM after its projection onto the same plane; (c) application of the distance matrix filter to the c_DICOM contour, where the colour scale from black to white indicates pixels closer to and farther from the reference contour, respectively; (d) minimization of the distance between c_RGB and c_DICOM based on the distance matrix, with c_DICOM in white, the initial c_RGB contour in red, and the c_RGB contour at the end of the minimization in green.
  • Figure 10: Flowchart diagram of the contours-based minimization algorithm.
  • Figure 11: Result of the registration between the RGBD and DICOM-based pointclouds.
  • Figure 12: Mesh obtained at the end of the texture mapping from the RGBD to the DICOM-based pointcloud.
  • Figure 13: Mean registration error E, calculated over a thousand random points, as a function of iterations. The error at iteration zero is the mean error after the preliminary registration based on bounding boxes; the threshold E_thr = 1 mm, chosen as the stopping criterion for the minimization process, is shown in red.
  • Figure 14: Heatmap showing how the registration error between the RGBD and the DICOM-based pointcloud is spatially distributed on the RGBD pointcloud. The colour scale on the left is given in millimetres and ranges from red (areas with minimal error) to green/blue (areas with higher error).
  • Figure 15: Different types of object manipulation and interaction using hand gestures. The white lines are the user's hand raycasts, which let HoloLens 2 interact with the digital object when the hands are not in direct contact with it. (a) One-handed interaction: the object is rotated and translated with the corresponding hand movement; (b) one-handed interaction: the object is rotated by selecting one side of the box and moving the hand; (c) one-handed interaction: the object is scaled by selecting one vertex of the box and moving the hand; (d) two-handed interaction: the object is translated and scaled with intuitive hand movements.
14 pages, 3573 KiB  
Article
Participatory Exhibition-Viewing Using Augmented Reality and Analysis of Visitor Behavior
by Chun-I Lee, Yen-Hsi Pan and Brian Chen
Appl. Sci. 2024, 14(9), 3579; https://doi.org/10.3390/app14093579 - 24 Apr 2024
Viewed by 921
Abstract
Augmented reality (AR) is rapidly becoming a popular technology for exhibitions. The extended content provided through virtual elements offers a higher level of interactivity and can increase the appeal of the exhibition for younger viewers, in particular. However, AR technology in exhibition settings is typically utilized to extend the effects of exhibits, focusing solely on individual experiences and lacking in shared social interactions. In order to address this limitation, in this study, we used AR technology to construct a participatory exhibition-viewing system in the form of an AR mobile application (app), “Wander Into Our Sea”. This system was developed as a component of the 2022 Greater Taipei Biennial of Contemporary Art exhibition titled “Log Into Our Sea”. The app features two modes: exhibition-viewing mode and message mode. The first embodies passive exhibition-viewing while the second offers channels for active participation. The app has three functions: (1) in exhibition mode, visitors passively view the exhibition content through the AR lens, (2) in message mode, visitors can use the AR lens to leave messages in the 3D space of the exhibition to become part of the exhibit, and (3) during the use of either mode, the app collects data on visitor behavior and uploads it to a cloud to create a research database. The third function allowed us to compare the behaviors of exhibition visitors while they used the two modes. Results revealed that without restricting the ways and sequences in which AR content was viewed, there were no significant differences in the duration of viewing, or the distance covered by visitors between the two modes. However, the paths they took were more concentrated in the exhibition-viewing mode, which indicates that this mode encouraged visitors to view the exhibit in accordance with the AR content. In contrast, in message mode, visitors were encouraged to leave text messages and read those left by others, which created disorganized unpredictable paths. Our study demonstrates an innovative application of AR positioning within an interactive exhibition-viewing system, showcasing a novel way to engage visitors and enrich their experience. Full article
Figures
  • Figure 1: Exhibition space and dimensions.
  • Figure 2: Frame scanning for positioning.
  • Figure 3: Schematic of AR image recognition and SLAM technology integration for virtual–physical space positioning.
  • Figure 4: AR video content positioned within physical frames. (The Chinese text in the image consists of introductions to the two centers.)
  • Figure 5: Messages left by visitors in the exhibition space. (The Chinese characters on the wall are introductions to the three centers, while the floating Chinese characters are messages left by visitors at the exhibition.)
  • Figure 6: Experiment process.
  • Figure 7: Path distributions of all visitors. (Blue lines represent visitor paths in exhibition mode; red lines represent visitor paths in message mode.)
  • Figure 8: Paths taken by one visitor. (Blue lines represent paths in exhibition mode; red lines represent paths in message mode.)
11 pages, 3044 KiB  
Article
Towards the Emergence of the Medical Metaverse: A Pilot Study on Shared Virtual Reality for Orthognathic–Surgical Planning
by Jari Kangas, Jorma Järnstedt, Kimmo Ronkainen, John Mäkelä, Helena Mehtonen, Pertti Huuskonen and Roope Raisamo
Appl. Sci. 2024, 14(3), 1038; https://doi.org/10.3390/app14031038 - 25 Jan 2024
Viewed by 915
Abstract
Three-dimensional (3D) medical images are used for diagnosis and in surgical operation planning. Computer-assisted surgical simulations (CASS) are essential for complex surgical procedures that are often performed in an interdisciplinary manner. Traditionally, the participants study the designs on the same display. In 3D virtual reality (VR) environments, the planner is wearing a head-mounted display (HMD). The designs can be then examined in VR by other persons wearing HMDs, which is a practical use case for the medical metaverse. A multi-user VR environment was built for the planning of an orthognathic–surgical (correction of facial skeleton) operation. Four domain experts (oral and maxillofacial radiologists) experimented with the pilot system and found it useful. It enabled easier observation of the model and a better understanding of the structures. There was a voice connection and co-operation during the procedure was natural. The planning task is complex, leading to a certain level of complexity in the user interface. Full article
Figures
  • Figure 1: Two points located on the skull model surface (left). A cut plane defined on the lower jaw by three points (middle); the plane location can be edited by moving the points. Two separated parts of the lower jaw after the cut operation (right).
  • Figure 2: Directional handles to move the model part along the main axes (left); the part can be moved in the given directions. Rotational handles to rotate the part around the main axes (middle). The movement length and direction can be measured by observing the movement of any point of interest set on the model (right); the bottom row in the table indicates the move (9.3 mm to the right, etc.) from the original position.
  • Figure 3: The collaborating participants can point to specific locations for other participants while discussing (left). Seeing the other participant in the VR environment (middle): the system only shows the HMD (blue box, left) and the hand controllers of the other participant (two red boxes, middle), which was enough to understand how the other participant sees the medical imaging data. The view is from the observer's point of view while the other participant was making the plan; the movements of the hand controllers during the design added to the sense of immersion. A user with an Oculus Quest 2 HMD and the Touch controllers (right).
  • Figure 4: The distribution of subjective evaluation values for each attribute (Table 1). Values selected by "radiologist" participants are marked with an "x" and values selected by "surgeon" participants with a dot. The "radiologist" participants and one of the "surgeon" participants consistently gave high evaluations for all attributes.
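The cut-plane editing shown in Figure 1 (a plane on the lower jaw defined by three user-placed points, then a split into two parts) has a simple geometric core. The sketch below is not the pilot system's code; it is a minimal NumPy illustration, with hypothetical function names, of deriving the plane from three points and partitioning mesh vertices by the sign of their signed distance.

# Minimal sketch (not the pilot system): plane from three points, then a
# vertex partition into the two sides of that plane.
import numpy as np

def cut_plane(p1, p2, p3):
    """Plane through three points, returned as (unit normal n, offset d)
    such that a point x lies on the plane when n . x + d = 0."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)
    return n, -np.dot(n, p1)

def split_vertices(vertices, n, d):
    """Partition an (N, 3) vertex array into the two sides of the plane."""
    side = vertices @ n + d          # signed distance of every vertex
    return vertices[side >= 0], vertices[side < 0]

For example, split_vertices(verts, *cut_plane(p1, p2, p3)) returns the two vertex sets on either side of the plane through p1, p2 and p3; moving any of the three points simply redefines the plane and repeats the split.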
13 pages, 1288 KiB  
Article
Attentional Bias Modification Training in Virtual Reality: Evaluation of User Experience
by María Teresa Mendoza-Medialdea, Ana Carballo-Laza, Mariarca Ascione, Franck-Alexandre Meschberger-Annweiler, Bruno Porras-Garcia, Marta Ferrer-Garcia and José Gutiérrez-Maldonado
Appl. Sci. 2024, 14(1), 222; https://doi.org/10.3390/app14010222 - 26 Dec 2023
Viewed by 931
Abstract
Recent technological advances have paved the way for incorporating virtual reality (VR) into attentional bias modification training (ABMT) for the treatment of eating disorders. An important consideration in this therapeutic approach is ensuring the ease and comfort of users of the hardware and software, preventing them from becoming additional obstacles during treatment. To assess this, 68 healthy participants engaged in an ABMT experiment aimed at evaluating various factors, including usability as well as the participants’ comfort while using the VR equipment, task-induced fatigue, and attitudes towards the technology. Our results indicated a favorable usability level for the ABMT proposed in this study. While their discomfort, anxiety, and fatigue increased during the task, these did not significantly impact its execution. However, heightened anxiety and fatigue were linked to lower evaluations of software usability. Other variables considered in the experiment did not notably affect the task. Full article
Figures
  • Figure 1: Example of ABMT trials, with the three possible figures and colors: (a) yellow triangle on the right leg; (b) green triangle on the left shoulder; (c) red square on the chest; (d) close-up of the body after the red circle on the stomach disappears and the surrounding body areas are illuminated.
21 pages, 3273 KiB  
Article
Design of an Immersive Virtual Reality Framework to Enhance the Sense of Agency Using Affective Computing Technologies
by Amalia Ortiz and Sonia Elizondo
Appl. Sci. 2023, 13(24), 13322; https://doi.org/10.3390/app132413322 - 17 Dec 2023
Cited by 1 | Viewed by 1550
Abstract
Virtual Reality is expanding its use to several fields of application, including health and education. The continuous growth of this technology comes with new challenges related to the ways in which users feel inside these virtual environments. There are various guidelines on ways to enhance users’ virtual experience in terms of immersion or presence. Nonetheless, there is no extensive research on enhancing the sense of agency (SoA), a phenomenon which refers to the self-awareness of initiating, executing, and controlling one’s actions in the world. After reviewing the state of the art of technologies developed in the field of Affective Computing (AC), we propose a framework for designing immersive virtual environments (IVE) to enhance the users’ SoA. The framework defines the flow of interaction between users and the virtual world, as well as the AC technologies required for each interactive component to recognise, interpret and respond coherently within the IVE in order to enhance the SoA. Full article
Figures
  • Figure 1: Sense of agency conceptualisation: example showing the intention to burst a balloon.
  • Figure 2: General view of the sense of agency in terms of the two layers proposed by Wen in [14].
  • Figure 3: Research on the facial movements employed to convey universal emotions. Image source: [29].
  • Figure 4: (a) A two-dimensional model, image source [45]; (b) a three-dimensional model, image source [46].
  • Figure 5: Several studies employed the OCC model to generate emotions for embodied characters. Image source: [57].
  • Figure 6: SAM scale measuring the dimensions of pleasure, arousal and dominance using a series of abstract graphic characters arranged horizontally along a 9-point scale. Image source: [93].
  • Figure 7: Adaptation of the two-layer model of the sense of agency proposed in [14] to immersive virtual environments.
  • Figure 8: Proposed framework for the implementation of immersive virtual environments that integrate the management of the emotion flow to improve the sense of agency.
16 pages, 391 KiB  
Article
Performance, Emotion, Presence: Investigation of an Augmented Reality-Supported Concept for Flight Training
by Birgit Moesl, Harald Schaffernak, Wolfgang Vorraber, Reinhard Braunstingl and Ioana Victoria Koglbauer
Appl. Sci. 2023, 13(20), 11346; https://doi.org/10.3390/app132011346 - 16 Oct 2023
Viewed by 1249
Abstract
Augmented reality (AR) could be a means for a more sustainable education of the next generation of pilots. This study aims to assess an AR-supported training concept for approach to landing, which is the riskiest phase of flying an aircraft and the most difficult to learn. The evaluation was conducted with 59 participants (28 women and 31 men) in a pretest–post-test control group design. No significant effect of the AR-supported training was observed when comparing the experimental and the control groups. However, the results show that for the experimental group that trained with AR, higher performance in post-test was associated with higher AR presence and comfort with AR during training. Although both gender groups improved their approach quality after training, the improvement was larger in women as compared to men. Trainees’ workload, fear of failure, and negative emotions decreased in post-test as compared to pre-test, but the decrease was significantly larger in women than in men. The experimental group who used AR support during training showed improved performance despite the absence of AR support in post-test. However, the AR-based training concept had a similar effect to conventional simulator training. Although more research is necessary to explore the training opportunities in AR and mixed reality, the results of this study indicate that such an application would be beneficial to bridge the gap between theoretical and practical instruction. Full article
Figures
  • Figure 1: Visualization of the AR cues in green at scenario start. The AR cues are relative to the runway in the variants "center" (no offset) and "left" and "right" (with offset). Figure adapted from [12].
  • Figure 2: Schematic illustration of the approach path for spot landing, adapted from [12].
  • Figure 3: Approach quality of female and male trainees in pre-test and post-test. The circles represent the individual data of the participants; data of the same participant in pre- and post-test are connected by lines.
  • Figure 4: Differences in approach quality between post-test and pre-test. The circles represent the differences for each participant; the bars represent the mean differences of the group.
18 pages, 14314 KiB  
Article
Wine Production through Virtual Environments with a Focus on the Teaching–Learning Process
by Danis Tapia, Diego Illescas, Walter Santamaría and Jessica S. Ortiz
Appl. Sci. 2023, 13(19), 10823; https://doi.org/10.3390/app131910823 - 29 Sep 2023
Cited by 1 | Viewed by 1315
Abstract
This paper focuses on the application of the hardware-in-the-loop (HIL) technique in the winemaking process. The HIL technique provides an effective methodology to test and verify the automatic control of industrial processes in 3D laboratory environments. Two parts are considered: (i) software, which consists of the virtualization of the wine process in order to generate a realistic work environment that allows the student to manipulate the system while visualizing the changes in the process; and (ii) hardware, through which the process control is implemented in ladder language in a PLC S7 1200 AC/DC/RLY (programmable logic controller). Bidirectional Ethernet TCP/IP communication is established, achieving a client–server architecture. This article highlights the main advantages of the HIL technique, such as its ability to simulate complex and extreme scenarios that would be difficult or expensive to recreate in a real environment. In addition, real-time testing of the hardware and software to implement the control system is performed, allowing for fast and accurate responses. Finally, a usability table is obtained that demonstrates the benefits of performing industrial process control work in virtual work environments, focusing the development on meaningful learning processes for engineering students. Full article
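The client–server exchange between a virtualized process and the PLC can be pictured with a short sketch. The snippet below is purely illustrative and is not the system built in the paper: it assumes the python-snap7 library, an S7-1200 reachable over Ethernet TCP/IP, and an invented data-block layout (DB1 holding a REAL tank level at byte 0 and a BOOL valve command at byte 4), with a toy plant model standing in for the virtual wine process.

# Hypothetical HIL loop sketch: a PC-side simulation exchanging tags with an
# S7-1200 over Ethernet TCP/IP via python-snap7. IP address, DB layout and
# the plant model are illustrative assumptions, not the paper's setup.
import time
import snap7
from snap7.util import get_bool, set_real

PLC_IP, RACK, SLOT, DB = "192.168.0.1", 0, 1, 1

client = snap7.client.Client()
client.connect(PLC_IP, RACK, SLOT)

tank_level = 0.0                        # simulated process variable (%)
while True:
    # Read the valve command computed by the PLC ladder program (DB1.DBX4.0).
    data = client.db_read(DB, 0, 6)
    valve_open = get_bool(data, 4, 0)

    # Very simple plant model: the tank fills while the valve is open.
    tank_level = min(100.0, tank_level + (1.5 if valve_open else -0.5))
    tank_level = max(0.0, tank_level)

    # Write the simulated level back (DB1.DBD0) so the PLC closes the loop.
    buf = bytearray(4)
    set_real(buf, 0, tank_level)
    client.db_write(DB, 0, buf)

    time.sleep(0.1)                     # 10 Hz exchange rate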
Figures
  • Figure 1: Bidirectional communication between software and hardware.
  • Figure 2: P&ID diagram for wine production.
  • Figure 3: Stages of wine production.
  • Figure 4: SCADA system.
  • Figure 5: Windows containing the wine production HMI.
  • Figure 6: Elaboration of the virtual environment of the wine process.
  • Figure 7: Software-to-hardware communication.
  • Figure 8: Communication between KepServer and InTouch.
  • Figure 9: Control and communication of the virtual process.
  • Figure 10: Control panel in the three scenarios for user interaction.
  • Figure 11: Virtualization stages of the wine process.
  • Figure 12: User interaction with the HMI and the virtual wine production environment.
  • Figure 13: Visualization of the factory in the virtual environment.
  • Figure 14: Scene in the virtual environment: explosion.
  • Figure 15: Tests at limit values in the virtual environment: (a) an explosion; (b) a liquid overflow.
  • Figure 16: Graphical representation of the user in the environment.
  • Figure A1: GPU performance.
  • Figure A2: CPU performance.
15 pages, 1212 KiB  
Article
Augmented Reality Applications for Synchronized Communication in Construction: A Review of Challenges and Opportunities
by Rita El Kassis, Steven K. Ayer and Mounir El Asmar
Appl. Sci. 2023, 13(13), 7614; https://doi.org/10.3390/app13137614 - 28 Jun 2023
Cited by 6 | Viewed by 1978
Abstract
Many researchers in the construction field have explored the utilization of augmented reality (AR) and its impact on the industry. Previous studies have shown potential uses for AR in the construction industry. However, a comprehensive critical review exploring the ways in which AR supports synchronized communication is still missing. This paper aims to fill this gap by examining trends identified in the literature and by analyzing both beneficial and challenging attributes. This work was performed by collecting numerous journal and conference papers, using keywords including “augmented reality”, “construction”, and “synchronous communication”. The papers were then categorized based on the reported attributes that were indicated to be challenges or benefits. Throughout the analysis, several benefits were consistently reported, including training, visualization, instantly sharing information, decision making, and intuitive interaction. Similarly, several challenges were consistently reported, such as difficulty in manipulation, unfriendly interface, device discomfort, and sun brightness. Regarding other attributes, such as field of view, cost, safety hazards, and hands-free mode, researchers provided divergent reports regarding whether they were beneficial or detrimental to AR communication. These findings provide valuable guidance for future researchers and practitioners, enabling them to leverage AR for synchronized communication in ways that consistently offer value. Full article
Figures
  • Figure 1: Off-site and on-site users' hardware and software.
  • Figure 2: Research method.
  • Figure 3: Number of attributes (i.e., benefits and challenges) showing consensus/divergence.
22 pages, 7522 KiB  
Article
Upper Body Pose Estimation Using Deep Learning for a Virtual Reality Avatar
by Taravat Anvari, Kyoungju Park and Ganghyun Kim
Appl. Sci. 2023, 13(4), 2460; https://doi.org/10.3390/app13042460 - 14 Feb 2023
Cited by 3 | Viewed by 3502
Abstract
With the popularity of virtual reality (VR) games and devices, demand is increasing for estimating and displaying user motion in VR applications. Most pose estimation methods for VR avatars exploit inverse kinematics (IK) and online motion capture methods. In contrast to existing approaches, we aim for a stable process with less computation, usable in a small space. Therefore, our strategy has minimum latency for VR device users, from high-performance to low-performance, in multi-user applications over the network. In this study, we estimate the upper body pose of a VR user in real time using a deep learning method. We propose a novel method inspired by a classical regression model and trained with 3D motion capture data. Thus, our design uses a convolutional neural network (CNN)-based architecture from the joint information of motion capture data and modifies the network input and output to obtain input from a head and both hands. After feeding the model with properly normalized inputs, a head-mounted display (HMD), and two controllers, we render the user’s corresponding avatar in VR applications. We used our proposed pose estimation method to build single-user and multi-user applications, measure their performance, conduct a user study, and compare the results with previous methods for VR avatars. Full article
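The core regression idea in the abstract (tracked head and hand data in, upper-body joint positions out) can be sketched briefly. The layer sizes, the plain fully connected layers, and the input/output shapes below are assumptions for illustration only; the paper's actual model is CNN-based and trained on 3D motion-capture data.

# Illustrative sketch only: regressing 9 upper-body joint positions from the
# normalised positions of the HMD and two controllers. Architecture and
# shapes are assumptions, not the paper's network.
import torch
import torch.nn as nn

N_JOINTS = 9           # head, hands, elbows, shoulders, spine, pelvis

class UpperBodyRegressor(nn.Module):
    def __init__(self, in_dim=3 * 3, hidden=256):
        # in_dim: 3D positions of the HMD and both controllers (3 x 3 values).
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, N_JOINTS * 3),   # x, y, z per joint
        )

    def forward(self, tracked):               # tracked: (batch, 9)
        return self.net(tracked).view(-1, N_JOINTS, 3)

# One training step against motion-capture ground truth (random stand-in data).
model = UpperBodyRegressor()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
tracked = torch.randn(32, 9)                  # batch of tracked inputs
target = torch.randn(32, N_JOINTS, 3)         # mocap joint positions
loss = nn.functional.mse_loss(model(tracked), target)
optim.zero_grad()
loss.backward()
optim.step()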
Figures
  • Figure 1: Our proposed pose estimation method for an avatar. Part A is the network architecture for pose regression and Part B is the avatar animation step.
  • Figure 2: Upper body joints j1–j9 indicate the head, left hand, right hand, left elbow, right elbow, left shoulder, right shoulder, spine, and pelvis.
  • Figure 3: Our pose regression model.
  • Figure 4: Mean squared error (MSE) of our pose regression model in the last (5th) epoch.
  • Figure 5: Mean absolute error (MAE) between the predicted values and the ground truth data for the right shoulder (joint j7) in the last (5th) epoch.
  • Figure 6: Comparison of the virtual reality (VR) user (right) and the corresponding self-avatar using our method (left).
  • Figure 7: Comparison of our results in blue (left) and inverse kinematics (IK) in red (middle) with the user (right): (a) left-hand blocking, (b) right-hand blocking, (c) sway motion, (d) lift motion, (e) right hand in front, and (f) folded-arms motion generation.
  • Figure 8: Comparison of our results (middle) and IK results (bottom) with the user (top).
  • Figure 9: Archery game in two modes: camera view of the user playing the archery game (right) using our method (left) and the IK method (middle).
  • Figure 10: Ball-grabbing game in two modes: camera view of three users (bottom) playing the ball-grabbing game using our method (top) and the IK method (middle).
  • Figure 11: Questionnaire results in the archery game for our proposed method and IK.
  • Figure 12: Average total points (percentage) in the archery game for our proposed method and IK.
  • Figure 13: Questionnaire results in the ball-grabbing game for our proposed method and IK.
  • Figure 14: Average total balls grabbed by the users (percentage) in the ball-grabbing game for our proposed method and IK.