Search Results (414)

Search Parameters:
Keywords = metaverse

15 pages, 729 KiB  
Article
Behavioral Intentions in Metaverse Tourism: An Extended Technology Acceptance Model with Flow Theory
by Qi Wu, Ming-Qi Li and Jun-Hui Wang
Information 2024, 15(10), 632; https://doi.org/10.3390/info15100632 - 13 Oct 2024
Viewed by 642
Abstract
This study develops a new theoretical framework that extends the Technology Acceptance Model (TAM) with flow theory to explore the factors influencing behavioral intentions to participate in metaverse tourism. Using data from 518 respondents with metaverse experience and participation in metaverse tourism, the study employed R Studio and Structural Equation Modeling (SEM) to test the relationships between the variables in the model. The results indicate that metaverse flow has a significant positive impact on users' perceived usefulness and perceived ease of use, with flow demonstrating strong explanatory power as a precursor factor. Perceived usefulness and perceived ease of use predict users' attitudes toward using metaverse technology. A positive attitude towards the metaverse can enhance users' support for metaverse tourism and their behavioral intention to participate in it, while support also positively influences behavioral intention. Support for metaverse tourism acts as a clear mediator between attitudes and behavioral intention. The newly developed framework provides a fresh perspective for metaverse tourism research and helps enrich empirical analysis in this field. By analyzing tourists' behavioral intentions in depth, the study offers valuable insights for stakeholders to develop targeted marketing strategies and services, thus promoting the future development of metaverse tourism.
Figures
Figure 1: The proposed research model.
Figure 2: Results of the structural model. Notes: * p < 0.05, ** p < 0.01, *** p < 0.001.
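The path structure described in this abstract (flow to perceived usefulness/ease of use, to attitude, to support, to behavioral intention) maps directly onto SEM syntax. A minimal sketch in Python, assuming the semopy package and hypothetical construct names; the authors worked in R Studio, and their survey items and code are not reproduced here:

```python
# Hedged sketch of the hypothesized TAM + flow structure, not the authors' model.
# Construct labels (FLOW, PU, PEOU, ATT, SUP, BI) are hypothetical stand-ins.
import pandas as pd
from semopy import Model

MODEL_DESC = """
PU   ~ FLOW        # flow -> perceived usefulness
PEOU ~ FLOW        # flow -> perceived ease of use
ATT  ~ PU + PEOU   # attitude toward metaverse technology
SUP  ~ ATT         # support for metaverse tourism
BI   ~ ATT + SUP   # behavioral intention (SUP mediates ATT -> BI)
"""

df = pd.read_csv("survey_scores.csv")  # hypothetical: one column per construct
model = Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```

The mediation claim in the abstract would then be read off the indirect path ATT to SUP to BI.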
17 pages, 504 KiB  
Article
Impact of Metaverse Technology on Academic Achievement and Motivation in Middle School Science
by Norah Saleh Mohamed Al-Muqbil
Multimodal Technol. Interact. 2024, 8(10), 91; https://doi.org/10.3390/mti8100091 - 12 Oct 2024
Viewed by 311
Abstract
This study explores the effects of Metaverse technology on middle school learners' academic performance and motivation in science subjects. Utilizing a quasi-experimental design, 33 students in the experimental group were exposed to the Metaverse for one semester, while 32 students in the control group continued with traditional teaching methods at School 148 in Riyadh. Data collection instruments included a validated science achievement test and a motivation scale. The results showed a statistically significant improvement in test scores in the experimental group, which obtained an average post-test score of 73.1, compared with 65.9 in the control group (t = 2.3, p < 0.05). Motivation scores were also significantly higher in the experimental group, with a mean of 26.9, compared with 17.1 in the control group (t = 5.75, p < 0.05). For academic achievement and motivation, the effect sizes were high (fixed effect = 1.091; random effect = 1.086). These results demonstrate the potential of Metaverse technology to transform the way students learn science; it could be a valuable instructional tool in science classes, enhancing performance and positively shaping students' attitudes towards enriched learning environments in schools.
Figures
Figure 1: Number of items in each dimension of the scale.
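As a reference point for the group comparison reported above, here is a minimal independent-samples t-test with an effect size in Python; the score arrays are hypothetical stand-ins, since the listing reports only group means and test statistics:

```python
# Hypothetical illustration of the study's comparison: experimental (n=33)
# vs. control (n=32) post-test science scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experimental = rng.normal(73.1, 10, 33)  # assumed spread; means match abstract
control = rng.normal(65.9, 10, 32)

t, p = stats.ttest_ind(experimental, control)

# Cohen's d using the pooled standard deviation
n1, n2 = len(experimental), len(control)
pooled_sd = np.sqrt(((n1 - 1) * experimental.var(ddof=1)
                     + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (experimental.mean() - control.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```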
25 pages, 18926 KiB  
Article
Enhancing Digital Identity: Evaluating Avatar Creation Tools and Privacy Challenges for the Metaverse
by Jorge Castillo Alcántara, Igor Tasic and Maria-Dolores Cano
Information 2024, 15(10), 624; https://doi.org/10.3390/info15100624 - 10 Oct 2024
Viewed by 644
Abstract
This study explores the process of creating avatars for Virtual Reality and metaverse environments using a variety of specialized applications, including VRChat, Ready Player Me, VRoidStudio, Mixamo, Convai, and MetaHuman. By evaluating these platforms, the research identifies the strengths and limitations of each tool in terms of customization, integration, and overall user experience. The practical implementation focuses on avatar creation within Unity and Unreal Engine, highlighting the technical aspects of rigging, animation, and real-time rendering. The study also delves into the broader implications of avatar use, particularly concerning privacy and security. Our findings reveal that while each platform offers unique advantages, the choice of tool significantly impacts the visual fidelity and performance of avatars in virtual environments. For ease of use, Ready Player Me stands out with its intuitive interface. VRoidStudio is notable for its high degree of customizability, allowing detailed avatar personalization. For high-quality graphics, MetaHuman leads the way with its advanced graphical fidelity. At the same time, while some platforms excel in personalizing avatars or integrating with development environments (VRoidStudio, Ready Player Me), others are constrained by their limited flexibility or platform-specific availability (MetaHuman, Mixamo). Understanding the technical intricacies of these tools is crucial for developing more immersive and secure metaverse experiences.
(This article belongs to the Special Issue Extended Reality and Cybersecurity)
Figures
Figure 1: Main interface of VRChat.
Figure 2: Ready Player Me: (a) avatar creation interface, (b) customization details, and (c) final avatar view.
Figure 3: VRoidStudio: (a) main window and (b) avatar import options.
Figure 4: Mixamo: (a) initial window and (b) some of the avatar download options.
Figure 5: Convai: (a) initial window and (b) avatar creation.
Figure 6: MetaHuman: (a) plugin for Unreal Engine, (b) Creator, (c) Unreal Engine version, and (d) predefined avatars.
Figure 7: MetaHuman: (a) option to mix several avatars, (b) using mix options, and (c) using the surprise emotion in a facial expression.
Figure 8: (a) Assets folder storing all imported avatars and (b) view of several created avatars.
Figure 9: (a) Unity Package Manager, (b) Bone Renderer Setup option activated, and (c) hierarchy tab with all objects in the project.
Figure 10: Example of the movement configuration process for an avatar using the VR IK Rig in Unity.
Figure 11: (a) Project tab with the imported XR Interaction Tool component, (b) empty headset tracking component framed in red, and (c) headset tracking component configured.
Figure 12: Components and configurations for working with PICO avatars, highlighted in red frames.
Figure 13: Configuration of the hand bones using PICO.
Figure 14: Avatar preview window using PICO.
Figure 15: (a) Quixel Bridge opening button in Unreal Engine, (b) MetaHuman window inside Quixel Bridge, (c) MetaHuman download option, (d) download progress bar, (e) dialog boxes prompting to activate missing plugins, (f) MetaHuman imported into the Content Browser, (g) MetaHuman Blueprint in the Content Browser, and (h) functional MetaHuman avatar within the project.
18 pages, 9624 KiB  
Article
A Diffusion Modeling-Based System for Teaching Dance to Digital Human
by Linyan Zhou, Jingyuan Zhao and Jialiang He
Appl. Sci. 2024, 14(19), 9084; https://doi.org/10.3390/app14199084 - 8 Oct 2024
Viewed by 461
Abstract
The introduction of artificial intelligence (AI) has triggered changes in modern dance education. This study investigates the application of diffusion-based modeling and virtual digital humans in dance instruction. Utilizing AI and digital technologies, the proposed system innovatively merges music-driven dance generation with virtual human-based teaching. It achieves this by extracting rhythmic and emotional information from music through audio analysis to generate corresponding dance sequences. The virtual human, functioning as a digital tutor, demonstrates dance movements in real time, enabling students to accurately learn and execute dance postures and rhythms. Analysis of the teaching outcomes, including effectiveness, naturalness, and fluidity, indicates that learning through the digital human results in enhanced user engagement and improved learning outcomes. Additionally, the diversity of dance movements is increased. This system enhances students' motivation and learning efficacy, offering a novel approach to innovating dance education.
(This article belongs to the Special Issue Intelligent Techniques, Platforms and Applications of E-learning)
Figures
Figure 1: Motion synthesis algorithm diagram.
Figure 2: System technology roadmap.
Figure 3: Schematic of the AIST++ dataset.
Figure 4: Schematic of the FineDance dataset.
Figure 5: Diffusion modeling to predict dance sequences.
Figure 6: Transformer model structure.
Figure 7: Music-driven dance generation modeling diagram.
Figure 8: ADMGD-MD module structure diagram.
Figure 9: SMPL model diagram.
Figure 10: Functional module diagram.
Figure 11: Digital human model.
Figure 12: Dance movement sequence diagram.
Figure 13: Digital human national-style dance teaching.
Figure 14: Teaching modern dance moves to the digital human.
Figure 15: Comparison of dance sequences generated by different methods.
Figure 16: Different types of dance sequences.
Figure 17: Rhythm characteristics.
Figure 18: Intensity characteristics.
Figure 19: Comparison of the teaching effects of four types of dance.
Figure 20: Comparison of fluency of dance images generated by four methods.
Figure 21: Comparison of effects at different skill levels.
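The abstract describes generating dance sequences from music with a diffusion model. The paper's architecture is not detailed in this listing, so the following is only a generic sketch of a conditional DDPM reverse (sampling) loop over a pose sequence, with a music embedding as the condition; the denoiser, noise schedule, and tensor shapes are all assumptions:

```python
# Generic conditional diffusion sampling sketch (not the paper's exact model).
import torch

def sample_dance(denoiser, music_embedding, steps=1000, frames=120, joint_dims=72):
    """Reverse DDPM loop: start from noise, iteratively denoise a pose sequence."""
    betas = torch.linspace(1e-4, 0.02, steps)       # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(frames, joint_dims)             # pure-noise motion sequence
    for t in reversed(range(steps)):
        eps = denoiser(x, t, music_embedding)       # predicted noise at step t
        coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise     # ancestral sampling step
    return x  # joint-angle sequence driving the digital human tutor
```

Here `denoiser` stands for a learned network, for instance a Transformer like the one in Figure 6, taking the noisy sequence, the timestep, and the music features.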
21 pages, 10416 KiB  
Review
Examining the Role of Augmented Reality and Virtual Reality in Safety Training
by Georgios Lampropoulos, Pablo Fernández-Arias, Álvaro Antón-Sancho and Diego Vergara
Electronics 2024, 13(19), 3952; https://doi.org/10.3390/electronics13193952 - 7 Oct 2024
Viewed by 775
Abstract
This study aims to provide a review of the existing literature regarding the use of extended reality technologies and the metaverse, focusing on virtual reality (VR), augmented reality (AR), and mixed reality (MR) in safety training. Based on the outcomes, VR was predominantly used in the context of safety training, with immersive VR yielding the best outcomes. In comparison, AR has only recently been introduced in safety training, but with positive outcomes. Both AR and VR can be effectively adopted and integrated in safety training and render learning experiences and environments more realistic, secure, intense, interactive, and personalized, all crucial aspects of high-quality safety training. Their ability to provide safe virtual learning environments in which individuals can practice and develop their skills and knowledge in realistically simulated working settings without risk emerged as one of the main benefits. Their ability to support social and collaborative learning and offer experiential learning significantly contributed to the learning outcomes. It was therefore concluded that VR and AR are effective tools that can support and enrich safety training and, in turn, increase occupational health and safety.
Figures
Figure 1: Document processing.
Figure 2: Annual published documents.
Figure 3: Top affiliations based on the number of documents published.
Figure 4: Country collaboration network.
Figure 5: Country collaboration map.
Figure 6: Most frequent keywords plus.
Figure 7: Most frequent author's keywords.
Figure 8: Keywords plus co-occurrence network.
Figure 9: Relationship between countries, keywords, and sources.
Figure 10: Trend topics based on keywords plus.
Figure 11: Conceptual structure map.
Figure 12: Thematic map of the topic.
Figure 13: Thematic evolution of the topic.
Figure 14: Benefits and requirements of virtual reality when applied in different areas of safety training compared to traditional learning and training methods.
24 pages, 34599 KiB  
Article
Diverse Humanoid Robot Pose Estimation from Images Using Only Sparse Datasets
by Seokhyeon Heo, Youngdae Cho, Jeongwoo Park, Seokhyun Cho, Ziya Tsoy, Hwasup Lim and Youngwoon Cha
Appl. Sci. 2024, 14(19), 9042; https://doi.org/10.3390/app14199042 - 7 Oct 2024
Viewed by 548
Abstract
We present a novel dataset for humanoid robot pose estimation from images, addressing the critical need for accurate pose estimation to enhance human–robot interaction in extended reality (XR) applications. Despite the importance of this task, large-scale pose datasets for diverse humanoid robots remain scarce. To overcome this limitation, we collected sparse pose datasets for commercially available humanoid robots and augmented them through various synthetic data generation techniques, including AI-assisted image synthesis, foreground removal, and 3D character simulations. Our dataset is the first to provide full-body pose annotations for a wide range of humanoid robots exhibiting diverse motions, including side and back movements, in real-world scenarios. Furthermore, we introduce a new benchmark method for real-time full-body 2D keypoint estimation from a single image. Extensive experiments demonstrate that our extended dataset-based pose estimation approach achieves over 33.9% improvement in accuracy compared to using only sparse datasets. Additionally, our method demonstrates the real-time capability of 42 frames per second (FPS) and maintains full-body pose estimation consistency in side and back motions across 11 differently shaped humanoid robots, utilizing approximately 350 training images per robot.
(This article belongs to the Special Issue Computer Vision, Robotics and Intelligent Systems)
Figures
Figure 1: A learning-based full-body pose estimation method for various humanoid robots. The keypoint detector, trained on an extended pose dataset, consistently estimates humanoid robot poses over time from videos, capturing front, side, back, and partial poses, while excluding human bodies. Robots shown: (a) Optimus Gen 2 (Tesla); (b) Apollo (Apptronik); (c) Atlas (Boston Dynamics); (d) Darwin-OP (Robotis); (e) EVE (1X Technologies); (f) FIGURE 01 (Figure); (g) H1 (Unitree); (h) Kepler (Kepler Exploration Robot); (i) Phoenix (Sanctuary AI); (j) TALOS (PAL Robotics); (k) Toro (DLR).
Figure 2: Joint configurations of the humanoid robots in the Diverse Humanoid Robot Pose Dataset (DHRP). Note: Phoenix lacks lower-body data in the dataset.
Figure 3: Example training images from the DHRP dataset for the same eleven robots.
Figure 4: Example images from the arbitrary random humanoid robot dataset; these 2k additional images enhance the diversity of body shapes, appearances, and motions.
Figure 5: Example images from the synthetic dataset: AI-assisted image synthesis using Viggle (first row) and 3D character simulations using Unreal Engine (second row); these 6.7k additional annotations enhance the diversity of motions and scenarios.
Figure 6: Example images from the random background dataset: target robot images and their foreground-removed counterparts generated with Adobe Photoshop Generative Fill. The dataset includes 133 AI-assisted foreground-removed images and 1886 random indoor and outdoor backgrounds, improving the distinction between robots and backgrounds, particularly around metallic objects that resemble robot body surfaces.
Figure 7: Network architecture for the 2D joint detector. From a single input image, the n-stage network generates keypoint coordinates K and confidence heat maps H. At each stage, the hourglass module's output is passed to both the next stage and the Differentiable Spatial-to-Numerical Transform (DSNT) regression module, which produces H and K. In the parsing stage, each keypoint k_j = (x_j, y_j) ∈ K is identified with its associated confidence value c_j = H_j(x_j, y_j).
Figure 8: Qualitative evaluation on selected frames: consistent full-body pose estimation over time across front, side, back, and partial poses for (a) Apollo, (b) Atlas, (c) DARwln-OP, (d) EVE, (e) FIGURE 01, (f) H1, (g) Kepler, (h) Phoenix, (i) TALOS, and (j) Toro.
Figure 9: Qualitative evaluation on miniature humanoid robot models not included in the DHRP dataset, demonstrating that the method extends to other types of humanoid robots.
Figure 10: Common failure cases: (a–c) false part detections caused by nearby metallic objects; (d) false negatives due to similar-looking objects; (e) false negatives in egocentric views caused by the rarity of torso observations; (f) false positives on non-humanoid robots.
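The DSNT head described in the Figure 7 caption turns a heatmap into coordinates differentiably: softmax-normalize the map, then take the expected value of a normalized coordinate grid. A minimal PyTorch sketch of that published technique, not the authors' exact code:

```python
# Soft-argmax / DSNT sketch: heatmaps -> (x, y) coordinates in [-1, 1].
import torch
import torch.nn.functional as F

def dsnt(heatmaps):
    """heatmaps: [batch, joints, H, W] unnormalized scores."""
    b, j, h, w = heatmaps.shape
    probs = F.softmax(heatmaps.reshape(b, j, -1), dim=-1).reshape(b, j, h, w)

    ys = torch.linspace(-1, 1, h).view(1, 1, h, 1)  # normalized row coords
    xs = torch.linspace(-1, 1, w).view(1, 1, 1, w)  # normalized col coords

    x = (probs * xs).sum(dim=(2, 3))                # expected x per joint
    y = (probs * ys).sum(dim=(2, 3))                # expected y per joint
    conf = probs.reshape(b, j, -1).max(dim=-1).values  # peak value as confidence
    return torch.stack([x, y], dim=-1), conf        # [b, j, 2], [b, j]
```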
13 pages, 485 KiB  
Review
Beyond Presence: Exploring Empathy within the Metaverse
by Anjitha Divakaran, Hyung-Jeong Yang, Seung-won Kim, Ji-eun Shin and Soo-Hyung Kim
Appl. Sci. 2024, 14(19), 8958; https://doi.org/10.3390/app14198958 - 4 Oct 2024
Viewed by 389
Abstract
As the metaverse evolves, characterized by its immersive and interactive landscapes, it presents novel opportunities for empathy research. This study aims to systematically review how empathy manifests in metaverse environments, focusing on two distinct forms: specific empathy (context-based) and universal empathy (generalized). Our analysis reveals a predominant focus on specific empathy, driven by the immersive nature of virtual settings, such as virtual reality (VR) and augmented reality (AR). However, we argue that such immersive scenarios alone are insufficient for a comprehensive exploration of empathy. To deepen empathetic engagement, we propose the integration of advanced sensory feedback mechanisms, such as haptic feedback and biometric sensing. This paper examines the current state of empathy in virtual environments, contrasts it with the potential for enriched empathetic connections through technological enhancements, and proposes future research directions. By fostering both specific and universal empathy, we envision a metaverse that not only bridges gaps but also cultivates meaningful, empathetic connections across its diverse user base.
Figures
Figure 1: Review process using PRISMA.
Figure 2: Types of empathy.
Figure 3: Research on specific empathy in the metaverse.
Figure 4: Empathy-eliciting technologies organized by year.
34 pages, 6990 KiB  
Article
A Bibliometric Analysis of Research on the Metaverse for Smart Cities: The Dimensions of Technology, People, and Institutions
by Lele Zhou and Woojong Suh
Systems 2024, 12(10), 412; https://doi.org/10.3390/systems12100412 - 4 Oct 2024
Viewed by 505
Abstract
The “Metaverse” is evaluated as having significant potential in “Smart city” design and operation. Despite growing interest, there is still a lack of comprehensive quantitative analysis of the “Metaverse”, particularly in the context of smart cities. This study conducts a bibliometric analysis of 604 articles selected from the “WoS” database and employs the three dimensions of technology, people, and institutions as a balanced perspective on smart cities, providing a comprehensive understanding of research trends on the “Metaverse” in this context. The study identifies the “Metaverse” as a virtual reality technology that has gained popularity since 2021 and reports the active years, countries, fields, journals, authors, and institutions involved in “Metaverse” research on smart cities. It also identifies three stages of research development: Stage 1 (2007–2013), Stage 2 (2014–2020), and Stage 3 (2021–20 October 2023), revealing an evolution of the research focus from basic “urban planning” to complex “urban governance” and “Smart city” construction that considers multiple stakeholders' perspectives. Additionally, the study reveals that “Metaverse” research on the “technology” dimension has consistently outnumbered that on “institutions” and “people” across all stages in the “Smart city” domain. These findings address current theoretical gaps and offer a foundation for future research.
(This article belongs to the Section Systems Practice in Social Science)
Figures
Figure 1: Research stages of the bibliometric analysis. Note: PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses.
Figure 2: PRISMA diagram adapted to illustrate the document selection process; the symbol "#" distinguishes specific keyword groups.
Figure 3: Research on the "Metaverse" related to the "Smart city" from 2007 to 20 October 2023 in "WoS".
Figure 4: The top 15 productive countries in this body of research.
Figure 5: Documents by research area.
Figure 6: Productive journals.
Figure 7: Productive authors.
Figure 8: Productive affiliations.
Figure 9: Word cloud created in the keyword analysis.
Figure 10: Keyword visualization for Stage 1 (2007–2013).
Figure 11: Top 30 keyword visualization for Stage 2 (2014–2020).
Figure 12: Top 30 keyword visualization for Stage 3 (2021–20 October 2023).
Figure 13: Extracted topics and their classification by Smart city dimension.
Figure 14: Topic distribution by Smart city dimension.
Figure 15: Topic distribution by research stage.
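A core step in a keyword analysis like the one visualized in Figures 10–12 is counting keyword co-occurrences across documents. A minimal sketch with a toy document list standing in for the 604 WoS records, which are not reproduced here:

```python
# Toy keyword co-occurrence count, the raw material of co-word maps.
from collections import Counter
from itertools import combinations

docs = [  # hypothetical author-keyword lists, one per article
    ["metaverse", "smart city", "digital twin"],
    ["metaverse", "virtual reality", "urban governance"],
    ["smart city", "digital twin", "urban planning"],
]

pair_counts = Counter()
for keywords in docs:
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

for (a, b), n in pair_counts.most_common(5):
    print(f"{a} -- {b}: {n}")
```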
25 pages, 1188 KiB  
Article
Adoption and Continuance in the Metaverse
by Donghyuk Shin and Hyeon Jo
Electronics 2024, 13(19), 3917; https://doi.org/10.3390/electronics13193917 - 3 Oct 2024
Viewed by 386
Abstract
The burgeoning metaverse market, encompassing virtual and augmented reality, gaming, and manufacturing processes, presents a unique domain for studying user behavior. This study delineates a research framework to investigate the antecedents of behavioral intention, bifurcating users into inexperienced and experienced cohorts. Utilizing a cross-sectional survey, empirical data were amassed and analyzed using structural equation modeling, encompassing 372 responses from 131 inexperienced and 241 experienced users. For inexperienced users, the analysis underscored the significant impact of perceived usefulness on both satisfaction and adoption intention, while perceived enjoyment was found to bolster only satisfaction. Innovativeness and satisfaction do not drive adoption intention. Conversely, for experienced users, satisfaction was significantly influenced by perceived ease of use, perceived usefulness, and perceived enjoyment. Continuance intention was positively affected by perceived usefulness, perceived enjoyment, trust, innovativeness, and satisfaction. This research extends valuable insights for both theoretical advancements and practical implementations in the burgeoning metaverse landscape.
(This article belongs to the Special Issue Metaverse and Digital Twins, 2nd Edition)
Figures
Figure 1: Research model.
Figure 2: Analysis results (inexperienced users).
Figure 3: Analysis results (experienced users).
26 pages, 3565 KiB  
Article
Analyzing VR Game User Experience by Genre: A Text-Mining Approach on Meta Quest Store Reviews
by Dong-Min Yoon, Seung-Hyun Han, Inyoung Park and Tae-Sung Chung
Electronics 2024, 13(19), 3913; https://doi.org/10.3390/electronics13193913 - 3 Oct 2024
Viewed by 484
Abstract
With the rapid expansion of the virtual reality (VR) market, user interest in VR games has increased significantly. However, empirical research on the user experience in VR games remains relatively underdeveloped. Despite the growing popularity and commercial success of VR gaming, there is a lack of comprehensive studies analyzing the impact of different aspects of VR games on user satisfaction and engagement. This gap includes insufficient research on the categorization of VR game genres, the identification of user challenges, and variations in user experiences across these genres. Our study aims to fill this gap by analyzing data from the Meta Quest store using K-means clustering and LDA (Latent Dirichlet Allocation) to categorize the representative genres of VR games. By employing text-mining techniques to conduct a detailed analysis of user experience, we effectively elucidate the primary issues and nuanced differences in user responses across various genres. Our findings serve as a valuable reference for researchers aiming to design games that align with VR user expectations. Furthermore, our study provides a foundational dataset for researchers aiming to enhance the user experience in VR games and suggests ways to increase the immersion and enjoyment of VR gameplay.
Figures
Figure 1: Steam users' share of VR headsets by device.
Figure 2: Data flow.
Figure 3: Games per genre.
Figure 4: Optimal K via silhouette scores.
Figure 5: Representative games by genre (maximum sales criteria).
Figure 6: Coherence and perplexity scores for each genre.
Figure 7: Similarity matrix between genres.
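The two-stage pipeline named in the abstract (K-means to form genre groups, then LDA over reviews) can be sketched with scikit-learn. This is a generic illustration under assumed inputs, not the authors' code; the game descriptions, reviews, and cluster counts are toy placeholders:

```python
# Sketch: cluster games into genres, then mine review topics with LDA.
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

game_descriptions = ["zombie shooter wave survival", "rhythm music slicing",
                     "puzzle escape room", "boxing fitness workout"]
reviews = ["great tracking but motion sickness", "immersive audio and haptics",
           "controllers drift during fast swings", "comfortable for long sessions"]

# Stage 1: group games on TF-IDF features (the study picked K via silhouette).
X = TfidfVectorizer().fit_transform(game_descriptions)
genres = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Stage 2: LDA topics over user reviews (per-genre in the study; pooled here).
vec = CountVectorizer()
counts = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    print(f"topic {k}:", [terms[i] for i in comp.argsort()[-3:][::-1]])
```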
22 pages, 1710 KiB  
Review
Decentralized Identity Management for Metaverse-Enhanced Education: A Literature Review
by Maria Polychronaki, Michael G. Xevgenis, Dimitrios G. Kogias and Hellen C. Leligou
Electronics 2024, 13(19), 3887; https://doi.org/10.3390/electronics13193887 - 30 Sep 2024
Viewed by 364
Abstract
As we transition into the era of Web 3.0, where decentralized information and user privacy are paramount, emerging technologies are reshaping the way in which personal data are managed. This paper focuses on decentralized identity management (DID) in the metaverse, particularly within the education sector, which has rapidly embraced digital tools for e-learning, especially since the COVID-19 pandemic. Technologies such as blockchain, artificial intelligence (AI), and virtual and augmented reality (VR/AR) are increasingly integrated into educational platforms, raising questions about privacy, security, and interoperability. This literature review examines the current landscape of DID in metaverse-based educational applications. Through a systematic methodology, relevant academic papers were identified, filtered, and analyzed based on four key criteria: standardization, interoperability, application scalability, and security/privacy considerations. The paper provides a comparative analysis of these papers to assess the maturity of DID implementations, highlight existing challenges, and suggest future research directions in the intersection of decentralized identity and educational metaverse applications.
Figures
Figure 1: Methodology stages.
Figure 2: Target area of the literature review.
Figure 3: Literature database search results after filtering.
Figure 4: Focus area graph of the referenced literature.
Figure 5: Type of contribution graph of the referenced literature.
Figure 6: DIM standardization and interoperability graph of the referenced literature.
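For readers unfamiliar with the data structures behind decentralized identity, below is an illustrative W3C DID document written as a Python dict; the identifier, key material, and service endpoint are placeholders invented for this sketch and are not drawn from the reviewed papers:

```python
# Illustrative W3C DID document shape; every value here is a placeholder.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:student123",                  # hypothetical student DID
    "verificationMethod": [{
        "id": "did:example:student123#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:student123",
        "publicKeyMultibase": "z6Mk...",             # elided key material
    }],
    "authentication": ["did:example:student123#key-1"],
    "service": [{                                    # hypothetical service entry
        "id": "did:example:student123#credentials",
        "type": "CredentialRepository",              # e.g., issued diplomas
        "serviceEndpoint": "https://university.example/credentials",
    }],
}
```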
10 pages, 255 KiB  
Entry
The Metaverse Territorial Brand: A Contemporary Concept
by Giovana Goretti Feijó Almeida
Encyclopedia 2024, 4(4), 1472-1481; https://doi.org/10.3390/encyclopedia4040095 - 29 Sep 2024
Viewed by 463
Definition
The “Metaverse Territorial Brand” integrates core and interconnected elements into a virtual, interactional, experiential, and immersive space known as the metaverse. This type of brand encompasses the connection with immersive territories that may or may not be digital twins of real territories. It also encompasses two interconnected physical scales: the territorial and the regional, involved in another type of emerging territorial scale, known as the metaversal scale. Therefore, the “Metaverse Territorial Brand” is a digital-immersive extension of the territorial brand of physical territories, encompassing specific geographical and cultural aspects, but directed to the metaverse environment. This brand is a symbolic digital construction, but also a multifaceted one that incorporates discursive and visual elements, articulated by the social actors of the immersive territory, aiming to create a specific and distinct identity for a space in the metaverse. When talking about social actors in the metaverse (users), we highlight that this set of actors may or may not be the same as the physical territory. It is also important to highlight that both the territorial brand directed to physical territories and the “Metaverse Territorial Brand” are formed from the power relations of a given set of social actors. Therefore, without the strategic intention of a plurality of social actors that stimulate these relationships, there is no type of territorial brand involved.
(This article belongs to the Collection Encyclopedia of Social Sciences)
27 pages, 28326 KiB  
Article
Full-Body Pose Estimation of Humanoid Robots Using Head-Worn Cameras for Digital Human-Augmented Robotic Telepresence
by Youngdae Cho, Wooram Son, Jaewan Bak, Yisoo Lee, Hwasup Lim and Youngwoon Cha
Mathematics 2024, 12(19), 3039; https://doi.org/10.3390/math12193039 - 28 Sep 2024
Cited by 1 | Viewed by 450
Abstract
We envision a telepresence system that enhances remote work by facilitating both physical and immersive visual interactions between individuals. However, during robot teleoperation, communication often lacks realism, as users see the robot’s body rather than the remote individual. To address this, we propose a method for overlaying a digital human model onto a humanoid robot using XR visualization, enabling an immersive 3D telepresence experience. Our approach employs a learning-based method to estimate the 2D poses of the humanoid robot from head-worn stereo views, leveraging a newly collected dataset of full-body poses for humanoid robots. The stereo 2D poses and sparse inertial measurements from the remote operator are optimized to compute 3D poses over time. The digital human is localized from the perspective of a continuously moving observer, utilizing the estimated 3D pose of the humanoid robot. Our moving camera-based pose estimation method does not rely on any markers or external knowledge of the robot’s status, effectively overcoming challenges such as marker occlusion, calibration issues, and dependencies on headset tracking errors. We demonstrate the system in a remote physical training scenario, achieving real-time performance at 40 fps, which enables simultaneous immersive and physical interactions. Experimental results show that our learning-based 3D pose estimation method, which operates without prior knowledge of the robot, significantly outperforms alternative approaches requiring the robot’s global pose, particularly during rapid headset movements, achieving markerless digital human augmentation from head-worn views.
(This article belongs to the Topic Extended Reality: Models and Applications)
Figures
Figure 1: A digital human-augmented robotic telepresence system that overlays a remote person onto a humanoid robot using only head-worn cameras worn by the local user, enabling immersive 3D telepresence for continuously moving observers without markers or knowledge of the robot's status. Top left: the local user interacts visually with the digital avatar through a head-mounted display (HMD) while physically interacting with the humanoid robot. Top right: the remote user communicates via video conferencing through the HMD. Bottom row (1×3 image group): the remote user operates the robot using wireless motion capture.
Figure 2: System overview: the remote user (left) and the local user (right) are in separate spaces and wirelessly connected; the local user interacts with the remote person through a visually immersive digital human avatar and a physically interactive humanoid robot.
Figure 3: Humanoid robots used in the system: (a) Version 1 with both arms and a head; (b) Version 2 with both arms, legs, and a head; (c) Version 3 with a full body and both hands.
Figure 4: The digital human avatar, rescaled to match the size of the humanoid robot in real-world scale.
Figure 5: Telepresence and capture prototypes: (a) the remote person, with six IMUs for body motion capture and an Apple Vision Pro for video conferencing; (b) the local person, with XReal Light for Augmented Reality (AR) and stereo cameras mounted on top of the HMD for local space observations.
Figure 6: Network structure for the 2D joint detector: given a single input image, the 4-stage network outputs keypoint coordinates K and confidence maps H; the hourglass module outputs unnormalized heatmaps H̃, which are propagated to the next stage, and the DSNT regression module normalizes H̃ and produces H and K.
Figure 7: Full-body 3D pose estimation pipeline: heterogeneous sensor data from the XR headset and stereo camera rig worn by the local user, fused with inertial sensors worn by the remote person, yield the digital human's full-body pose from the local user's perspective.
Figure 8: Qualitative evaluation of the joint detector on selected frames; full-body joints are identified accurately even during fast head movements with motion blur, when parts of the robot are out of frame, or at varying distances, allowing close-range interactions.
Figure 9: Qualitative evaluation in world space, comparing SMPL overlays from (b) Mocap using ground truth lower-body pose, root rotation, and position, (c) Smplify using ground truth position, and (d) the proposed method operating without external knowledge; the checkerboard on the ground is used solely for external camera calibration.
Figure 10: Qualitative evaluation on head-worn views, showing the same comparisons as Figure 9 from the input camera perspective.
Figure 11: Selected frames of digital human visualizations captured directly from the head-mounted display, photographed externally to show the view as seen through the XR glasses.
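One concrete step in the pipeline above is lifting matched 2D keypoints from the head-worn stereo pair into 3D. A minimal sketch using OpenCV's linear triangulation under assumed calibration; the paper's actual method additionally fuses the remote operator's inertial measurements and optimizes poses over time:

```python
# Triangulate per-joint stereo detections into 3D camera-space points.
import numpy as np
import cv2

# Assumed 3x4 projection matrices (intrinsics folded in); here: identity
# intrinsics and a 6 cm horizontal baseline, stand-ins for real calibration.
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])

# Matched keypoints in normalized image coords; rows are x and y, columns joints.
pts_left = np.array([[0.10, -0.05], [0.02, 0.08]])
pts_right = np.array([[0.04, -0.11], [0.02, 0.08]])

homog = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)  # [4, n]
joints_3d = (homog[:3] / homog[3]).T  # dehomogenize -> [n_joints, 3]
print(joints_3d)  # both toy joints land ~1 m in front of the left camera
```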
47 pages, 20404 KiB  
Systematic Review
A Systematic Review on Extended Reality-Mediated Multi-User Social Engagement
by Yimin Wang, Daojun Gong, Ruowei Xiao, Xinyi Wu and Hengbin Zhang
Systems 2024, 12(10), 396; https://doi.org/10.3390/systems12100396 - 26 Sep 2024
Viewed by 1182
Abstract
The metaverse represents a post-reality universe that seamlessly merges physical reality with digital virtuality. It provides a continuous and immersive social networking environment, enabling multi-user engagement and interaction through Extended Reality (XR) technologies, which include Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). As a novel solution distinct from traditional methods such as mobile-based applications, the technical affordance of XR technologies in shaping multi-user social experiences remains a complex, multifaceted, and multivariate issue that has not yet been thoroughly explored. Additionally, there is a notable absence of mature frameworks and guidelines for designing and developing these multi-user socio-technical systems. Enhancing multi-user social engagement through these technologies remains a significant research challenge. This systematic review aims to address this gap by establishing an analytical framework guided by the PRISMA protocol. It analyzes 88 studies from various disciplines, including computer science, social science, psychology, and the arts, to define the mechanisms and effectiveness of XR technologies in multi-user social engagement. Quantitative methods such as descriptive statistics, correlation statistics, and text mining are used to examine the manifestation of mechanisms, potential system factors, and their effectiveness. Meanwhile, qualitative case studies identify specific measures by which system factors enhance multi-user social engagement. The study provides a pioneering framework for theoretical research and offers practical insights for developing cross-spatiotemporal co-present activities in the metaverse. It also promotes critical reflection on the evolving relationship between humans and this emerging digital universe.
(This article belongs to the Special Issue Value Assessment of Product Service System Design)
Figures
Figure 1: The definition of XR systems, based on the Reality-Virtuality Continuum and the classification by Tremosa and the Interaction Design Foundation.
Figure 2: The analysis framework of XR-mediated MSE: (A) the cognitive–behavioral mechanism of multi-user social engagement; (B) the technology-mediated mechanism of extended reality systems.
Figure 3: MSE-related topics in the collected samples.
Figure 4: XR-related topics in the collected samples.
Figure 5: Alluvial map of application domains and XR types.
Figure 6: Alluvial map of application domains and MSE scales.
Figure 7: The data distribution of components in MSE's cognitive–behavioral mechanism. In the variable 'Cues': (1) verbal communication, (2) facial expression, (3) gesture, (4) touch, (5) posture, (6) object control, (7) information sharing. Each bar contains sub-bars showing how often each manifestation occurs (e.g., total N of Autonomy = 88, intrinsic N = 59).
Figure 8: The data distribution of SFs in the XR-mediated mechanism. In the variable 'Avatar': N = non-body, FV = full visual body, FP = full physical body, PP = part physical body, PV = part visual body. Each bar contains sub-bars showing how often each manifestation occurs (e.g., total N of Interaction Targets = 88, Human N = 72).
Figure 9: Heatmap of the correlations between components of the MSE and SFs of XR; '*' marks significance at the 0.05 level (p < 0.05), '**' at the 0.01 level (p < 0.01).
Figure 10: Analysis of goals, types, and durations; shading indicates whether a sample ranked first, second, or third for the analyzed item.
Figure 11: Analysis of participant size.
Figure 12: Analysis of research methods; the pie sectors sum to 99.7% rather than 100% due to a very slight rounding difference, which does not affect the conclusions.
Figure 13: MSE-related metric network.
Figure 14: XR-related metric network.
Figure 15: Positive MSE-related effects.
Figure 16: Positive XR-related effects.
Figure 17: Negative MSE-related effects.
Figure 18: Negative XR-related effects.
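The heatmap in Figure 9 pairs Pearson coefficients with significance stars. A minimal sketch of computing one such starred correlation, with toy values standing in for the 88 coded studies:

```python
# Pearson correlation with significance stars, as in correlation heatmaps.
import numpy as np
from scipy import stats

# Toy coded variables, e.g., avatar fidelity vs. reported co-presence
avatar_fidelity = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 5])
co_presence = np.array([2, 2, 3, 3, 4, 3, 5, 4, 5, 5])

r, p = stats.pearsonr(avatar_fidelity, co_presence)
stars = "**" if p < 0.01 else "*" if p < 0.05 else ""
print(f"r = {r:.2f}{stars} (p = {p:.3f})")
```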
27 pages, 3152 KiB  
Article
Innovative Integration of Poetry and Visual Arts in Metaverse for Sustainable Education
by Ji-yoon Kim and Han-sol Kim
Educ. Sci. 2024, 14(9), 1012; https://doi.org/10.3390/educsci14091012 - 15 Sep 2024
Viewed by 645
Abstract
The rapid advancement of digital technology has necessitated a reevaluation of traditional educational methodologies, particularly in literature and visual arts. This study investigates the application of metaverse technology to integrate contemporary poetry and visual arts, aiming to enhance university-level education. The purpose is to develop a convergent teaching method that leverages the immersive and interactive capabilities of the metaverse. The research involves a joint exhibition project with students from Sangmyung University and international participants, incorporating a metaverse-based educational program. A sample of 85 students participated in the program, and their experiences were evaluated through surveys and focus group interviews (FGIs). The findings reveal significant correlations between content satisfaction and method satisfaction, underscoring the importance of engaging and interactive methods. The study also identifies technical challenges and provides insights for optimizing digital platforms for educational purposes. The implications suggest that integrating metaverse technology in arts education can significantly enhance creativity, critical thinking, and interdisciplinary skills, offering a sustainable and innovative approach to modern education. Based on these implications, this paper proposes methods for incorporating the insights gained from case analyses into the design of educational programs. This approach is anticipated to enhance the quality of convergence education in higher education institutions. Furthermore, the program is expected to serve as a starting point for the systematic implementation of integrated education and the use of digital platforms, thereby helping to reduce disparities in integrated education between countries.
(This article belongs to the Special Issue Technology-Based Immersive Teaching and Learning)
Figures
Figure 1: Work by student Jo Eun-young. Translation of the poem: "Robots, like insects with metal plates covering their entire bodies, might emerge. New, well-refined cockroaches might appear."
Figure 2: Entrance to the metaverse exhibition hall (the Korean in the image reads "2023 KF Public Diplomacy Project").
Figure 3: Scene from the metaverse exhibition (the Korean on screen is an instruction menu: "move", "visit book", "setting").
Figure 4: Avatar viewing the metaverse exhibition (same on-screen instruction menu).
Figure 5: Basic statistical analysis of responses to questions Q1_1 to Q1_4.
Figure 6: Basic statistical analysis of responses to questions Q2_1 to Q2_3.
Figure 7: Basic statistical analysis of responses to questions Q3_1 to Q3_2.
Figure 8: Correlation matrix.
Figures 9–12: Visualizations of the structural equation modeling analysis of responses to questions Q1_1 to Q6_2 (parts 1–4).