Journal Description
Multimodal Technologies and Interaction is an international, peer-reviewed, open access journal on multimodal technologies and interaction, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APCs) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Inspec, dblp Computer Science Bibliography, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Cybernetics) / CiteScore - Q2 (Neuroscience (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 14.5 days after submission; acceptance to publication takes 4.9 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.4 (2023)
Latest Articles
Impact of Metaverse Technology on Academic Achievement and Motivation in Middle School Science
Multimodal Technol. Interact. 2024, 8(10), 91; https://doi.org/10.3390/mti8100091 - 12 Oct 2024
Abstract
This study explores the effects of Metaverse technology on middle school learners’ academic performance and motivation in science subjects. Utilizing a quasi-experimental design, 33 students in the experimental group were exposed to the Metaverse for one semester, while 32 students in the control group continued with traditional teaching methods at School 148 in Riyadh. Data collection instruments included a validated science achievement test and a motivation scale. The results demonstrated a statistically significant improvement in the experimental group, which obtained an average post-test score of 73.1, compared with 65.9 in the control group (t = 2.3, p < 0.05). Motivation scores were also higher in the experimental group (mean 26.9) than in the control group (mean 17.1; t = 5.75, p < 0.05). For academic achievement and motivation, the effect sizes were large: fixed effect = 1.091; random effect = 1.086. These results demonstrate the potential of Metaverse technology to transform the way students learn science. It could be a valuable instructional tool in science classes, enhancing performance and positively influencing students’ attitudes towards enhanced learning environments in schools.
Full article
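The abstract above reports a two-sample t test and a standardized effect size. As an illustration only (the study's raw scores are not published here), the following sketch shows how such statistics are computed from two hypothetical score lists; the data and function names are made up for this example.

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

# Hypothetical post-test scores, not the study's data.
experimental = [78, 71, 75, 70, 74, 72, 77, 69]
control = [66, 64, 68, 63, 67, 65, 62, 70]

t = welch_t(experimental, control)
d = cohens_d(experimental, control)
```

A positive t with a small p-value (looked up from the t distribution with the appropriate degrees of freedom) indicates that the experimental group scored higher; a d near 1 or above is conventionally read as a large effect.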
Open Access Article
Predictive Gaze Analytics: A Comparative Case Study of the Foretelling Signs of User Performance during Interaction with Visualizations of Ontology Class Hierarchies
by
Bo Fu
Multimodal Technol. Interact. 2024, 8(10), 90; https://doi.org/10.3390/mti8100090 - 12 Oct 2024
Abstract
The current research landscape in ontology visualization has largely focused on tool development, yielding an extensive array of visualization tools. Although many existing solutions provide multiple ontology visualization layouts, there is limited research on adapting to an individual user’s performance, despite successful applications of adaptive technologies in related fields, including information visualization. In an effort to innovate beyond traditional one-size-fits-all visualizations, this paper contributes one step towards realizing user-adaptive visualization by recognizing timely moments when users may need intervention, as real-time adaptation can only occur if it is possible to correctly predict user success and failure during an interaction in the first place. In addition, an open-source, reusable, and extensible software tool, the Beach Environment for the Analytics of Human Gaze (BEACH-Gaze), is made available to the broader scientific community interested in descriptive and predictive gaze analytics. Building on a wealth of research in eye tracking, this paper compares four approaches to predictive gaze analytics through a series of experiments that utilize scheduled gaze digests, irregular gaze events, the last known gaze status, and all gaze captured for a user at a given moment in time. The results from a set of experimental trials suggest that irregular gaze events are most informative for early predictions of user performance, whereas cognitive workload appears to be most indicative of overall user performance in the task scenario presented in this paper. These empirical findings highlight the influence of the chosen analytical approach on gaze-based user predictions and indicate that it warrants careful consideration in practice.
Full article
Figure 1: Descriptive gaze measures, illustrating fixations, saccadic lengths, relative and absolute angles.
Figure 2: Taking a scheduled digest view of the gaze data using a tumbling window that is non-overlapping and fixed in size.
Figure 3: Taking the most recent snapshot view of the gaze data using a hopping window that is overlapping and fixed in size.
Figure 4: Taking an event-based view of the gaze data using a session window that is non-overlapping and non-fixed in size.
Figure 5: Taking a cumulative view of the gaze data using an expanding window that is overlapping and non-fixed in size.
Figure 6: BEACH-Gaze workbench with a simple interface to support researchers exploring gaze-based predictions using digests, events, and snapshots of gaze, as well as cumulative gaze.
Figure 7: Mapping tasks. Participants were asked to answer true or false questions in drop-down menus for each given mapping (equivalent relations between ontology classes exclusively) and to create new mappings if the existing set was deemed incomplete. (a) Mapping tasks in the Conference domain; (b) Mapping tasks in the Biomedical domain.
Figure 8: Examples of the indented list visualization, where classes of an ontology pair in the Conference domain were visualized to the participants side by side to assist with the tasks shown in Figure 7a. (a) The source ontology visualized as an indented list; (b) The target ontology visualized as a separate indented list.
Figure 9: Examples of node-link diagrams, where classes of an ontology pair in the Biomedical domain were visualized to the participants side by side to assist with the tasks shown in Figure 7b. (a) The source ontology visualized as a node-link diagram; (b) The target ontology visualized as a separate node-link diagram.
Figure 10: Layout of the mapping tasks and ontology visualizations on one computer screen shown to each participant.
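Figures 2 through 5 describe four windowed views of a gaze stream. A minimal sketch of those four windowing schemes over a sorted list of timestamped samples follows; the function names and parameters are illustrative, not BEACH-Gaze's actual API.

```python
def tumbling(samples, size):
    """Scheduled digest view: non-overlapping, fixed-size windows."""
    return [samples[i:i + size] for i in range(0, len(samples), size)]

def hopping(samples, size, hop):
    """Most recent snapshot view: overlapping, fixed-size windows."""
    return [samples[i:i + size] for i in range(0, len(samples) - size + 1, hop)]

def session(samples, gap):
    """Event-based view: non-overlapping, variable-size windows that are
    split wherever two consecutive samples lie more than `gap` apart
    (assumes a non-empty, sorted list of timestamps)."""
    windows, current = [], [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        if cur - prev > gap:
            windows.append(current)
            current = []
        current.append(cur)
    windows.append(current)
    return windows

def expanding(samples):
    """Cumulative view: overlapping windows that grow with each sample."""
    return [samples[:i + 1] for i in range(len(samples))]
```

For example, tumbling([0, 1, 2, 5, 6, 10, 11, 12], 3) cuts the stream into three fixed digests, while session with gap 2 splits the same stream at its two larger pauses.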
Open Access Systematic Review
Virtual Reality as an Interactive Tool for the Implementation of Mindfulness in University Settings: A Systematic Review
by
Paula Puente-Torre, Vanesa Delgado-Benito, Sonia Rodríguez-Cano and Miguel Ángel García-Delgado
Multimodal Technol. Interact. 2024, 8(10), 89; https://doi.org/10.3390/mti8100089 - 11 Oct 2024
Abstract
Over the last few years, research interest in Mindfulness has grown exponentially, as it has demonstrated various benefits for mental health, although difficulties remain in putting these techniques into practice among the university population. Virtual Reality, however, is emerging as a tool to improve the implementation of these techniques. For this reason, a systematic review was carried out of studies that analyze the impact of using Virtual Reality to implement Mindfulness techniques that contribute to improving mental health among the university population at national and international levels. For this review, international reference databases such as Web of Science and Scopus were searched, and all selected articles had to be published between 2010 and 2024. The selected publications had to be primary research involving a Mindfulness intervention, carried out among university students, with Virtual Reality as the main implementation tool. A total of seventy-eight studies were initially identified, from which fourteen were selected, as the rest did not meet the inclusion criteria. In sum, the results show that the use of Virtual Reality as a tool for implementing Mindfulness techniques is effective in reducing and mitigating high levels of anxiety, depression, and stress among university students. All of the research analyzed shows a substantial improvement in the quality of life, mental health, and life satisfaction of the participants.
Full article
Figure 1: Flow chart of scientific articles for the elaboration of the systematic review according to the PRISMA method [16].
Figure 2: Distribution of the publications analyzed according to country.
Figure 3: Evolution of scientific publications by year.
Figure 4: Key concepts of the systematic review.
Open Access Article
An In-Depth Evaluation of Educational Burst Games in Relation to Learner Proficiency
by
Ashish Amresh, Vipin Verma and Michelle Zandieh
Multimodal Technol. Interact. 2024, 8(10), 88; https://doi.org/10.3390/mti8100088 - 11 Oct 2024
Abstract
Game-based learning assessments rely on educational data mining approaches, such as stealth assessments and quasi-mixed methods, that help gather data on student learning proficiency. Rarely do we see approaches where student learning proficiency is woven into the game’s design. Educational burst games (EBGs) represent a new approach to improving learning proficiency by designing fast-paced, short, repetitive, and skill-based games. They have the potential to be effective learning interventions both during instruction in the classroom and during after-school activities such as assignments and homework. Over five years, we have developed two EBGs aimed at improving linear algebra concepts among undergraduate students. In this study, we present the results of an in-depth evaluation of the two EBGs, conducted with 45 participants representing our target population. We discuss the role of EBGs and their design constructs, such as pace and repetition, the effect of the format (2D vs. 3D), the complexity of the levels, and the influence of prior knowledge on the learning outcomes.
Full article
Figure 1: The red line (guided path) showing the player trajectory in the bunny game.
Figure 2: The guided path showing the player trajectory in the pirate game.
Figure 3: The tutorial level in the bunny game.
Figure 4: The tutorial level in the pirate game.
Figure 5: The flowchart showing the participant workflow.
Figure 6: Box plots for bunny and pirate games’ individual levels. Red lines indicate the average across all the stages within the level.
Figure 7: Averaged box plots for bunny and pirate game levels. Red lines indicate the average across all the levels.
Figure 8: Averaged box plots for games based on game format and gameplay order. Red lines indicate the average across the conditions.
Figure 9: Averaged box plots for bunny and pirate games based on prior knowledge.
Figure 10: Histogram for bunny and pirate games based on prior knowledge.
Open Access Article
Assessment of Purchasing Influence of Email Campaigns Using Eye Tracking
by
Evangelia Skourou and Dimitris Spiliotopoulos
Multimodal Technol. Interact. 2024, 8(10), 87; https://doi.org/10.3390/mti8100087 - 10 Oct 2024
Abstract
Most people struggle to articulate why a promotional email they are exposed to influences them to make a purchase. Marketing experts and companies find it beneficial to understand these reasons, even if consumers themselves cannot express them, by using neuromarketing tools, specifically eye tracking. This study analyses various types of email campaigns and their metrics and explores neuromarketing techniques to examine how recipients view promotional emails. It deploys eye tracking to investigate and verify user attention, gaze, and behaviour. As a result, this approach assesses which elements of an email influence consumer purchasing decisions and which elements capture attention the most. Furthermore, this study examines the influence of salary and of the multiple-choice series of emails on consumer purchasing choices. The findings reveal that only the row that people choose to see in an email affects their purchasing decisions. In promotional emails, the title and brand play a significant role, while in welcome emails the main factor is primarily the title. Through web eye tracking, it is found that, in both promotional and welcome emails, large images captivate consumers the most. Finally, this work proposes ideas for improving emails in similar campaigns.
Full article
Figure 1: Emails set as stimuli for the experiment in the order they appeared to the consumers.
Figure 2: Sequence of emails.
Figure 3: Reasons why participants chose to view their first choice.
Figure 4: Heatmaps for participants who stated that they were captivated by the discount code in the Tommy Hilfiger email without measurements.
Figure 5: Heatmaps for participants of the “About You” email.
Figure 6: The results of the Chi-square (χ²) test for the variables: (a) final intention to visit the online store, and (b) first choice of email.
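Figure 6 reports a Chi-square test of independence. For reference, here is a sketch of the Pearson chi-square statistic for a 2x2 contingency table; the counts used below are made up for illustration, not the study's data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table,
    given as [[a, b], [c, d]] observed counts."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat
```

The statistic is compared against a chi-square distribution with one degree of freedom; a larger value means a stronger departure from independence between the two variables.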
Open Access Article
A Virtual Reality Game-Based Approach for Shoulder Rehabilitation
by
Moisés Moreira, Estela Vilhena, Vítor Carvalho and Duarte Duque
Multimodal Technol. Interact. 2024, 8(10), 86; https://doi.org/10.3390/mti8100086 - 4 Oct 2024
Abstract
In recent years, with widespread access to virtual reality (VR) headsets, VR has become an affordable supplement to physiotherapy. Researchers have explored the use of existing commercial games, or developed new ones, to enhance physiotherapy sessions, finding that players exhibit reduced nervousness, report less pain, and experience increased enjoyment. However, ensuring consistent exercise adherence poses a challenge. Another area of interest involves integrating robots to aid patients. In our study, we integrated a Kuka LBR Med 7 R800 with Unity through a purpose-built Application Programming Interface (API). This fusion of robotics and video games assists in physiotherapeutic exercises. The games were developed specifically for the Oculus Quest 2 virtual reality headset, chosen as the preferred VR platform for this study. Two games, using common game-design concepts with distinct approaches, were evaluated for system acceptance via the Technology Acceptance Model (TAM) and for usability via the System Usability Scale (SUS). In a group of 15 participants with an average age of 22 years, greater technology acceptance was observed among women. Those playing more hours per day reported lower perceived ease of use, though one game achieved an excellent SUS rating of 83.3. Conversely, the other game, tested with 11 participants with an average age of 20 years, showed a potential negative impact on behavioral intention. The particular sample used in the study has limitations, so the study should be repeated to obtain more reliable and conclusive results. In conclusion, the successful integration of VR and robot assistance in physiotherapy games relies on the proper application of game design principles.
Full article
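The SUS rating of 83.3 cited above is presumably computed with the standard System Usability Scale scoring rule: each of ten 1-5 Likert items is normalized to a 0-4 contribution and the sum is scaled by 2.5 to give a 0-100 score. A sketch:

```python
def sus_score(responses):
    """System Usability Scale score from ten Likert responses (1-5).

    Odd-numbered items (index 0, 2, ...) are positively worded and
    contribute (response - 1); even-numbered items are negatively
    worded and contribute (5 - response). The sum is scaled to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5
```

Scores around 68 are conventionally treated as average usability, which is why 83.3 is described as excellent.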
Figure 1: Movement adaptation and incorporation in the game “Standing Row”.
Figure 2: Score example for the “Standing Row” exercise.
Figure 3: Movement adaptation and incorporation in the game “External Rotation”.
Figure 4: The Technology Acceptance Model (TAM).
Figure 5: The Spearman correlation results applied to the TAM model, “Standing Row”. ** Significant correlation at a significance level of 1%; * significant correlation at a significance level of 5%.
Figure 6: The Spearman correlation results applied to the TAM model, “External Rotation”. ** Significant correlation at a significance level of 1%.
Figure 7: Detailed results in a plot format.
Figure 8: Per-item chart.
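Figures 5 and 6 report Spearman correlations between TAM constructs. A minimal sketch of the Spearman rank correlation coefficient, assuming no tied values (real analyses, including the one above, typically use a statistics package that handles ties):

```python
def spearman_rho(x, y):
    """Spearman rank correlation for two equal-length sequences,
    assuming no tied values within either sequence."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    # Classic closed form: 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A value of +1 indicates a perfectly monotone increasing relationship between the two measures, and -1 a perfectly monotone decreasing one.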
Open Access Article
Exploring the Impact of VR Scaffolding on EFL Teaching and Learning: Anxiety Reduction, Perceptions, and Influencing Factors
by
Hsiang Ling Huang
Multimodal Technol. Interact. 2024, 8(10), 85; https://doi.org/10.3390/mti8100085 - 27 Sep 2024
Abstract
This study examined the use of virtual reality (VR) scaffolding in English as a Foreign Language (EFL) instruction, focusing on its effects on speaking anxiety and learner perceptions and the need for tailored assessment methods. The study involved 34 Taiwanese university medical students and utilized quantitative and qualitative questionnaires. The quantitative results indicated a significant reduction in speaking anxiety and positive perceptions of VR-assisted learning. Qualitative findings revealed that students experienced dual anxieties related to language and technology during VR learning and nervousness during performance evaluations in a VR setting. This study highlights the importance of creating interactive scaffolding that considers individual learner differences and supports personalized learning experiences. It also underscores the necessity of adopting assessment strategies that align with VR environments’ unique, immersive nature in language instruction. Our findings contribute to the growing body of research on VR applications in language learning, offering valuable insights for educators and researchers aiming to leverage this innovative technology in the EFL context.
Full article
Figure 1: Research implementation diagram.
Figure 2: Correlation analysis of VR interventions: (A) pre-test scores, (B) post-test scores, and (C) pre- vs. post-test scores.
Figure 3: Overall impact of VR interventions: cluster-based analysis of pre- and post-test scores.
Figure 4: Heatmap of pre-test and post-test scores in VR English learning clusters.
Figure 5: Spearman’s correlation heatmap of proficiency vs. VR English learning clusters.
Figure 6: McNemar’s test for scaffolding strategies.
Figure 7: Example of one immediate scaffolding support.
Figure 8: McNemar’s test for VR learning style.
Figure 9: McNemar’s test for feedback preferences.
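Several of the figures above report McNemar's test, which compares paired binary outcomes (for example, a yes/no preference before and after the VR intervention) using only the counts of discordant pairs. A sketch of the uncorrected statistic, with illustrative counts:

```python
def mcnemar_statistic(b, c):
    """Uncorrected McNemar chi-square statistic.

    b: pairs that switched from 'no' to 'yes' between measurements;
    c: pairs that switched the other way. Concordant pairs do not
    enter the statistic, which is compared against a chi-square
    distribution with one degree of freedom.
    """
    if b + c == 0:
        raise ValueError("no discordant pairs to test")
    return (b - c) ** 2 / (b + c)
```

For instance, with 10 switches in one direction and 2 in the other, the statistic is (10 - 2)^2 / 12, a value large enough to suggest a systematic shift in preference.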
Open Access Article
Development of a Tool for Evaluating the Influence of Engineering Students’ Perception of Generative AI on University Courses Based on Personality, Perceived Roles in Design Teams, and Course Engagement
by
Stefano Filippi and Barbara Motyl
Multimodal Technol. Interact. 2024, 8(10), 84; https://doi.org/10.3390/mti8100084 - 26 Sep 2024
Abstract
This research investigates the possible influence of students’ perceptions of emerging AI technologies on university courses, focusing on their knowledge and perceived usefulness within engineering design. An evaluation tool implemented in a Microsoft Excel workbook was developed and tested to perform data collection through well-known questionnaires, data analysis, and the generation of results, facilitating attention to class composition and measuring AI awareness and perceived usefulness. The study considers traditional aspects such as roles within design teams and the psychological factors that may influence these roles, alongside contemporary topics such as Large Language Models (LLMs). Questionnaires based on well-established theories were administered during courses on product innovation and representation, assessing both primary and secondary design roles. Primary roles focus on technical skills and knowledge, while secondary roles emphasize problem-solving approaches. The Big Five questionnaire was used to characterize students’ psychological profiles based on the main personality traits. Students’ perceptions of AI involvement and usefulness in engineering design were likewise evaluated using questionnaires derived from the consolidated literature. Data were collected via Google Forms from both in-class and off-line students. The first results of the workbook’s adoption highlight some relationships between personality traits, perceived roles in design teams, and AI knowledge and usefulness. These findings aim to help educators enhance course effectiveness and align courses with current AI advancements. The workbook is available to readers to collect data and perform analyses across countries and educational disciplines, and over time, in order to add a longitudinal perspective to the research.
Full article
Figure 1: The interface of the Microsoft Excel workbook implementing the framework to highlight the relationships among the course, roles in design teams, personality, and AI perception.
Figure 2: The results of the computation of the workbook.
Figure 3: The hidden sheet of the workbook performing the p-value computations.
Figure 4: The references (A, B, and C) for the scatter charts of the three examples of relationships.
Figure 5: Chart A. Example of a continuous vs. continuous relationship (PT2, agreeableness, vs. AI4, fairness and ethics of AI).
Figure 6: Chart B. Example of a binary vs. continuous relationship (PR3, manufacturing engineer, vs. AI5, usefulness and performance expectancy of AI).
Figure 7: Chart C. Example of a discrete vs. binary relationship (COURSE vs. PR3, manufacturing engineer).
Open Access Article
Extended Reality Educational System with Virtual Teacher Interaction for Enhanced Learning
by
Fotis Liarokapis, Vaclav Milata and Filip Skola
Multimodal Technol. Interact. 2024, 8(9), 83; https://doi.org/10.3390/mti8090083 - 23 Sep 2024
Abstract
Advancements in technology can reshape educational paradigms, with Extended Reality (XR) playing a pivotal role. This paper introduces an interactive XR intelligent assistant featuring a virtual teacher that interacts dynamically with PowerPoint presentations using OpenAI’s ChatGPT API. The system incorporates Azure Cognitive Services for multilingual speech-to-text and text-to-speech capabilities, along with custom lip-syncing, eye gaze, head rotation, and gestures. Additionally, panoramic images can be used as a skybox, giving the illusion that the AI assistant is located elsewhere. Findings from three pilots indicate that the proposed technology has considerable potential as an additional tool for enhancing the learning process. However, special care must be taken with privacy and ethical issues.
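A minimal sketch of how the slide-grounded prompt for such a virtual teacher might be assembled before a chat-completion request is sent; the function name, role strings, and message layout are assumptions for illustration, not the authors’ implementation:

```python
# Hypothetical sketch: assemble chat messages that ground the virtual
# teacher in the current PowerPoint slide before calling a chat-completion
# API such as OpenAI's. Names and prompt wording are assumptions.
def build_teacher_messages(slide_text, question, history=None):
    """Build a chat-message list anchored to the current slide's content."""
    messages = [{
        "role": "system",
        "content": ("You are a virtual teacher. Answer questions using the "
                    "content of the current presentation slide:\n" + slide_text),
    }]
    messages.extend(history or [])  # earlier conversation turns, if any
    messages.append({"role": "user", "content": question})
    return messages
```

The returned list would then be passed to a chat-completion endpoint, with the reply routed through a text-to-speech service (Azure Cognitive Services in the paper’s setup) and the avatar’s lip-sync pipeline.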
Full article
Figure 1. Overview of the XR intelligent system.
Figure 2. Main GUI of the Interactive Intelligent XR Assistant showing the different options.
Figure 3. ChatGPT GUI of the Interactive Intelligent XR Assistant.
Figure 4. Two different modes of operation: (a) teacher mode interpreting a PowerPoint presentation; (b) two intelligent agents (students or teachers) exchanging ideas about the teaching material.
Figure 5. The extended reality intelligent teacher delivering a PowerPoint presentation, illustrating different body language animations.
Figure 6. Panoramic XR Intelligent Assistants located in a laboratory environment but shown as being in an outdoor environment.
Figure 7. Immersive Tech Week 2023 Intelligent XR presentation.
Figure 8. Summary of the qualitative evaluation.
Open Access Review
Music as a Tool for Affiliative Bonding: A Second-Person Approach to Musical Engagement
by
Mark Reybrouck
Multimodal Technol. Interact. 2024, 8(9), 82; https://doi.org/10.3390/mti8090082 - 20 Sep 2024
Abstract
Music listening or playing can create a feeling of connection with other listeners or performers, with distinctive levels of immersion and absorption. A major question, in this regard, is whether the music has an ontological status as an end in itself, or whether it is only a tool for the mediation of something else. In this paper, we endorse a mediating perspective, with a focus on music’s potential to increase affiliative bonding between listeners, performers, and even the music itself. Music, then, is hypostasized as “something that touches us” and can be considered a partner in affiliative exchange. It has the potential to move us and to modulate the way we experience the space around us. We therefore elaborate on the tactile dimension of being moved, as well as on the distinction between personal, peripersonal, and extrapersonal space, with a corresponding distinction between first-person, second-person, and third-person perspectives on musical engagement.
Full article
Figure 1. Example of nonverbal communication between pianist Alice Sara Ott and conductor Mikko Franck, performing Beethoven’s Piano Concerto No. 3. Image retrieved from https://www.youtube.com/watch?v=1kxai2rCs7k, accessed on 22 July 2024.
Open Access Article
Effects of Kahoot! on K-12 Students’ Mathematics Achievement and Multi-Screen Addiction
by
Nikolaos Pellas
Multimodal Technol. Interact. 2024, 8(9), 81; https://doi.org/10.3390/mti8090081 - 16 Sep 2024
Abstract
Digital platforms are increasingly prevalent among young students in K-12 education, offering significant opportunities but also raising concerns about their effects on self-assessment and academic performance. This study investigates the effectiveness of Kahoot! compared to traditional instructional methods in enhancing mathematics achievement and its impact on multi-screen addiction (MSA) among Greek students aged 9 to 12 during a STEM summer camp. A quasi-experimental design was employed with a purposefully selected sample of one hundred and ten (n = 110) students, who were non-randomly divided into two groups: (a) an experimental group of fifty-five students (n = 55) who engaged with Kahoot! (using dynamic visual aids and interactive content) and (b) a control group of fifty-five students (n = 55) who received traditional instruction (using digital textbooks and PowerPoint slides with multimedia content) on laptops and tablets. The findings revealed a statistically significant difference in MSA scores, with the experimental group exhibiting lower MSA scores than their counterparts, indicating a positive impact on reducing screen addiction levels. Besides lowering MSA levels, Kahoot! also significantly improved overall mathematical achievement, with a substantial effect size, suggesting a strong positive impact on learning outcomes. The current study highlights the importance of aligning educational tools with intended outcomes and recommends further research to explore the broader impact of gamified learning on student engagement, screen addiction, and learning outcomes.
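The abstract reports a substantial effect size for achievement. A standard way to quantify such a between-group effect is Cohen’s d with a pooled standard deviation, shown here as an illustrative computation rather than the paper’s exact analysis:

```python
import math

# Illustrative computation (not the paper's exact analysis): Cohen's d,
# the standardized difference between two group means using the pooled
# standard deviation. Conventionally, |d| >= 0.8 is a large effect.
def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    # unbiased sample variances of each group
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```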
Full article
(This article belongs to the Special Issue Innovative Theories and Practices for Designing and Evaluating Inclusive Educational Technology and Online Learning)
Open Access Systematic Review
Extended Reality Applications for CNC Machine Training: A Systematic Review
by
José Manuel Ibarra Kwick, Óscar Hernández-Uribe, Leonor Adriana Cárdenas-Robledo and Ramón Alberto Luque-Morales
Multimodal Technol. Interact. 2024, 8(9), 80; https://doi.org/10.3390/mti8090080 - 11 Sep 2024
Abstract
Extended reality (XR) as an immersive technology has gained significant interest in industry for training and maintenance tasks. It offers an interactive, three-dimensional environment that can boost users’ efficiency and safety in various sectors. The present systematic review draws on a Scopus database search for research articles from 2011 to 2024, presenting 19 selected studies related to XR developments and approaches. The purpose is to capture the state of the art, focusing on user training in goals or tasks that involve computer numerical control (CNC) machines. The review revealed approaches that broadly employed XR devices to execute diverse operations on virtual CNC machines, offering enhanced safety and skills acquisition while lessening the use of physical machines, which affects energy consumption and the time an expert worker must invest in teaching an operation task. The articles highlight the advantages of XR training over traditional training on CNC machines, revealing an opportunity to enhance learning aligned with the Industry 4.0 (I4.0) paradigm. Virtual reality (VR) and augmented reality (AR) applications are the most used and are mainly centered on a single-user environment. In addition, a VR approach is built as a proof of concept for learning CNC machine operations, considering the key features identified.
Full article
Figure 1. CNC machine, Hermle model C400.
Figure 2. Flow diagram showing the article selection process, adapted from PRISMA [28].
Figure 3. Virtual shop floor mockup.
Figure 4. Procedure for machining a workpiece: (a) toolbox, table, and accessories; (b) tool holder and end mill mounting; (c) placement of the vise and adjusting the raw material with a rubber mallet; (d) execution of the virtual CNC machining simulation.
Open Access Brief Report
Can Generative AI Contribute to Health Literacy? A Study in the Field of Ophthalmology
by
Carlos Ruiz-Núñez, Javier Gismero Rodríguez, Antonio J. Garcia Ruiz, Saturnino Manuel Gismero Moreno, María Sonia Cañizal Santos and Iván Herrera-Peco
Multimodal Technol. Interact. 2024, 8(9), 79; https://doi.org/10.3390/mti8090079 - 4 Sep 2024
Abstract
ChatGPT, a generative artificial intelligence model, can provide useful and reliable responses in the field of ophthalmology, comparable to those of medical professionals. Twelve frequently asked questions from ophthalmology patients were selected, and responses were generated in the roles of both an expert user and a non-expert user. The responses were evaluated by ophthalmologists using three scales, the Global Quality Score (GQS), Reliability Score (RS), and Usefulness Score (US), and analyzed through descriptive statistics, association tests, and group comparisons. The results indicate that there are no significant differences between the responses of expert and non-expert users, although the responses from the expert user tend to be rated slightly better. ChatGPT’s responses proved to be reliable and useful, suggesting its potential as a complementary tool to enhance health literacy and alleviate the informational burden on healthcare professionals.
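The comparison of expert- and non-expert-prompted ratings could be run in many ways; the exact test is not stated here. For small samples of ordinal scale scores, one common choice is the Mann-Whitney U test, sketched below with invented example ratings (the scores are not from the paper):

```python
# Hedged sketch: comparing two sets of ordinal ratings (e.g. GQS on a
# 1-5 scale) with the Mann-Whitney U test. The scores below are invented
# for illustration; they are not the paper's data.
from scipy.stats import mannwhitneyu

expert_gqs = [4, 5, 4, 4, 5, 3, 4, 5, 4, 4, 5, 4]      # hypothetical ratings
non_expert_gqs = [4, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 4]  # hypothetical ratings

stat, p = mannwhitneyu(expert_gqs, non_expert_gqs, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")  # p > 0.05 would mean no significant difference
```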
Full article
Open Access Article
VRChances: An Immersive Virtual Reality Experience to Support Teenagers in Their Career Decisions
by
Michael Holly, Carina Weichselbraun, Florian Wohlmuth, Florian Glawogger, Maria Seiser, Philipp Einwallner and Johanna Pirker
Multimodal Technol. Interact. 2024, 8(9), 78; https://doi.org/10.3390/mti8090078 - 4 Sep 2024
Abstract
In this paper, we present a tool that offers young people virtual career guidance through an immersive virtual reality (VR) experience. While virtual environments provide an effective way to explore different experiences, VR offers users immersive interactions with simulated 3D environments, allowing the realistic exploration of different job fields without being physically present. The study investigates the extent to which performing occupational tasks in a virtual environment influences the career perceptions of young adults and whether it enhances their understanding of professions. In particular, the study focuses on users’ expectations of an electrician’s profession. In total, 23 teenagers and eight application experts were involved to assess the teenagers’ expectations and the potential of the career guidance tool.
Full article
Figure 1. Main menu scene for the job selection.
Figure 2. Guide interaction via dialogue view and hint view.
Figure 3. Guide emotions based on the player behavior.
Figure 4. Electrician job experience in a garage setting.
Figure 5. Interaction with a power socket, a wire connector, and a multimeter.
Figure 6. Cook job experience in a kitchen setting.
Figure 7. Cooking interactions: cutting vegetables, weighing ingredients, cooking and rolling pancakes, ladling soup.
Figure 8. Most frequently mentioned skills that an electrician needs according to the experts.
Figure 9. Most frequently mentioned words when thinking of an electrician (a) before and (b) after playing the VR application.
Figure 10. Most frequently mentioned skills that an electrician needs according to the experts, compared against the users’ responses (a) before and (b) after playing the VR application.
Figure 11. Sentiment on the job outlook expectations before playing the VR app, with polarity from −1 (negative) to 1 (positive) and subjectivity from 0 (objective information) to 1 (personal opinion).
Figure 12. Sentiment on the job outlook expectations after playing the VR app, with the same polarity and subjectivity scales.
Figure 13. Sentiment on the job opinion after playing the VR app, with the same polarity and subjectivity scales.
Open Access Article
A Multispecies Interaction Design Approach: Introducing the Beings Activities Context Technologies (BACT) Framework
by
Theodora Chamaidi and Modestos Stavrakis
Multimodal Technol. Interact. 2024, 8(9), 77; https://doi.org/10.3390/mti8090077 - 4 Sep 2024
Abstract
For years, design has been focused on human needs, creating human-centred solutions and often neglecting the existence of other species or the impact that design can have on them. As designers shift from that traditional anthropocentric approach to adopting design practices that include other species’ perspectives in the process, there is a growing need for practices capable of providing designers with the right tools to understand non-human needs and design for their inclusion. For this reason, the Beings Activities Context Technologies (BACT) framework is proposed as a theoretical means to support the shift to a more multispecies-oriented approach, expanding Benyon’s anthropocentric People Activities Contexts Technologies (PACT) framework. The methodological implications of the framework have been explored in a case study design project focused on the development of a wearable device designed to support beekeepers during their work. The case study explored the design by taking into consideration the needs of both humans and animals in the context of beekeeping while analysing their interactions in depth. Through this framework, we seek to contribute to the more-than-human turn in interaction design and aid designers in expanding their considerations beyond the person–technology relationship.
Full article
Figure 1. Schematic capturing the flow and progression of the study; each stage is divided into its key components and processes.
Figure 2. Beehive states divided based on human actions before and after an inspection/intervention.
Figure 3. Beehive states before and after an inspection/intervention.
Open Access Article
Observations and Considerations for Implementing Vibration Signals as an Input Technique for Mobile Devices
by
Thomas Hrast, David Ahlström and Martin Hitz
Multimodal Technol. Interact. 2024, 8(9), 76; https://doi.org/10.3390/mti8090076 - 2 Sep 2024
Abstract
This work examines swipe-based interactions on smart devices, like smartphones and smartwatches, that detect vibration signals through defined swipe surfaces. We investigate how these devices, held in users’ hands or worn on their wrists, process vibration signals from swipe interactions and ambient noise using a support vector machine (SVM). The work details the signal processing workflow involving filters, sliding windows, feature vectors, SVM kernels, and ambient noise management, including how we separate the vibration signal of a potential swipe surface from ambient noise. We explore both software and human factors influencing the signals: the former includes the computational techniques mentioned, while the latter encompasses swipe orientation, contact, and movement. Our findings show that the SVM classifies swipe surface signals with an accuracy of 69.61% when both devices are used, 97.59% with only the smartphone, and 99.79% with only the smartwatch. However, classification accuracy drops to about 50% in field user studies simulating real-world conditions such as phone calls, typing, walking, and other undirected movements throughout the day. The decline in performance under these conditions suggests challenges in ambient noise discrimination, which this work discusses along with potential strategies for improvement in future research.
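The sliding-window feature extraction the abstract describes can be sketched as follows; the window length, hop size, and the particular features (RMS, zero-crossing count, peak amplitude) are illustrative assumptions, not the paper’s exact configuration:

```python
import numpy as np

# Illustrative sketch of sliding-window feature extraction from a 1-D
# vibration signal, as a precursor to SVM classification. Window length,
# hop, and feature choices are assumptions for illustration.
def window_features(signal, win=64, hop=32):
    """Split the signal into overlapping windows and compute a feature
    vector [RMS, zero-crossing count, peak amplitude] per window."""
    feats = []
    for start in range(0, len(signal) - win + 1, hop):
        w = signal[start:start + win]
        rms = float(np.sqrt(np.mean(w ** 2)))
        # count sign changes between consecutive samples
        zero_crossings = int(np.sum(np.signbit(w[:-1]) != np.signbit(w[1:])))
        peak = float(np.max(np.abs(w)))
        feats.append([rms, zero_crossings, peak])
    return np.asarray(feats)
```

Each resulting row would be one training or classification example for the SVM (e.g., scikit-learn’s `SVC` with an RBF kernel, standing in here for the libSVM library the paper uses).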
Full article
(This article belongs to the Special Issue Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives)
Figure 1. Picture (a) shows the swipe path on the different surfaces in this work, whereas picture (b) points out the relationship between bumps and spaces on these surfaces.
Figure 2. Flow diagram of capturing the vibration signal and classifying it into a feature vector using the SVM, including the implementation of the Java library libSVM and its corresponding software components; the diagram illustrates the software-determined aspects of the process. Section 5.1.1 explains the stages shown, focusing on the creation of the sliding window w_i depicted in Figure 3. The feature elements for the feature vector Υ are listed in Table 3, Table 4 presents the values for ω_c and the kernels applied for the SVM, and Table 2 displays the svm_parameter settings.
Figure 3. Stages ①–⑤ illustrate the process of performing a swipe gesture over a textured surface. Sliding windows w are extracted from this time series s; two windows from each swipe movement are used to train the SVM on the swipe surface, and one window is used to classify it.
Figure 4. Vibration signals when a participant swipes over the swipe surface while the hand is moving. The first plot again highlights the five stages of a swipe gesture; the swipe contact is the nail and the swipe orientation is horizontal, both fixed during the recorded time samples.
Figure 5. Swiping with different movement behaviors on the surface.
Figure 6. Confusion matrices (multiclass SVMs) and bar charts (one-class SVMs) for the best conditions from Table 5, indicated by the column name; the row names represent the user studies.
Figure 7. Confusion matrices (multiclass SVMs) and bar charts (one-class SVMs) for the worst conditions from Table 5, indicated by the column name; the row names represent the user studies.
Figure 8. Confusion matrices (multiclass SVM) and bar charts (one-class SVM) of the results under the best conditions during the in-field user study. The column name represents the SVM conditions listed in Table 5; the row name corresponds to the different hand poses depicted in Figure 6.
Figure 9. Confusion matrices (multiclass SVM) and bar charts (one-class SVM) of the results under the worst conditions during the in-field user study, with column and row names as in Figure 8.
Figure 10. Ambient noise signals misclassified as swipe gestures; the best and worst conditions are taken from Table 5.
Figure A1. The vibration signal shape while participants swipe over their closed fingers.
Figure A2. Feature vector shapes for swipe surfaces; the best and worst conditions are taken from Table 5.
Figure A3. Feature vector shape for ambient noise signals; the best and worst conditions are taken from Table 5.
Figure A4. Schematic of the best and worst feature vector conditions for high classification accuracy of the different swipe surfaces in Figure 1.
<p>Picture (<b>a</b>) shows the swipe path on the different surfaces in this work, whereas picture (<b>b</b>) points out the relationship between bumps and spaces on these surfaces.</p> Full article ">Figure 2
<p>The flow diagram illustrates the process of capturing the vibration signal and classifying it into a feature vector using SVM. It also demonstrates the implementation of the Java library <span class="html-italic">libSVM</span> and its corresponding software components. This diagram illustrates the software-determined aspects of the process. In <a href="#sec5dot1dot1-mti-08-00076" class="html-sec">Section 5.1.1</a>, we provide a detailed explanation of the stages shown in the flow diagram. This section focuses on illustrating the process of creating the sliding window <math display="inline"><semantics> <msub> <mi>w</mi> <mi>i</mi> </msub> </semantics></math>, as depicted in Figure 3. The feature elements for <math display="inline"><semantics> <mover accent="true"> <mo>Υ</mo> <mo>→</mo> </mover> </semantics></math> are listed in Table 3, while Table 4 presents the values for <math display="inline"><semantics> <msub> <mi>ω</mi> <mi>c</mi> </msub> </semantics></math>. Additionally, <a href="#mti-08-00076-t002" class="html-table">Table 2</a> displays the <tt>svm_parameter</tt> for the SVM, and the kernels applied for the SVM are outlined in Table 4.</p> Full article ">Figure 3
Figure 3: Stages ①–⑤ illustrate the process of performing a swipe gesture over a textured surface. From this time series s, sliding windows w are extracted. Two windows from each swipe movement are used to train the SVM on the swipe surface; one window is used to classify the swipe surface.
Figure 4: Vibration signals while a participant swipes over the swipe surface with the hand moving. The first plot again highlights the five stages of a swipe gesture. The swipe contact is the nail and the swipe orientation is horizontal; these two conditions are fixed during the recorded time samples.
Figure 5: Swiping with different movement behaviors on the surface.
Figure 6: Confusion matrices (multiclass SVMs) and bar charts (one-class SVMs) for the best conditions from Table 5, indicated by the column names; the row names represent the user studies.
Figure 7: Confusion matrices (multiclass SVMs) and bar charts (one-class SVMs) for the worst conditions from Table 5, indicated by the column names; the row names represent the user studies.
Figure 8: Confusion matrices (multiclass SVM) and bar charts (one-class SVM) illustrating the results under the best conditions during the in-field user study. The column names give the best conditions for the SVM, as listed in Table 5; the row names correspond to the hand poses depicted in Figure 6.
Figure 9: Confusion matrices (multiclass SVM) and bar charts (one-class SVM) illustrating the results under the worst conditions during the in-field user study. The column names give the SVM conditions listed in Table 5; the row names correspond to the hand poses depicted in Figure 6.
Figure 10: Ambient noise signals misclassified as swipe gestures. The best and worst conditions are taken from Table 5.
Figure A1: The vibration signal shape while participants swipe over their closed fingers.
Figure A2: Feature vector shapes for swipe surfaces. The best and worst conditions are taken from Table 5.
Figure A3: Feature vector shape for ambient noise signals. The best and worst conditions are taken from Table 5.
Figure A4: Schematic of the best and worst feature vector conditions for high classification accuracy of different swipe surfaces in Figure 1.
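The pipeline described in the captions above — cutting sliding windows from a vibration time series and classifying the swipe surface with an SVM — can be sketched as follows. This is a minimal illustration assuming NumPy and scikit-learn; the window length, step, FFT features, and SVM settings are invented for the example, not the paper's actual configuration.

```python
# Sketch: sliding windows cut from a vibration time series, classified by an SVM.
# All parameters here are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

def sliding_windows(signal, length, step):
    """Split a 1-D time series into overlapping windows."""
    return np.array([signal[i:i + length]
                     for i in range(0, len(signal) - length + 1, step)])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)  # ~1 kHz sampling, 1 s of signal
# Two synthetic "surfaces" with different dominant vibration frequencies.
surface_a = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(t.size)
surface_b = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(t.size)

X, y = [], []
for label, sig in enumerate([surface_a, surface_b]):
    for w in sliding_windows(sig, length=100, step=50):
        X.append(np.abs(np.fft.rfft(w)))  # magnitude spectrum as feature vector
        y.append(label)
X, y = np.array(X), np.array(y)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

A one-class SVM variant (as in the bar-chart figures) would swap `SVC` for `sklearn.svm.OneClassSVM` and train on windows from a single surface only.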
Open Access Review
Impact of Artificial Intelligence on Learning Management Systems: A Bibliometric Review
by
Diego Vergara, Georgios Lampropoulos, Álvaro Antón-Sancho and Pablo Fernández-Arias
Multimodal Technol. Interact. 2024, 8(9), 75; https://doi.org/10.3390/mti8090075 - 25 Aug 2024
Abstract
The field of artificial intelligence is advancing rapidly. This study provides an overview of the integration of artificial intelligence into learning management systems, following a bibliometric review approach. Specifically, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, 256 documents from the Scopus and Web of Science (WoS) databases over the period 2004–2023 were identified and examined. In addition to analyzing the documents within the existing literature, emerging themes and topics were identified, and directions and recommendations for future research are provided. Based on the outcomes, the use of artificial intelligence within learning management systems offers adaptive and personalized learning experiences, promotes active learning, and supports self-regulated learning in face-to-face, hybrid, and online learning environments. Additionally, learning management systems enriched with artificial intelligence can improve students’ learning outcomes, engagement, and motivation. Their ability to increase accessibility and ensure equal access to education by supporting open educational resources was evident. However, the need to develop effective design approaches, evaluation methods, and methodologies to successfully integrate these systems within classrooms emerged as an open issue, as did the need to further explore education stakeholders’ artificial intelligence literacy.
Full article
Figure 1: Document processing flowchart.
Figure 2: Annual scientific production.
Figure 3: Sources with the most documents published.
Figure 4: Top-ten sources’ production over time based on Bradford’s law.
Figure 5: Top affiliations based on the number of documents published.
Figure 6: Countries that published the most over time.
Figure 7: Countries that received the most citations.
Figure 8: Most frequent keywords plus.
Figure 9: Keywords-plus co-occurrence network.
Figure 10: Countries, keywords, and sources relationships.
Figure 11: Thematic map of the topic.
Figure 12: Trend topics based on keywords plus.
Figure 13: Thematic evolution of the topic.
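The PRISMA identification step described in the abstract — merging Scopus and WoS records before screening — amounts to combining the two database exports and removing duplicates. A minimal sketch, with invented field names and records (the authors' actual pipeline is not described at this level):

```python
# Sketch: merging two bibliographic exports and deduplicating by DOI.
# Field names and records are invented for illustration.
import csv
from io import StringIO

scopus = "doi,title,year\n10.1/a,Paper A,2020\n10.1/b,Paper B,2021\n"
wos = "doi,title,year\n10.1/b,Paper B,2021\n10.1/c,Paper C,2023\n"

def load(text):
    return list(csv.DictReader(StringIO(text)))

records = load(scopus) + load(wos)
seen, unique = set(), []
for rec in records:
    key = rec["doi"].lower()
    if key not in seen:  # keep the first occurrence of each DOI
        seen.add(key)
        unique.append(rec)

print(len(records), "retrieved,", len(unique), "after deduplication")
```

In practice, records without a DOI would need a fallback key (e.g., normalized title plus year) before screening proceeds.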
Open Access Article
Multisensory Technologies for Inclusive Exhibition Spaces: Disability Access Meets Artistic and Curatorial Research
by
Sevasti Eva Fotiadi
Multimodal Technol. Interact. 2024, 8(8), 74; https://doi.org/10.3390/mti8080074 - 19 Aug 2024
Abstract
This article discusses applications of technology for sensory-disabled audiences in modern and contemporary art exhibitions. It examines one case study of experimental artistic and curatorial research by The OtherAbilities art collective: a series of prototype tools for sensory translation from audible sound to vibration, developed to be embeddable in the architecture of spaces where art is presented. The case study is approached from a curatorial perspective. Based on bibliographical sources, the article starts with a brief historical reference to disability art activism and a presentation of contemporary accessibility solutions for sensory-disabled audiences in museums. The research for the case study was conducted during testing and feedback sessions on the prototypes, using open-ended oral interviews, open-ended written comments, and ethnographic observation of visitors’ behavior during exhibitions. The testers were d/Deaf, hard of hearing, and hearing. The results focus on how test users of diverse hearing abilities received the sensory translation of audible sound to vibration, and on the reception of the prototypes in the context of art and design exhibitions. The article closes with a reflection on how disability scholarship meets art curatorial theory in this case study.
Full article
Figure 1: The OtherAbilities, project team of What Do I Hear? (WDIH?). Hexagon Floor Tile prototype and Piping prototype. Bradwolff Projects, Amsterdam, The Netherlands, 22–28 November 2021.
Figure 2: The OtherAbilities, project team of What Do I Hear?. Prototypes of Porcelain Membrane, a modular tiling system, installed in the project team’s studio during the first phase of testing. Two test users sense the vibrations on the porcelain tiles with their fingers; a third sits leaning against the prototype with her back while wearing noise-cancelling headphones. Amsterdam, The Netherlands, 22–28 November 2021.
Figure 3: The OtherAbilities, project team of What Do I Hear?. Prototype of the Tactile Wall.
Figure 4: Project team: Adi Hollander, Andreas Tegnander, Eva Fotiadi, Ildikó Horváth, Sungeun Lee and Yonatan Cohen. Haptic Room Study #1: Porcelain Membrane Wall. Visitors to the exhibition by the Embassy of Inclusive Society, Dutch Design Week 2022, watch the film Summercamp by Yael Bartana while standing on a haptic floor and leaning with their backs against the Porcelain Membrane Wall; their bodies feel the various sounds of the film (e.g., music, sounds of construction work) as vibrations. Van Abbemuseum, Eindhoven, The Netherlands, 22–30 October 2022.
Figure 5: Project team: Adi Hollander, Andreas Tegnander, Eva Fotiadi, Ildikó Horváth, Sungeun Lee and Yonatan Cohen. Haptic Room Study #3: Conversation Piece. Two almost identical units, each composed of a haptic floor, a vertical wooden frame supporting the so-called sink-in pillow (a later stage of the Piping prototype), and a microphone. The units are connected so that when a person uses the microphone of one unit, their voice is translated into vibrations in the pillows of the other unit. The installation enables conversations in which each person has a haptic, bodily experience of the nuances of the other person’s voice, such as tone and accent. Exhibition by the Embassy of Inclusive Society, Dutch Design Week 2022. Van Abbemuseum, Eindhoven, The Netherlands, 22–30 October 2022.
Open Access Article
Micro-Credentialing and Digital Badges in Developing RPAS Knowledge, Skills, and Other Attributes
by
John Murray, Keith Joiner and Graham Wild
Multimodal Technol. Interact. 2024, 8(8), 73; https://doi.org/10.3390/mti8080073 - 15 Aug 2024
Abstract
This study explores the potential of micro-credentialing and digital badges in developing and validating the knowledge, skills, and other attributes (KSaOs) required for diverse Remotely Piloted Aircraft Systems (RPAS) operations. The rapid proliferation of drone usage has outpaced the development of necessary KSaOs for safe and efficient drone operations. This research aims to bridge this gap by identifying the unique and specific KSaOs required for different types of drone operations and examining how micro-credentialing and digital badges can provide tangible evidence of these KSaOs. The study also investigates the potential benefits and challenges of implementing digital badges in the RPAS sector and how these challenges can be addressed. Furthermore, it explores how digital badges can contribute to the standardization and recognition of RPAS competencies across different national regulatory bodies. The methodology involves observational studies of publicly available videos of drone operations, with a focus on agriculture spraying operations. The findings highlight the importance of both generic and specific KSaOs in RPAS operations and suggest that digital badges may provide an effective means of evidencing mastery of these competencies. This research contributes to the ongoing discourse on drone regulation and competency development, offering practical insights for regulators, training providers, and drone operators.
Full article
Open Access Article
Evaluating Virtual Reality in Education: An Analysis of VR through the Instructors’ Lens
by
Vaishnavi Rangarajan, Arash Shahbaz Badr and Raffaele De Amicis
Multimodal Technol. Interact. 2024, 8(8), 72; https://doi.org/10.3390/mti8080072 - 12 Aug 2024
Abstract
The rapid development of virtual reality (VR) technology has triggered a significant expansion of VR applications in educational settings. This study seeks to understand the extent to which these applications meet the expectations and pedagogical needs of university instructors. We conducted semi-structured interviews and observations with 16 university-level instructors from Oregon State University to gather insights into their experiences and perspectives regarding the use of VR in educational contexts. Our qualitative analysis reveals detailed trends in instructors’ requirements, their satisfaction and dissatisfaction with current VR tools, and the perceived barriers to broader adoption. The study also explores instructors’ expectations and preferences for designing and implementing VR-driven courses, alongside an evaluation of the usability of selected VR applications. By elucidating the challenges and opportunities associated with VR in education, this study aims to guide the development of more effective VR educational tools and inform future curriculum design, contributing to the enhancement of digital learning environments.
Full article
Figure 1: Methodology.
Figure 2: Boxplots of features rated statistically significantly differently between pre- and post-exposure questionnaires (from top: 3D models, web browser, video player/YouTube, 360 videos, cloud integration, webcam sharing, creation/personalization of avatars, customization of environment).
Figure 3: Stacked column chart illustrating participants’ preferences for four support methods.
Figure 4: Boxplots of satisfaction with ease of completion, amount of task time, and relevance across six tasks: 3D drawing, 3D models, PDF slides, quizzes, videos, and whiteboard (2D drawing).
Figure 5: Overview of the themes and sub-themes under the Requirements category.
Figure 6: Overview of the themes and sub-themes under the Challenges category.
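The pre/post comparison behind the boxplot figure calls for a paired test on questionnaire ratings. A minimal sketch using a two-sided sign test on invented Likert-style data — the study's actual test and data are not given here, so both are assumptions:

```python
# Sketch: a paired sign test on pre- vs post-exposure ratings.
# The ratings below are invented; the authors' actual test may differ.
from math import comb

pre  = [3, 2, 4, 3, 2, 3, 4, 2, 3, 2, 3, 4, 2, 3, 3, 2]  # 16 instructors
post = [4, 3, 5, 4, 3, 4, 5, 3, 4, 3, 4, 5, 3, 4, 4, 3]

diffs = [b - a for a, b in zip(pre, post) if b != a]  # drop zero differences
n_pos = sum(d > 0 for d in diffs)
n = len(diffs)
# Two-sided binomial sign test: P(at least max(n_pos, n - n_pos) successes) * 2
k = max(n_pos, n - n_pos)
p = 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"{n_pos}/{n} positive differences, p = {p:.5f}")
```

A Wilcoxon signed-rank test (e.g., `scipy.stats.wilcoxon`) would use the magnitudes of the paired differences as well, at the cost of stronger assumptions about the rating scale.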
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in
Information, Mathematics, MTI, Symmetry
Youth Engagement in Social Media in the Post COVID-19 Era
Topic Editors: Naseer Abbas Khan, Shahid Kalim Khan, Abdul Qayyum
Deadline: 30 September 2025
Conferences
Special Issues
Special Issue in
MTI
Cooperative Intelligence in Automated Driving-2nd Edition
Guest Editors: Andreas Riener, Myounghoon Jeon (Philart), Ronald Schroeter
Deadline: 20 October 2024
Special Issue in
MTI
Multimodal User Interfaces and Experiences: Challenges, Applications, and Perspectives—2nd Edition
Guest Editors: Wei Liu, Jan Auernhammer, Takumi Ohashi
Deadline: 15 January 2025
Special Issue in
MTI
Innovative Theories and Practices for Designing and Evaluating Inclusive Educational Technology and Online Learning
Guest Editor: Julius Nganji
Deadline: 31 January 2025
Special Issue in
MTI
3D User Interfaces and Virtual Reality—2nd Edition
Guest Editor: Arun K. Kulshreshth
Deadline: 30 April 2025