
Search Results (219)

Search Parameters:
Keywords = visual question answering

20 pages, 6718 KiB  
Article
Using Multimodal Large Language Models (MLLMs) for Automated Detection of Traffic Safety-Critical Events
by Mohammad Abu Tami, Huthaifa I. Ashqar, Mohammed Elhenawy, Sebastien Glaser and Andry Rakotonirainy
Vehicles 2024, 6(3), 1571-1590; https://doi.org/10.3390/vehicles6030074 - 2 Sep 2024
Viewed by 441
Abstract
Traditional approaches to safety event analysis in autonomous systems have relied on complex machine and deep learning models and extensive datasets for high accuracy and reliability. However, the emergence of multimodal large language models (MLLMs) offers a novel approach by integrating textual, visual, and audio modalities. Our framework leverages the logical and visual reasoning power of MLLMs, directing their output through object-level question–answer (QA) prompts to ensure accurate, reliable, and actionable insights for investigating safety-critical event detection and analysis. By incorporating models like Gemini-Pro-Vision 1.5, we aim to automate safety-critical event detection and analysis along with mitigating common issues such as hallucinations in MLLM outputs. The results demonstrate the framework’s potential in different in-context learning (ICL) settings such as zero-shot and few-shot learning methods. Furthermore, we investigate other settings such as self-ensemble learning and a varying number of frames. The results show that a few-shot learning model consistently outperformed other learning models, achieving the highest overall accuracy of about 79%. The comparative analysis with previous studies on visual reasoning revealed that previous models showed moderate performance in driving safety tasks, while our proposed model significantly outperformed them. To the best of our knowledge, our proposed MLLM model stands out as the first of its kind, capable of handling multiple tasks for each safety-critical event. It can identify risky scenarios, classify diverse scenes, determine car directions, categorize agents, and recommend the appropriate actions, setting a new standard in safety-critical event management. This study shows the significance of MLLMs in advancing the analysis of naturalistic driving videos to improve safety-critical event detection and understanding of the interactions in complex environments. Full article
(This article belongs to the Special Issue Vehicle Design Processes, 2nd Edition)
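The few-shot prompting and self-ensemble voting settings described in this abstract can be sketched in Python. This is a minimal illustration, not the authors' implementation: the prompt wording, function names, and voting rule are assumptions, and the real framework sends video frames to an MLLM such as Gemini-Pro-Vision 1.5 rather than text-only scene descriptions.

```python
from collections import Counter

def build_few_shot_prompt(examples, query):
    """Assemble an object-level QA prompt from labeled example scenes.

    `examples` is a list of (scene_description, answer) pairs that serve
    as in-context demonstrations; `query` is the new scene to classify.
    """
    parts = []
    for scene, answer in examples:
        parts.append(f"Scene: {scene}\nIs this a safety-critical event? {answer}")
    parts.append(f"Scene: {query}\nIs this a safety-critical event?")
    return "\n\n".join(parts)

def self_ensemble_vote(candidate_answers, k=3):
    """Top-k majority vote over several sampled MLLM responses, one
    common way to suppress one-off hallucinated answers."""
    top_k = candidate_answers[:k]
    winner, _ = Counter(top_k).most_common(1)[0]
    return winner
```

Sampling the model several times and voting trades extra API calls for robustness, which matches the abstract's finding that ensemble-style settings were worth comparing against plain zero-shot prompting.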
Show Figures

Figure 1: Distribution of QA categories in the DRAMA dataset for traffic safety-critical event detection, including (a) is risk, (b) suggested action, (c) direction of ego car, (d) scene description, and (e) agent type.
Figure 2: Automated multi-stage hazard detection framework for safety-critical events using MLLMs.
Figure 3: Conceptual 2-D diagram of augmented image prompting. The key idea of using different augmentations of the same scene under investigation is to direct the model to different places in the language distribution, which could help the model generate a richer textual representation of the scene when producing a response through local sampling. The different colored areas show an example of how image augmentation can be done.
Figure 4: Example of a textual prompt with a two-frame scene and the corresponding response from Gemini.
Figure 5: Output from Gemini-Pro-Vision 1.5 analysis with a sliding window (n = 2). Gemini predicted (a), (b), and (d) as safety-critical events, while (c) is not.
Figure 6: Zero-shot learning performance across different numbers of frames.
Figure 7: Few-shot learning performance across different numbers of examples.
Figure 8: Comparison of zero-shot and few-shot methods across various metrics (top 3 highlighted).
Figure 9: Self-ensemble learning across different numbers of candidates with top-k voting.
Figure 10: Comparison of zero-shot (1-frame) and self-ensemble methods across various metrics (top bar highlighted).
Figure 11: Image-augmented learning performance with top-k voting.
Figure 12: Comparison of zero-shot (1-frame) and image-augmented methods across various metrics.
Figure 13: Overall performance comparison across different learning methods. The highlighted bars show the highest accuracy from each category.
12 pages, 736 KiB  
Article
Perceived Quality in the Automotive Industry: Do Car Exterior and Interior Color Combinations Have an Impact?
by Giuseppina Tovillo, Mariachiara Rapuano, Alessandro Milite and Gennaro Ruggiero
Appl. Syst. Innov. 2024, 7(5), 79; https://doi.org/10.3390/asi7050079 - 30 Aug 2024
Viewed by 300
Abstract
Since colors play an important role in the automotive field, the present study tried to answer the following questions: is the perceived quality (PQ) of the vehicle interior color different after visually exploring the car body color? If so, how? Here, exploiting immersive virtual reality simulations and eye-tracking technology, participants were asked to visually explore an unbranded car in different exterior/interior color combinations and rate its PQ. Fixation duration (the time the eyes remain fixed on a target) was considered an implicit measure of visual attention allocation, while PQ evaluations were considered explicit measures of individual preferences for car colors. As for the eye-tracking data, the results showed that white and red car exteriors affected the attention to interiors, with fixation duration being longer for gray than black interiors. Moreover, the subjective evaluations of car PQ predicted eye-tracking patterns: as the negative evaluation increased, the fixation duration on car interiors also increased. Overall, these preliminary results suggested the need to further explore the relationship between PQ and attentional/motivational processing as well as the role of subjective aesthetic preferences for color combinations in the automotive field. Full article
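The fixation-duration measure this abstract uses can be approximated with a simple dwell-time calculation. This is an illustrative sketch under assumed conditions (a fixed sampling interval and a rectangular area of interest); real fixation detection uses dispersion- or velocity-based algorithms, and the function name and parameters here are hypothetical.

```python
def fixation_duration_ms(gaze_samples, aoi, sample_interval_ms=4):
    """Total time (ms) the gaze stays inside an area of interest.

    `gaze_samples` is a list of (x, y) screen points recorded at a
    fixed sampling interval (4 ms corresponds to a 250 Hz tracker);
    `aoi` is a rectangle (x_min, y_min, x_max, y_max), e.g. the
    region covering the car interior in the rendered scene.
    """
    x0, y0, x1, y1 = aoi
    inside = sum(1 for x, y in gaze_samples if x0 <= x <= x1 and y0 <= y <= y1)
    return inside * sample_interval_ms
```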
Show Figures

Figure 1: Example of stimuli: (a) 3D picture of the exterior car door colored in gray, from the driver’s perspective; (b) 3D picture of the car interior colored in gray, from the driver’s perspective.
Figure 2: Example of a trial once the experimental session began: a fixation cross appeared for 500 ms; afterward, the exterior 3D picture of a (gray) car door was presented (5 s), followed by the 3D picture of a (white) car interior (5 s); once the stimuli disappeared, the empty parking area was presented and the experimenter administered the 6-item CPQQ (free time).
17 pages, 2893 KiB  
Article
Student Teachers’ Perceptions of a Game-Based Exam in the Genial.ly App
by Elina Gravelsina and Linda Daniela
Computers 2024, 13(8), 207; https://doi.org/10.3390/computers13080207 - 19 Aug 2024
Viewed by 406
Abstract
This research examines student teachers’ perceptions of a game-based exam conducted in the Genial.ly app in the study course ”Legal Aspects of the Pedagogical Process”. This study aims to find out the pros and cons of game-based exams and understand which digital solutions can enable the development and analysis of digital game data. At the beginning of the course, students were introduced to the research and asked to provide feedback throughout the course on what they saw as the most important aspects of each class and insights on how they envisioned the game-based exam could proceed. The game-based exam was built using the digital platform Genial.ly after its update, which provided the possibility to include open-ended questions and collect data for analyses. It was designed with a narrative in which a new teacher comes to a school and is asked for help in different situations. After reading a description of each situation, the students answered questions about how they would resolve them based on Latvia’s regulations. After the exam, students wrote feedback indicating that the game-based exam helped them visualize the situations presented, resulting in lower stress levels compared to a traditional exam. This research was structured based on design-based principles and the data were analyzed from the perspective of how educators can use freely available solutions to develop game-based exams to test students’ knowledge gained during a course. The results show that Genial.ly can be used as an examination tool, as indicated by positive student teachers’ responses. However, this research has limitations as it was conducted with only one test group due to administrative reasons. Future research could address this by including multiple groups within the same course as well as testing game-based exams in other subject courses for comparison. Full article
(This article belongs to the Special Issue Smart Learning Environments)
Show Figures

Figure 1: Overview of the design-based research method with 3 main phases and connected parts.
Figure 2: Feedback and analysis from Genial.ly’s “individual activity” view, which includes answers to both test-based questions and open-ended questions where students provided their opinions.
Figure 3: The game visualizes a conversation in the hallway.
Figure 4: The game progress indicator, displayed in the upper left corner.
Figure 5: Inability to customize the words of the interactive question interface.
26 pages, 9899 KiB  
Article
Spatial Cognition, Modality and Language Emergence: Cognitive Representation of Space in Yucatec Maya Sign Language (Mexico)
by Olivier Le Guen and José Alfredo Tuz Baas
Languages 2024, 9(8), 278; https://doi.org/10.3390/languages9080278 - 16 Aug 2024
Viewed by 486
Abstract
This paper analyzes spatial gestures and cognition in a new, or so-called “emerging”, visual language, the Yucatec Maya Sign Language (YMSL). This sign language was created by deaf and hearing signers in various Yucatec Maya villages on the Yucatán Peninsula (Mexico). Although the sign language is not a signed version of spoken Yucatec Maya, both languages evolve in a similar cultural setting. Studies have shown that cultures around the world seem to rely on one preferred spatial Frame of Reference (FoR), shaping in many ways how people orient themselves and think about the world around them. Prior research indicated that Yucatec Maya speakers rely on the geocentric FoR. However, contrary to other cultures, this preference is mainly observable through the production of gestures and not through speech alone. In the case of space, gestures in spoken Yucatec Maya exhibit linguistic features, having the status of a lexicon. Our research question is the following: if the preferred spatial FoR among the Yucatec Mayas is based on co-expressivity and spatial linguistic content visually transmitted via multimodal interactions, will deaf signers of an emerging language created in the same cultural setting share the same cognitive preference? In order to answer this question, we conducted three experimental tasks in three different villages where YMSL is in use: a non-verbal rotation task, a Director-Matcher task and a localization task. Results indicate that YMSL signers share the same preference for the geocentric FoR. Full article
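The rotation task's scoring logic can be sketched as a small classifier. This is an illustrative reconstruction, not the authors' coding scheme: after the participant turns 180 degrees, preserving the absolute (e.g. east-west) order of the toy cars reverses their left-to-right order from the body's new viewpoint (a geocentric response), while preserving the body-relative left-to-right order keeps the tuple unchanged (an egocentric response).

```python
def classify_rotation_response(original, reproduced):
    """Classify a rotation-task response.

    `original` and `reproduced` are tuples of car labels read from the
    participant's left to right, before and after the 180-degree turn.
    """
    if reproduced == tuple(reversed(original)):
        return "geocentric"   # absolute order preserved after turning
    if reproduced == original:
        return "egocentric"   # body-relative left-right order preserved
    return "other"            # inconsistent or mixed arrangement
```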
Show Figures

Figure 1: Wayfinding according to gestures produced based on the geocentric FoR (in green) and its understanding according to the egocentric FoR (in red).
Figure 2: Toy cars used as stimuli for the rotation task.
Figure 3: Predetermined configuration of the stimuli (A = blue; R = red; B = white).
Figure 4: YMSL instructions: “Hey, the same as he did (you have to do), he…”.
Figure 5: YMSL instructions: “…he is going to place the cars. When he finishes as you saw, you have to reproduce the same thing on the other table (there)”.
Figure 6: Predictions for the rotation task.
Figure 7: Results in percentage by community for the preference towards FoRs.
Figure 8: Results in percentage by community and hearing status for the preference towards FoRs (100% indicates a preference for the geocentric FoR, while 0% means a preference for the egocentric FoR). BB = hearing Bilingual-Bimodal signers; Deaf = deaf signers.
Figure 9: Example of the task conducted between two participants from Chicán (a Bilingual-Bimodal hearing woman on the left and a deaf man, her husband, on the right).
Figure 10: The two identical sets of toy cars used as stimuli in the Director-Matcher task.
Figure 11: Predictions for the Matcher arrangement according to the egocentric and geocentric FoR.
Figure 12: Director-Matcher results by community in percentages.
Figure 13: Director-Matcher results by community and modality in percentages.
Figure 14: Example of the interpreter (on the left) explaining the task in YMSL to a deaf participant (on the right).
Figure 15: Location of all pairs of buildings in Chicán used as stimuli in the localization task and the two groups of participants in sitting 1 and 2.
Figure 16: Predictions regarding spatial gesture production for both FoRs (in red, the egocentric strategy; in green, the geocentric strategy). Wh stands for the visual representation of Will’s house, and C stands for City hall.
Figure 17: Examples of results from sitting 1, where participants faced South, and sitting 2, where participants faced East. Wh stands for the visual representation of Will’s house, and C stands for City hall.
Figure 18: Results of the localization task from Chicán in percentage.
23 pages, 27007 KiB  
Article
An Intelligent Hand-Assisted Diagnosis System Based on Information Fusion
by Haonan Li and Yitong Zhou
Sensors 2024, 24(14), 4745; https://doi.org/10.3390/s24144745 - 22 Jul 2024
Viewed by 528
Abstract
This research proposes an innovative, intelligent hand-assisted diagnostic system aiming to achieve a comprehensive assessment of hand function through information fusion technology. Based on the single-vision algorithm we designed, the system can perceive and analyze the morphology and motion posture of the patient’s hands in real time. This visual perception can provide an objective data foundation and capture the continuous changes in the patient’s hand movement, thereby providing more detailed information for the assessment and providing a scientific basis for subsequent treatment plans. By introducing medical knowledge graph technology, the system integrates and analyzes medical knowledge information and combines it with a voice question-answering system, allowing patients to communicate and obtain information effectively even with limited hand function. Voice question-answering, as a subjective and convenient interaction method, greatly improves the interactivity and communication efficiency between patients and the system. In conclusion, this system holds immense potential as a highly efficient and accurate hand-assisted assessment tool, delivering enhanced diagnostic services and rehabilitation support for patients. Full article
(This article belongs to the Special Issue Artificial Intelligence for Medical Sensing)
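The hand-morphology analysis this abstract describes rests on computing joint angles from estimated 3-D landmarks. The paper does not spell out its single-vision algorithm, so the following is a generic, hypothetical sketch: the angle at a joint is derived from the two bone vectors meeting there, whatever pose estimator supplies the landmark coordinates.

```python
import math

def joint_angle_deg(p_prev, p_joint, p_next):
    """Angle in degrees at a hand joint, from three 3-D landmarks.

    `p_prev`, `p_joint`, `p_next` are (x, y, z) positions of, e.g., a
    fingertip, the joint itself, and the next joint toward the palm.
    """
    v1 = [a - b for a, b in zip(p_prev, p_joint)]   # bone vector 1
    v2 = [a - b for a, b in zip(p_next, p_joint)]   # bone vector 2
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))
```

Tracking this angle across continuous frames (as in the clenching/relaxation experiments) yields the range-of-motion data summarized in the boxplots.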
Show Figures

Figure 1: Experimental system layout.
Figure 2: The architecture of a knowledge base question-answering system based on information retrieval.
Figure 3: Example of the UI interface for the Q&A robot.
Figure 4: Schematic diagram of the bone and joint distribution in the right human hand.
Figure 5: Diagram of the three-dimensional finger coordinate system and joint angles. (a) Example of a three-dimensional coordinate system of the hand. (b) Two-dimensional example of joint angles.
Figure 6: Comparative experiment shooting angles. (a) Main experiment viewpoint example. (b) Viewpoint 1. (c) Viewpoint 2.
Figure 7: Experimental setup for single-frame image measurement. (a) Example of single-frame image measurement. (b) Dorsal hand measurement example.
Figure 8: Experimental setup for continuous-frame hand motion measurement. (a) Hand clenching example. (b) Hand relaxation example.
Figure 9: Boxplot of continuous-frame hand activity range data.
Figure 10: Boxplot of hand joint functional activity scores.
Figure 11: Presentation of results from the experiment on reliability of question answering. (a) Etiology and symptoms. (b) Preventive measures. (c) Dietary recommendations. (d) Complications. (e) Treatment methods.
20 pages, 6085 KiB  
Article
Virtual Reality in Fluid Power Education: Impact on Students’ Perceived Learning Experience and Engagement
by Israa Azzam, Khalil El Breidi, Farid Breidi and Christos Mousas
Educ. Sci. 2024, 14(7), 764; https://doi.org/10.3390/educsci14070764 - 12 Jul 2024
Viewed by 663
Abstract
The significance of practical experience and visualization in the fluid power discipline, highly tied to students’ success, requires integrating immersive pedagogical tools for enhanced course delivery, offering real-life industry simulation. This study investigates the impact of using virtual reality (VR) technology as an instructional tool on the learning and engagement of 48 mechanical engineering technology (MET) students registered in the MET: 230 Fluid Power course at Purdue University. An interactive VR module on hydraulic grippers was developed utilizing the constructivist learning theory for MET: 230 labs, enabling MET students to explore light- and heavy-duty gripper designs and operation through assembly, disassembly, and testing in a virtual construction environment. A survey consisting of a Likert scale and short-answer questions was designed based on the study’s objective to evaluate the students’ engagement and perceived attitude toward the module. Statistical and natural language processing (NLP) analyses were conducted on the students’ responses. The statistical analysis results revealed that 97% of the students expressed increased excitement, over 90% reported higher engagement, and 87% found the VR lab realistic and practical. The NLP analysis highlighted positive themes such as “engagement”, “valuable experience”, “hands-on learning”, and “understanding”, with over 80% of students endorsing these sentiments. These findings will contribute to future efforts aimed at improving fluid power learning through immersive digital reality technologies, while also exploring alternative approaches for individuals encountering challenges with such technologies. Full article
(This article belongs to the Special Issue Extended Reality in Education)
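The survey analysis in this abstract combines Likert-scale statistics with NLP theme extraction. The paper does not specify its NLP method, so the sketch below is deliberately simple and hypothetical: an agreement percentage over a standard 5-point scale, and keyword matching for themes such as "engagement" and "hands-on learning".

```python
from collections import Counter
import re

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def percent_agreeing(responses):
    """Share (%) of respondents choosing 'agree' or 'strongly agree'."""
    agree = sum(1 for r in responses if LIKERT[r.lower()] >= 4)
    return 100.0 * agree / len(responses)

def theme_counts(answers, themes):
    """Count short-answer responses mentioning each candidate theme
    (case-insensitive substring match; a stand-in for real NLP)."""
    counts = Counter()
    for text in answers:
        for theme in themes:
            if re.search(re.escape(theme), text, re.IGNORECASE):
                counts[theme] += 1
    return counts
```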
Show Figures

Figure 1: Diagram showing the three phases of the VR project execution.
Figure 2: Section of the developed VR construction lab with the UI controls.
Figure 3: The virtual avatar serves as a virtual agent for providing guidance and assistance through visual/audio instructions.
Figure 4: Diagram illustrating the adopted experimental design for conducting the study in the MET: 230 course.
Figure 5: VR setup for conducting the research study.
Figure 6: Statistical diagram illustrating the Likert scale data collected from the participants’ responses to the six Likert scale questions.
12 pages, 237 KiB  
Article
People with Disabilities and Their Families in the Roman Catholic Church in Poland: An Analysis of Barriers to Participation in Religious Practices
by Katarzyna Zielińska-Król
Religions 2024, 15(7), 840; https://doi.org/10.3390/rel15070840 - 12 Jul 2024
Viewed by 666
Abstract
The available research suggests that the rate of involvement of people with disabilities and their families in the life of the Church is significantly lower than that of people without disabilities. The engagement of people with disabilities is largely dependent on (a) the level of religiosity; (b) intrinsic motivation; (c) the level of trust in the institutions of the Church; and (d) broadly understood accessibility factors. Barriers experienced by people with disabilities are complex in nature, and make these people dependent on the help of others. Overcoming them requires significant investment, commitment, and change in the Church institution. These issues are relatively rarely addressed in the literature. The few, usually partial, studies tend to concentrate on specific disabilities, discussed with no reference to the family context. However, it is usually the case that the religiosity and church activity of a person with a disability are firmly rooted in their family reality, shaped by the level of religiosity of their parents, and sometimes dependent on their presence and involvement. The aim of this article, which is both theoretical and empirical in nature, is to answer the question of which barriers form an obstacle to participation in religious life for people with disabilities and their families in Poland. This study uses the results of nationwide qualitative research (focus group interviews) conducted among people with physical and intellectual disabilities, the hard-of-hearing and the deaf, the visually impaired, and their carers. Data analysis enabled the identification of the following barriers: infrastructural, personal, and organizational (family-related and extra-familial). These research results can provide guidance in pastoral work with people with disabilities and their families, improving not only the quality of their religious experience, but also the number of the faithful in the Church community. Full article
(This article belongs to the Special Issue Religion, Theology, and Bioethical Discourses on Marriage and Family)
18 pages, 3280 KiB  
Article
Learning to Listen: Changes in Children’s Brain Activity Following a Listening Comprehension Intervention
by Michelle Marji, Cody Schwartz, Tri Nguyen, Anne S. Kupfer, Chris Blais, Maria Adelaida Restrepo and Arthur M. Glenberg
Behav. Sci. 2024, 14(7), 585; https://doi.org/10.3390/bs14070585 - 10 Jul 2024
Viewed by 704
Abstract
“Are you LISTENING?” may be one of the most frequent questions preschoolers hear from their parents and teachers, but can children be taught to listen carefully—and thus better comprehend language—and if so, what changes occur in their brains? Twenty-seven four- and five-year-old children were taught a language simulation strategy to use while listening to stories: first, they practiced moving graphics on an iPad to correspond to the story actions, and then they practiced imagining the movements. Compared to a control condition, children in the intervention answered comprehension questions more accurately when imagining moving the graphics and on a measure of transfer using a new story without any instruction and with only immovable graphics. Importantly, for children in the intervention, the change in comprehension from the first to the sixth day was strongly correlated with changes in EEG mu and alpha desynchronization, suggesting changes in motor and visual processing following the intervention. Thus, the data are consistent with our hypothesis that a language simulation listening comprehension intervention can improve children’s listening comprehension by teaching children to align visual and motor processing with language comprehension. Full article
(This article belongs to the Special Issue Neurocognitive Foundations of Embodied Learning)
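The key quantitative result in this abstract is a correlation between per-child change in comprehension and change in EEG mu/alpha desynchronization. A Pearson correlation coefficient captures that relation; the sketch below implements it from scratch for transparency, with made-up variable names, and is not the authors' analysis pipeline.

```python
from math import sqrt

def pearson_r(delta_comprehension, delta_desync):
    """Pearson r between change in comprehension score (day 1 to day 6)
    and change in EEG desynchronization (mu at central or alpha at
    occipital electrodes), one value per child."""
    n = len(delta_comprehension)
    mx = sum(delta_comprehension) / n
    my = sum(delta_desync) / n
    pairs = zip(delta_comprehension, delta_desync)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sqrt(sum((x - mx) ** 2 for x in delta_comprehension))
    sy = sqrt(sum((y - my) ** 2 for y in delta_desync))
    return cov / (sx * sy)
```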
Show Figures

Figure 1: One page from one chapter in the story “A Celebration to Remember.” Children in the intervention condition use physical or imagined manipulation when encountering a sentence in blue font. Here, the child imagines moving the girl to the mangos and then to the garlic. In the control condition, children are instructed to think carefully about the sentences in blue font. Source: EMBRACE iPad application.
Figure 2: Performance on comprehension questions. Error bars are ± one standard error.
Figure 3: (a) Relation between change in text comprehension and change in mu desynchronization measured at the central electrodes for the intervention condition; (b) the same relation for the control condition; (c) relation between change in text comprehension and change in alpha desynchronization measured at the occipital electrodes for the intervention condition; (d) the same relation for the control condition. In each panel, the shaded region corresponds to the 95% confidence interval for the slope of the regression line.
18 pages, 3179 KiB  
Article
Influence of Wind Turbines as Dominants in the Landscape on the Acceptance of the Development of Renewable Energy Sources in Poland
by Natalia Świdyńska, Mirosława Witkowska-Dąbrowska and Dominika Jakubowska
Energies 2024, 17(13), 3268; https://doi.org/10.3390/en17133268 - 3 Jul 2024
Viewed by 565
Abstract
Where wind turbines are present, they become a dominant feature of the landscape. The landscape is one of the most frequently identified types of impact of these investments on the natural environment and people. Specially prepared methodologies are used to assess the impact of turbines on the landscape. No less important is the subjective perception of residents, because it can affect the social acceptance of these investments. This work answers questions about residents’ opinions on the impact of energy installations on the landscape. The results of the study, using the chi-square test, indicate that there is a relationship between the presence of wind turbines in the municipality and support for their development, as well as the evaluation of both their positive and negative impacts. Residents of a municipality where wind turbines have been present for more than a dozen years considered the introduction of a very strong visual stimulus the most important negative impact on the landscape. Residents of a municipality without wind power considered interference with ecosystems the most important negative impact. Full article
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)
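The chi-square test of independence this abstract relies on can be computed directly from a contingency table. The sketch below is illustrative only: the table values are invented, and in practice one would use a statistics library (e.g. `scipy.stats.chi2_contingency`) to also obtain the p-value.

```python
def chi_square_statistic(table):
    """Chi-square statistic for a contingency table given as rows,
    e.g. municipality (with / without turbines) x support (yes / no)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = municipality with/without turbines,
# columns = supports / does not support wind energy development.
example = [[30, 10],
           [10, 30]]
```

A statistic of 0 means the observed counts exactly match independence; larger values indicate a stronger association between municipality type and support.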
Show Figures

Figure 1: Map illustrating the wind conditions across Poland, assessing suitable locations for wind power plants. Source: adapted from [78,79].
Figure 2: Support for renewable energy sources. Source: own elaboration.
Figure 3: Knowledge of types of renewable energy sources (%). Source: own elaboration.
Figure 4: Support for different types of renewable energy. Source: own elaboration.
Figure 5: Support for the construction of a wind power plant in respondents’ municipalities of residence, on neighboring properties, and on leased land. Source: own elaboration.
Figure 6: Landscape as an important natural asset. Source: own elaboration.
Figure 7: Visual perception of the impact of wind turbines. Source: own elaboration.
Figure 8: Impact of wind turbines on the landscape. Source: own elaboration.
Full article ">
10 pages, 1256 KiB  
Technical Note
aPEAch: Automated Pipeline for End-to-End Analysis of Epigenomic and Transcriptomic Data
by Panagiotis Xiropotamos, Foteini Papageorgiou, Haris Manousaki, Charalampos Sinnis, Charalabos Antonatos, Yiannis Vasilopoulos and Georgios K. Georgakilas
Biology 2024, 13(7), 492; https://doi.org/10.3390/biology13070492 - 2 Jul 2024
Viewed by 1085
Abstract
With the advent of next-generation sequencing (NGS), experimental techniques that capture the biological significance of DNA loci or RNA molecules have emerged as fundamental tools for studying the epigenome and transcriptional regulation on a genome-wide scale. The volume of the generated data and the underlying complexity regarding their analysis highlight the need for robust and easy-to-use computational analytic methods that can streamline the process and provide valuable biological insights. Our solution, aPEAch, is an automated pipeline that facilitates the end-to-end analysis of both DNA- and RNA-sequencing assays, including small RNA sequencing, from assessing the quality of the input sample files to answering meaningful biological questions by exploiting the rich information embedded in biological data. Our method is implemented in Python, based on a modular approach that enables users to choose the path and extent of the analysis and the representations of the results. The pipeline can process samples with single or multiple replicates in batches, allowing the ease of use and reproducibility of the analysis across all samples. aPEAch provides a variety of sample metrics such as quality control reports, fragment size distribution plots, and all intermediate output files, enabling the pipeline to be re-executed with different parameters or algorithms, along with the publication-ready visualization of the results. Furthermore, aPEAch seamlessly incorporates advanced unsupervised learning analyses by automating clustering optimization and visualization, thus providing invaluable insight into the underlying biological mechanisms. Full article
(This article belongs to the Special Issue Machine Learning Applications in Biology)
Figure 1: Overview of aPEAch’s module hierarchy and data flow depending on the NGS protocol (ChIP-seq-like, i.e., ATAC-, DNase-, and MNase-seq, in orange; RNA-seq in green; miRNA-seq in blue).
Figure 2: RNA-seq use case module results. (A) PCA plot of the processed samples (T4 cells in blue, T8 cells in orange); (B) volcano plot, in which dotted lines mark the log2 fold change and adjusted p-value cutoffs; (C) MA plot derived from the differential gene expression analysis; (D) KEGG pathways enriched with deregulated genes (upregulated in CD4+ T cells on the left, upregulated in CD8+ T cells on the right).
Figure 3: Clustering analysis results. (A) Clustering of ATAC-seq peaks based on the corrected p-value (log10 scale) as calculated by Macs2, for double-positive (DP), CD4 (T4), and CD8 (T8) single-positive T cells; (B) clustering of miRNAs based on their expression profile (normalized counts, DESeq2 variance-stabilizing normalization) across heart, lung, and liver samples on day 0. For both (A) and (B), the number of clusters was calculated automatically using the optimal value of the silhouette coefficient score.
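The automated clustering optimization that aPEAch performs, reportedly choosing the number of clusters by the optimal silhouette coefficient, can be sketched with scikit-learn. This is not the pipeline's actual code; the synthetic data, k range, and KMeans settings are illustrative assumptions.

```python
# Minimal sketch of silhouette-based selection of the number of clusters.
# Not aPEAch's actual implementation; data and parameters are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic "signal matrix": three well-separated groups of profiles.
data = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 4)) for c in (0.0, 3.0, 6.0)])

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    scores[k] = silhouette_score(data, labels)

# The k with the highest silhouette score is taken as the cluster count.
best_k = max(scores, key=scores.get)
print(best_k)
```

On this synthetic data, the silhouette criterion recovers the three planted groups; in practice the input would be a peak-signal or expression matrix like those shown in the Figure 3 caption.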
9 pages, 541 KiB  
Article
Prevalence of Near-Vision-Related Symptoms in a University Population
by Jessica Gomes and Sandra Franco
Vision 2024, 8(2), 38; https://doi.org/10.3390/vision8020038 - 19 Jun 2024
Viewed by 737
Abstract
The university population has high visual demands. It is therefore important to assess the prevalence of symptoms in these subjects, as symptoms may affect their academic performance. In this cross-sectional study, a randomized sample of 252 subjects from a university answered the Convergence Insufficiency Symptom Survey (CISS) questionnaire. In addition, questions were asked about blurred vision during and after near tasks, the number of hours per day spent in near vision, and whether or not they wore glasses. Furthermore, 110 subjects underwent an eye exam, including refraction and accommodation assessments. The mean age of the subjects was 28.79 ± 11.36 years, 62.3% reported wearing glasses, and on average 7.20 ± 2.92 hours/day were spent in near vision. The mean CISS score was 18.69 ± 9.96, and according to its criteria, 38% of the subjects were symptomatic. Some symptoms were significantly (p < 0.05) more frequent in subjects wearing glasses. Accommodative dysfunctions were present in 30.9% of the subjects, the most common being insufficiency of accommodation. We emphasise the importance of assessing symptomatology during the clinical examination in this group of subjects, as they spend many hours a day in near vision, and of assessing accommodation, binocular vision, and the ergonomic work environment, which, in addition to an uncorrected need for glasses, may be at the origin of the symptoms. Full article
Figure 1: Mean score of the symptoms with statistically significant differences between subjects who wear and do not wear glasses. Error bars indicate the standard deviation.
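CISS scoring, as used in the study above, sums 15 symptom items each rated on a 0 ("never") to 4 ("always") scale, giving a total of 0 to 60. The abstract only says subjects were classed symptomatic "according to its criteria"; the cutoff of ≥ 21 used below is the value commonly cited for adults and is an assumption here, not taken from the article.

```python
# Sketch of CISS scoring: 15 items, each scored 0-4, summed to 0-60.
# The symptomatic cutoff of >= 21 (adults) is a commonly cited value
# and is an ASSUMPTION here; the abstract does not state the cutoff used.
def ciss_total(item_scores):
    if len(item_scores) != 15:
        raise ValueError("CISS has 15 items")
    if any(not 0 <= s <= 4 for s in item_scores):
        raise ValueError("each item is scored 0-4")
    return sum(item_scores)

def is_symptomatic(total, cutoff=21):
    return total >= cutoff

responses = [2, 1, 3, 0, 2, 2, 1, 3, 2, 1, 0, 2, 1, 2, 1]  # example answers
total = ciss_total(responses)
print(total, is_symptomatic(total))  # 23 True
```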
20 pages, 555 KiB  
Article
ChatGPT: The End of Online Exam Integrity?
by Teo Susnjak and Timothy R. McIntosh
Educ. Sci. 2024, 14(6), 656; https://doi.org/10.3390/educsci14060656 - 17 Jun 2024
Cited by 3 | Viewed by 2079
Abstract
This study addresses the significant challenge posed by the use of Large Language Models (LLMs) such as ChatGPT on the integrity of online examinations, focusing on how these models can undermine academic honesty by demonstrating their latent and advanced reasoning capabilities. An iterative self-reflective strategy was developed for invoking critical thinking and higher-order reasoning in LLMs when responding to complex multimodal exam questions involving both visual and textual data. The proposed strategy was demonstrated and evaluated on real exam questions by subject experts and the performance of ChatGPT (GPT-4) with vision was estimated on an additional dataset of 600 text descriptions of multimodal exam questions. The results indicate that the proposed self-reflective strategy can invoke latent multi-hop reasoning capabilities within LLMs, effectively steering them towards correct answers by integrating critical thinking from each modality into the final response. Meanwhile, ChatGPT demonstrated considerable proficiency in being able to answer multimodal exam questions across 12 subjects. These findings challenge prior assertions about the limitations of LLMs in multimodal reasoning and emphasise the need for robust online exam security measures such as advanced proctoring systems and more sophisticated multimodal exam questions to mitigate potential academic misconduct enabled by AI technologies. Full article
(This article belongs to the Section Higher Education)
Figure 1: An example of a multiple-choice question from a Finance exam in the original format.
Figure 2: An example of a multimodal Computer Science exam question.
Figure 3: Mean GPT-4V estimated proficiency score across each subject in rank order, together with the standard deviations.
Figure 4: Mean GPT-4V estimated proficiency scores across each discipline.
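The iterative self-reflective strategy described in this abstract amounts to a draft-critique-revise loop: answer, ask the model to critique its own answer for missed visual or textual evidence, then revise until the critique passes. The sketch below is schematic; `ask_model` is a stub standing in for a real LLM API call, and the prompts and stopping rule are illustrative assumptions, not the paper's actual prompts.

```python
# Schematic sketch of an iterative self-reflective prompting loop.
# `ask_model` is a STUB simulating an LLM; prompts are illustrative only.
def make_stub_model():
    state = {"revisions": 0}

    def ask_model(prompt):
        if prompt.startswith("Critique"):
            # Pretend the first draft missed some visual evidence.
            return "OK" if state["revisions"] > 0 else "Missed the axis labels."
        if "Critique:" in prompt:
            state["revisions"] += 1
        return f"answer v{state['revisions']}"

    return ask_model

def self_reflective_answer(question, ask_model, max_rounds=3):
    answer = ask_model(f"Question: {question}\nAnswer step by step.")
    for _ in range(max_rounds):
        critique = ask_model(f"Critique this answer for missed visual or textual evidence:\n{answer}")
        if critique == "OK":
            break
        answer = ask_model(f"Question: {question}\nPrevious answer: {answer}\nCritique: {critique}\nRevise accordingly.")
    return answer

print(self_reflective_answer("What trend does the chart show?", make_stub_model()))
```

With a real model, the critique step is what injects the "critical thinking from each modality" the abstract credits for steering the model toward correct answers.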
29 pages, 660 KiB  
Article
The Factors Influencing User Satisfaction in Last-Mile Delivery: The Structural Equation Modeling Approach
by Vijoleta Vrhovac, Dušanka Dakić, Stevan Milisavljević, Đorđe Ćelić, Darko Stefanović and Marina Janković
Mathematics 2024, 12(12), 1857; https://doi.org/10.3390/math12121857 - 14 Jun 2024
Viewed by 1205
Abstract
The primary goal of this research is to identify which factors most significantly influence customer satisfaction in the last-mile delivery (LMD) process. The sample comprised 907 participants (63.4% female) with a mean age of 34.90. All participants completed three questionnaires regarding LMD, customer satisfaction, and trust in courier service. Furthermore, participants answered questions related to significant aspects of the delivery process: speed, price, and courier call before delivery. To determine which factors most significantly influence customer satisfaction in LMD, structural equation modeling (SEM) was applied. The tested SEM model showed a good fit. The results indicated that within the LMD dimension, visual appeal was a significant predictor in a negative direction, and all other LMD dimensions (except parcel tracking) were positive and significant predictors of customer satisfaction. Trust in courier service, delivery price, speed, and courier call before delivery were statistically significant predictors of customer satisfaction in last-mile delivery, all in a positive direction. Full article
(This article belongs to the Special Issue Data-Driven Approaches in Revenue Management and Pricing Analytics)
Figure 1: Structural model. Note: the factor loadings for items within the LMD questionnaire dimensions (LMD 1 to 24) varied from 0.553 to 0.948, all statistically significant at p < 0.001. For items related to trust in courier service (TCS 1 to 6), factor loadings ranged from 0.745 to 0.947, each also significant at p < 0.001. The factor loadings for customer satisfaction items (CS 1 to 10) spanned from 0.699 to 0.856, all significant at p < 0.001.
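The structural part of such a model relates predictors (trust, price, speed, and so on) to customer satisfaction with signed path coefficients. A full SEM requires dedicated software (e.g., lavaan or semopy); as a simplified stand-in, the sketch below fits only the structural regression by ordinary least squares on synthetic data whose planted signs mirror the reported directions (trust and speed positive, visual appeal negative). All variable names, effect sizes, and data are invented.

```python
# Simplified stand-in for the structural part of an SEM: an OLS fit of
# satisfaction on three predictors. Synthetic data; planted coefficient
# signs mirror the directions reported in the abstract.
import numpy as np

rng = np.random.default_rng(42)
n = 907  # sample size reported in the abstract
trust = rng.normal(size=n)
speed = rng.normal(size=n)
visual_appeal = rng.normal(size=n)
satisfaction = 0.5 * trust + 0.3 * speed - 0.2 * visual_appeal + rng.normal(scale=0.5, size=n)

X = np.column_stack([trust, speed, visual_appeal])
beta, *_ = np.linalg.lstsq(X, satisfaction - satisfaction.mean(), rcond=None)
print(beta)  # recovered coefficients: positive, positive, negative
```

Unlike this sketch, an SEM estimates the measurement model (the factor loadings in the Figure 1 note) and the structural paths jointly, with fit indices for the whole system.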
13 pages, 1359 KiB  
Article
Image to Label to Answer: An Efficient Framework for Enhanced Clinical Applications in Medical Visual Question Answering
by Jianfeng Wang, Kah Phooi Seng, Yi Shen, Li-Minn Ang and Difeng Huang
Electronics 2024, 13(12), 2273; https://doi.org/10.3390/electronics13122273 - 10 Jun 2024
Viewed by 592
Abstract
Medical Visual Question Answering (Med-VQA) faces significant limitations in application development due to sparse and challenging data acquisition. Existing approaches focus on multi-modal learning to equip models with medical image inference and natural language understanding, but this worsens data scarcity in Med-VQA, hindering clinical application and advancement. This paper proposes the ITLTA framework for Med-VQA, designed based on field requirements. ITLTA combines multi-label learning of medical images with the language understanding and reasoning capabilities of large language models (LLMs) to achieve zero-shot learning, meeting natural language module needs without end-to-end training. This approach reduces deployment costs and training data requirements, allowing LLMs to function as flexible, plug-and-play modules. To enhance multi-label classification accuracy, the framework uses external medical image data for pretraining, integrated with a joint feature and label attention mechanism. This configuration ensures robust performance and applicability, even with limited data. Additionally, the framework clarifies the decision-making process for visual labels and question prompts, enhancing the interpretability of Med-VQA. Validated on the VQA-Med 2019 dataset, our method demonstrates superior effectiveness compared to existing methods, confirming its outstanding performance for enhanced clinical applications. Full article
Figure 1: The structure of ITLTA.
Figure 2: The structure of the joint feature and label attention mechanism. Image features and label embeddings are concatenated to form a joint image-label feature representation, which is then input into the joint feature attention module. Before input, the joint features are normalized to enhance the stability of the data features. The attention module's structure is identical to the encoder in the transformer [21] and can be stacked in multiple layers. After attention learning, the joint feature captures associations between image features and label embeddings, as well as co-occurrences of feature semantics among the label embeddings, and is used for multi-label classification prediction by a subsequent classifier.
Figure 3: The architecture of multitask-based pretraining and multi-label learning.
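The joint feature and label attention idea can be sketched as scaled dot-product self-attention over a single sequence that concatenates image-feature tokens with label embeddings, so labels can attend to image features and to each other. The dimensions, the layer-norm-like normalization, and the single head without learned projections are simplifying assumptions, not the paper's exact architecture.

```python
# Sketch of joint feature-and-label attention: image tokens and label
# embeddings concatenated, then run through scaled dot-product
# self-attention. Shapes and normalization are illustrative assumptions.
import numpy as np

def self_attention(x):
    # x: (seq_len, d); single head, no learned projections, for brevity
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

rng = np.random.default_rng(0)
image_tokens = rng.normal(size=(49, 64))      # e.g., a 7x7 feature map, flattened
label_embeddings = rng.normal(size=(10, 64))  # one embedding per candidate label

joint = np.concatenate([image_tokens, label_embeddings], axis=0)
# Normalize the joint features before attention (layer-norm-like step).
joint = (joint - joint.mean(axis=-1, keepdims=True)) / joint.std(axis=-1, keepdims=True)
attended = self_attention(joint)
label_states = attended[-10:]  # updated label representations
print(attended.shape, label_states.shape)
```

In the full framework, the updated label representations would feed a multi-label classifier, and the predicted labels would then form the prompt handed to the LLM.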
19 pages, 152311 KiB  
Article
An Assessment of the Impact of Design Elements on the Liturgical Space of Church Buildings: Using Churches in the North of Iraq as a Case Study
by Naram Murqus Issa and Kadhim Fathel Khalil
Buildings 2024, 14(6), 1692; https://doi.org/10.3390/buildings14061692 - 6 Jun 2024
Viewed by 424
Abstract
Liturgical space represents the embodiment of Christian theology in church buildings, encompassing both physical and metaphysical aspects. This space carries holiness and sacredness through a set of architectural elements that create sacred and profane zones within the church architecture. For centuries, design elements have shaped the form of Eastern churches in Iraq. This research aimed to answer the following question: what does a participant see at first glance in the liturgical space of a church building? This paper revisits the impact of design elements on the liturgical space of Eastern churches. The research methodology involved analyzing qualitative data with 3M's Visual Attention Software (VAS), version 2024, to examine eye-tracking data and identify what visitors first noticed when entering these church interiors in Mosul, Iraq. The results highlight the variations and dominance of specific design elements in their impact on Eastern churches. The conclusions emphasize the importance of scientifically based restoration for the perception of design elements in these churches. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
Figure 1: The approach in this research.
Figure 2: The floor plan of a typical Syrian church of Mesopotamia [4].
Figure 3: (a) Nave and aisles, the Church of St. Julian, Jerusalem, 11th century CE [17] (drawings made by the researchers); (b) the forecourt and entrances of the Church of Mar Azaziel, Kefr Zeh, Tur Abdin, ca. 700 CE [18].
Figure 4: (a) Reconstructed plan of a bema with semi-circular synthronon, St. John Stoudios, Constantinople [16]; (b) the triple-apsed sanctuary of the eastern basilica arm of St. Simeon Stylites [20]; (c) colored marble iconostasis, the Church of the Holy Sepulchre, Jerusalem [21]; (d) ambo, St. Lorenzo Fuori le Mura Church, Rome, 12th century [22].
Figure 5: The results legend of the visual attention software (VAS 3M) (prepared by the researchers).
Figure 6: Visual attention software (VAS) [23]: (a) the science of the VAS; (b) the working process of the VAS.
Figure 7: Statistical results of the impact of design elements on the samples in this study (model summary and ANOVA test) (prepared by the researchers).
Figure 8: Statistical results of the impact of design elements on the samples in this study (coefficient values) (prepared by the researchers).
Figure 9: Analysis of the Catholic Church of St. Thomas using the visual analysis software, version 2024 (prepared by the researchers).
Figure 10: Analysis of the Syriac Orthodox Church of St. Thomas using the visual analysis software (prepared by the researchers).
Figure 11: Analysis of the Syriac Orthodox Church of Mart Shmoni using the visual analysis software (prepared by the researchers).
Figure 12: Statistical results of the impact of design elements on the samples in this study (coefficient values) (prepared by the researchers).
Figure A1: Analysis of the Monastery Church of Our Lady of Harvest using the visual analysis software (prepared by the researchers).
Figure A2: Analysis of the Catholic Church of St. Kyriakos, Batnaya, using the visual analysis software (prepared by the researchers).
Figure A3: Analysis of the Catholic Church of St. George, Alqosh, using the visual analysis software (prepared by the researchers).
Figure A4: Analysis of the Catholic Church of St. Peter and Paul, Tilkef, using the visual analysis software (prepared by the researchers).
Figure A5: Analysis of the Church of the Holy Heart of Jesus, Tilkef, using the visual analysis software (prepared by the researchers).
Figure A6: Analysis of the Orthodox Syriac Church of the Virgin Mary, Bartilla, using the visual analysis software (prepared by the researchers).