Maryam Alimardani

    Virtual reality (VR) has been used for training purposes in a wide range of industries, including education, healthcare, and defense. VR allows users to train in a safe and controlled digital environment while being immersed and highly engaged in a realistic task. One of its advantages is that VR can be combined with multiple wearable sensing technologies, allowing researchers to study (neuro)physiological and cognitive processes elicited by dynamic environments and to adapt these simulations based on such processes. However, the potential of VR combined with neurotechnology to facilitate effective and efficient aviation training has not yet been fully explored. For instance, despite the growing interest in including VR in the training programs of military and commercial airline pilots, it is still unclear how effective VR is in the short- and long-term training of pilots. This paper provides an overview of the state-of-the-art research in VR applications for aviation training and identifies challenges and future opportunities. We particularly discuss the potential of neurotechnology for objectively measuring training progress and providing real-time feedback during VR flight tasks. Overall, VR combined with neurotechnology holds promise to optimize individual learning progress in flight training.
    Despite advancements in computer graphics and artificial intelligence, it remains unclear which aspects of intelligent virtual agents (IVAs) make them identifiable as human-like agents. In three experiments and a computational study, we investigated which specific facial features in static IVAs contribute to judging them human-like. In Experiment 1, participants were presented with facial images of state-of-the-art IVAs and humans and asked to rate these stimuli on human-likeness. The results showed that IVAs were judged less human-like than photographic images of humans, which led to the hypothesis that the discrepancy in human-likeness was driven by skin and eye reflectance. A follow-up computational analysis confirmed this hypothesis, showing that the faces of IVAs had smoother skin and fewer corneal reflections than human faces. In Experiment 2, we validated these findings by systematically manipulating the appearance of skin and eyes in a set of human photographs, including both female and male faces as well as four different races. Participants indicated as quickly as possible whether the image depicted a real human face or not. The results showed that smoothing the skin and removing corneal reflections affected the perception of human-likeness when quick perceptual decisions had to be made. Finally, in Experiment 3, we combined the images of IVA faces and those of humans, unaltered and altered, and asked participants to rate them on human-likeness. The results confirmed the causal role of both features in attributions of human-likeness. Skin and eye reflectance worked in tandem in driving judgements of the extent to which a face was perceived as human-like, in both IVAs and humans. These findings are relevant to computer graphics artists and psychology researchers alike, drawing attention to the facial characteristics that increase realism in IVAs.
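    As a rough illustration of how such a computational analysis could proceed, the sketch below (not the authors' actual pipeline) computes two image-based proxies named in the abstract: skin smoothness and the number of corneal reflections. The file name and crop coordinates are hypothetical placeholders.

```python
import cv2

img = cv2.imread("face.png")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Proxy 1: skin smoothness as the inverse of local high-frequency detail;
# lower Laplacian variance means smoother, more "CG-like" skin.
skin = gray[100:200, 100:200]                     # hypothetical cheek region
smoothness = 1.0 / (cv2.Laplacian(skin, cv2.CV_64F).var() + 1e-9)

# Proxy 2: corneal reflections counted as small bright specular blobs
# in a crop around the eye.
eye = gray[60:90, 120:160]                        # hypothetical eye region
_, spec = cv2.threshold(eye, 240, 255, cv2.THRESH_BINARY)
n_labels, _ = cv2.connectedComponents(spec)
n_reflections = n_labels - 1                      # discard the background label

print(f"smoothness={smoothness:.4f}, corneal reflections={n_reflections}")
```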
    The main objective of this research was to gain insight into the attitudes that groups of elderly people and young students have towards social robots. A total of 52 participants (24 elderly vs. 28 students) took part in a short-term interaction with a humanoid social robot. In small groups of two to four people, they engaged in a conversation with a Nao robot. Their attitude was measured before and after the interaction using the Unified Theory of Acceptance and Use of Technology (UTAUT) questionnaire. Furthermore, the role of the robot as a facilitator of conversation was assessed by observing the interaction between individuals after the robot was removed. This research explored the use of social robots as a means to improve socialization between individuals, rather than to replace human contact. Results from the questionnaire and an additional observational analysis showed a positive attitude towards the robot and the interaction in both age groups. After the interaction, the elderly perceived the robot as significantly more useful than the students did, which may be attributed to differences in the needs and expectations they had of it. Furthermore, anxiety towards the robot decreased after the interaction in both groups. Future research can investigate the effect of long-term interaction with a similar robot. In the long term, social robots could possibly be deployed to decrease loneliness, a common issue among the elderly.
    Virtual reality (VR) offers a training environment that promotes increased learning and performance. However, to what extent VR flight simulations offer increased performance compared to less-immersive simulators is not clear, and neither are their underlying cognitive aspects. In a within-subject experiment, we compared flight performance and subjective measures of workload, presence, and engagement in two flight training environments (Desktop and VR) on two flight tasks of equal difficulty (changing speed and performing turns). No differences were observed in flight performance between these conditions. However, participants reported higher presence and engagement in the VR training environment. Additionally, we found a correlation between subjective workload and flight performance in the Desktop condition, but not in VR. We conclude that VR is promising for basic flight training tasks and encourage future work to explore this further by investigating neurophysiological indices of workload and engagement and establishing relationships between presence, workload, and flight performance.
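    For readers who want to reproduce this kind of within-subject analysis, the sketch below shows a paired comparison of performance across the two environments and a workload-performance correlation per condition; all numbers are fabricated placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel, pearsonr

rng = np.random.default_rng(0)
perf_desktop = rng.normal(70, 10, size=20)   # hypothetical performance scores
perf_vr      = rng.normal(70, 10, size=20)
tlx_desktop  = rng.normal(50, 15, size=20)   # hypothetical workload ratings

# Same participants in both conditions -> paired (dependent) t-test.
t, p = ttest_rel(perf_desktop, perf_vr)
print(f"Desktop vs VR performance: t={t:.2f}, p={p:.3f}")

# Workload-performance association within the Desktop condition.
r, p_r = pearsonr(tlx_desktop, perf_desktop)
print(f"Desktop workload vs performance: r={r:.2f}, p={p_r:.3f}")
```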
    It is crucial that natural-looking Embodied Conversational Agents (ECAs) display various verbal and non-verbal behaviors, including facial expressions. The generation of credible facial expressions has been approached by means of different methods, yet remains difficult because of the limited availability of naturalistic data. To infuse more variability into the facial expressions of ECAs, we proposed a model that treats the temporal dynamics of facial behavior as a countable-state Markov process. Once trained, the model was able to output new sequences of facial expressions from an existing dataset containing facial videos with Action Unit (AU) encodings. The approach was validated by having computer software and humans identify facial emotion from video. Half of the videos employed newly generated sequences of facial expressions produced by the model, while the other half contained sequences selected directly from the original dataset. We found no statistically significant evidence that the newly generated facial expression sequences could be differentiated from the original ones, demonstrating that the model was able to generate new facial expression data indistinguishable from the original data. Our proposed approach could be used to expand the amount of labelled facial expression data in order to create new training sets for machine learning methods.
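    A minimal sketch of the core idea, under the assumption that states are AU configurations (tuples of active AUs) and transitions are estimated by counting; the training sequence is a fabricated placeholder, and the paper's exact model parameters are not reproduced here.

```python
import random
from collections import defaultdict

# Toy observed sequence of AU configurations (one tuple per frame).
observed = [(), (1, 2), (1, 2), (6, 12), (6, 12), (6, 12), (), (4,), ()]

# Estimate first-order transition probabilities by counting.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(observed, observed[1:]):
    counts[prev][nxt] += 1

def sample_sequence(start, length):
    state, out = start, [start]
    for _ in range(length - 1):
        nxts = counts[state]
        if not nxts:                 # dead end: restart from the start state
            state = start
        else:
            states, freqs = zip(*nxts.items())
            state = random.choices(states, weights=freqs)[0]
        out.append(state)
    return out

print(sample_sequence((), 8))        # a newly generated AU sequence
```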
    With forty-six Action Units (AUs) forming the building blocks of the Facial Action Coding System (FACS), millions of facial configurations can be formed. Most research has focused on a subset of combinations to determine the link between facial configurations and emotions. Despite the value of this research for psychological and computational purposes, it is not clear which combinations of AUs occur most often and thus form the most commonly expressed facial configurations. We used three diverse corpora with human-coded facial action units for a computational analysis. The analysis demonstrated that the largest portion of facial behavior consists of the absence of AU activations, yielding only one specific facial configuration: that of the neutral face. These results are important for cognitive scientists, computer graphics designers, and virtual human developers alike. They suggest that only a relatively small number of AU combinations are initially needed for the creation of natural facial behavior in Embodied Conversational Agents (ECAs).
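    A minimal sketch of this frequency analysis, with a few fabricated frames standing in for the three corpora: each frame is the set of active AUs, and the counts reveal the most common configurations.

```python
from collections import Counter

frames = [
    frozenset(),            # neutral face: no AUs active
    frozenset({6, 12}),     # e.g., a smile-like configuration
    frozenset(),
    frozenset({1, 2}),
    frozenset(),
]

config_counts = Counter(frames)
for config, n in config_counts.most_common():
    label = sorted(config) if config else "neutral (no AUs)"
    print(f"{label}: {n}")
```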
    Sociable robots are slowly entering domains such as education and healthcare. As we expose our youth and elderly to these new intelligent technologies, it is important to understand their perception of and attitudes towards robots. This study investigates the differences between elderly and young adults in ascribing mind perception to a sociable humanoid robot. Both subjective and behavioral measurements were employed to investigate these differences. Several trends were found: the elderly attributed higher mind perception scores to the robot, whereas young adults seemed to have a more positive attitude towards it. The elderly seemed to apply human social models of interaction, whereas young adults perceived a master-slave relationship between humans and the robot. Furthermore, a significant positive correlation was found between mind perception and attitude towards the robot in both groups.
    Recent work suggests that machine and deep learning models are prone to EEG artifacts and show staggering performance drops when used to classify EEG signals rich in noise. This particularly affects the real-time performance of EEG monitoring systems such as brain-computer interfaces, rendering their applications in uncontrolled environments useless. These limitations have motivated efforts to develop fortification layers that leverage manifold learning in lower dimensions to improve the performance and robustness of any deep learning model by separating off-manifold data points from the dense probability mass. The present study aimed to show that a fortification layer can learn the latent structure of an EEG dataset and that this can help increase the robustness of the classifier when tested on the same dataset contaminated with varying noise. In order to evaluate the performance of the proposed model, different artifacts were synthesized using low-pass filters to mimic biological noise, together with additive Gaussian white noise. Results showed that the EEG signals used in this study followed the manifold assumption, and that the fortification layers learnt the lower-dimensional discriminative structure from the raw denoised EEG signals. However, this did not significantly increase the robustness of the model to the noise.
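    The artifact-synthesis step can be illustrated as follows; the sampling rate, cutoff, and noise amplitudes below are hypothetical placeholders, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256                                   # hypothetical sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)           # toy 10 Hz "alpha" signal

rng = np.random.default_rng(0)

# (i) Slow artifact: white noise pushed through a 4 Hz low-pass Butterworth
# filter, mimicking drifting biological noise.
b, a = butter(4, 4 / (fs / 2), btype="low")
drift = filtfilt(b, a, rng.standard_normal(t.size)) * 2.0

# (ii) Broadband sensor noise: additive Gaussian white noise.
white = rng.standard_normal(t.size) * 0.5

eeg_noisy = eeg + drift + white
print(f"SNR proxy: {eeg.var() / (eeg_noisy - eeg).var():.2f}")
```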
    Alzheimer’s disease (AD) is a brain disorder that is mainly characterized by a progressive degeneration of neurons in the brain and a decline of cognitive abilities. This study compared an FFT-based spectral analysis against a functional connectivity analysis for the diagnosis of AD. Both quantitative methods were applied to an EEG dataset including 20 diagnosed AD patients and 20 age-matched healthy controls (HC). The results showed an advantage of the functional connectivity analysis over the spectral analysis; while the latter could not find any significant differences between the AD and HC groups, the functional connectivity analysis showed statistically higher synchronization levels in the AD group in the lower frequency bands (delta and theta), suggesting a ‘phase-locked’ state in AD-affected brains. Further comparison of functional connectivity between homotopic regions confirmed that the traits of AD were localized to the centro-parietal and centro-temporal regions.
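    A minimal sketch of the two competing analyses, assuming Welch band power for the spectral route and the phase-locking value (PLV) as the synchronization measure (the abstract does not name the exact connectivity metric); the two signals are fabricated placeholders.

```python
import numpy as np
from scipy.signal import welch, hilbert, butter, filtfilt

fs = 128
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 30)                    # hypothetical channel 1
y = 0.6 * x + 0.4 * rng.standard_normal(fs * 30)    # partially coupled channel 2

# Spectral route: mean Welch power in the theta band (4-8 Hz).
f, pxx = welch(x, fs=fs, nperseg=fs * 2)
theta_power = pxx[(f >= 4) & (f <= 8)].mean()

# Connectivity route: PLV between theta-band phases of the two channels.
b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
px = np.angle(hilbert(filtfilt(b, a, x)))
py = np.angle(hilbert(filtfilt(b, a, y)))
plv = np.abs(np.exp(1j * (px - py)).mean())

print(f"theta power={theta_power:.4f}, theta PLV={plv:.3f}")
```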
    With the advancements in social robotics and virtual avatars, it becomes increasingly important that these agents adapt their behavior to the mood, feelings, and personality of their users. One such aspect of the user is empathy. Whereas many studies measure empathy through offline measures collected after empathic stimulation (e.g., post-hoc questionnaires), the current study aimed to measure empathy online, using brain activity collected during the experience. Participants watched an affective 360-degree video of a child experiencing domestic violence in a virtual reality headset while their EEG signals were recorded. Results showed a significant attenuation of alpha, theta, and delta asymmetry in the frontal and central areas of the brain. Moreover, a significant relationship between participants' empathy scores and their frontal alpha asymmetry at baseline was found. These results demonstrate specific brain activity alterations when participants are exposed to an affective video in virtual reality.
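    A frontal alpha asymmetry score of the kind reported here is typically computed as the difference in log alpha power between homologous right and left frontal sites (e.g., F4 vs. F3); below is a minimal sketch on fabricated signals, not the study's pipeline.

```python
import numpy as np
from scipy.signal import welch

fs = 256
rng = np.random.default_rng(0)
f3 = rng.standard_normal(fs * 60)          # hypothetical left frontal channel
f4 = rng.standard_normal(fs * 60)          # hypothetical right frontal channel

def alpha_power(sig):
    # Mean Welch power in the alpha band (8-13 Hz).
    f, pxx = welch(sig, fs=fs, nperseg=fs * 2)
    return pxx[(f >= 8) & (f <= 13)].mean()

faa = np.log(alpha_power(f4)) - np.log(alpha_power(f3))
print(f"frontal alpha asymmetry = {faa:.3f}")
```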
    Over the past decade, virtual reality (VR) has shown major advancements in research and development. One of the most important aspects of the VR user experience is the sense of presence, the feeling of being present in the virtual environment. So far, sense of presence has most commonly been measured through subjective post-experience questionnaires. In the current study, we aimed to examine whether objective measures of brain activity can provide additional insights. Participants watched an affective 360-degree VR video while their brain activity was monitored using electroencephalography (EEG). Moreover, participants reported their subjective sense of presence after the VR experience. Compared to a baseline and a post measure, EEG alpha power decreased during the VR experience. Furthermore, this change in brain activity was related to the participants’ subjective sense of presence. These findings highlight the high potential of brain imaging techniques in assessing a user’s experience in virtual reality.
    Multimodal data enables powerful methodological approaches to investigate social group interaction. This paper specifically focuses on dialogic moments, i.e., episodes of human communication with high mutual understanding. We present preliminary results of a pilot study in which we applied multimodal analysis of dialogic moments in the context of storytelling to obtain data-driven characterizations. We collected multimodal sensor data, including skin conductance, face-to-face proximity, and vocal non-verbal features of the participants, complemented by their subjective experiences collected via self-report questionnaires. Our preliminary findings provide novel perspectives on different profiles of dialogic moments, characterized by objective and subjective features.
    Motor Imagery Brain-Computer Interfaces (MI-BCIs) are AI-driven systems that capture brain activity patterns associated with the mental imagination of movement and convert them into commands for external devices. Traditionally, MI-BCIs operate on Machine Learning (ML) algorithms, which require extensive signal processing and feature engineering to extract changes in sensorimotor rhythms (SMR). In recent years, Deep Learning (DL) models have gained popularity for EEG classification, as they provide a solution for the automatic extraction of spatio-temporal features in the signals. However, past BCI studies that employed DL models only tested them with small groups of participants, without investigating the effectiveness of this approach for different user groups, such as inefficient users. BCI inefficiency is a known and unsolved problem in the BCI literature, generally defined as the inability of the user to produce the desired SMR patterns for the BCI classifier. In this study, we evaluated the effectiveness of the DL approach for different user groups, including inefficient users...
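    As a rough illustration of the DL route, the sketch below (assuming PyTorch; not the study's actual architecture) builds a small CNN with a temporal convolution followed by a spatial convolution, a common pattern for automatic SMR feature extraction; all dimensions are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class SmallEEGNet(nn.Module):
    """Toy temporal+spatial CNN for binary motor imagery classification."""
    def __init__(self, n_channels=32, n_samples=512, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),  # temporal
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),          # spatial
            nn.BatchNorm2d(16), nn.ELU(), nn.AvgPool2d((1, 8)),
        )
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_feat, n_classes)

    def forward(self, x):            # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = SmallEEGNet()
out = model(torch.randn(4, 1, 32, 512))   # four fabricated trials
print(out.shape)                          # torch.Size([4, 2])
```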
    Motor Imagery BCI systems have a high rate of users who are not capable of modulating their brain activity accurately enough to communicate with the system. Several studies have identified psychological, cognitive, and neurophysiological measures that might explain this MI-BCI inefficiency. Traditional research has focused on mu suppression in the sensorimotor area in order to classify imagery, but this does not reflect the true dynamics that underlie motor imagery. Functional connectivity reflects the interaction between brain regions during the MI task and the resting-state network, and is a promising tool for improving MI-BCI classification. In this study, 54 novice MI-BCI users were split into two groups based on their accuracy, and their functional connectivity was compared at three network scales (global, large, and local) during the resting state, the left- vs. right-hand motor imagery task, and the transition between the two phases. Our comparison of High and Low BCI performers showed...
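    A minimal sketch of summarizing a connectivity matrix at the three scales mentioned above, using a toy symmetric matrix and hypothetical channel groupings (the study's actual electrode sets and metric are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch = 8
W = rng.random((n_ch, n_ch))
W = (W + W.T) / 2                     # symmetric toy connectivity (e.g., PLV) matrix
np.fill_diagonal(W, 0)

left, right = [0, 1, 2, 3], [4, 5, 6, 7]   # hypothetical hemisphere groupings
motor = [2, 3, 4]                           # hypothetical local region

iu = np.triu_indices(n_ch, k=1)
global_conn = W[iu].mean()                          # global scale: all pairs
large_conn = W[np.ix_(left, right)].mean()          # large scale: inter-hemispheric
sub = W[np.ix_(motor, motor)]
local_conn = sub[np.triu_indices(len(motor), k=1)].mean()  # local scale

print(f"global={global_conn:.3f}, large={large_conn:.3f}, local={local_conn:.3f}")
```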
    Motor Imagery (MI) is a mental process by which an individual rehearses body movements without actually performing physical actions. Motor Imagery Brain-Computer Interfaces (MI-BCIs) are AI-driven systems that capture brain activity patterns associated with this mental process and convert them into commands for external devices. Traditionally, MI-BCIs operate on Machine Learning (ML) algorithms, which require extensive signal processing and feature engineering to extract changes in sensorimotor rhythms (SMR). However, in recent years, Deep Learning (DL) models have gained popularity for EEG classification, as they provide a solution for the automatic extraction of spatio-temporal features in the signals. In this study, EEG signals from 54 subjects who performed an MI task of left- or right-hand grasp were employed to compare the performance of two MI-BCI classifiers: an ML approach vs. a DL approach. In the ML approach, Common Spatial Patterns (CSP) was used for feature extraction and then Linear Discriminant Analysis (LDA) was applied for classification...
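    A minimal sketch of this ML pipeline using the CSP implementation in MNE-Python chained with scikit-learn's LDA (a common pairing; the abstract's truncated text only begins to name the classifier). The epochs and labels below are fabricated placeholders.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32, 512))   # trials x channels x samples
y = rng.integers(0, 2, size=100)          # left- vs right-hand labels

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),   # spatial filters + log-variance
    ("lda", LinearDiscriminantAnalysis()),
])
scores = cross_val_score(clf, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```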
    BCI inefficiency is one of the major challenges of motor imagery brain-computer interfaces (MI-BCI). Past research suggests that certain cognitive skills and personality traits correlate with real-time MI-BCI performance. Other studies have examined sensorimotor rhythm changes (also known as μ suppression) as a valuable indicator of successful execution of the MI task. This research aims to combine these insights by investigating whether cognitive factors and personality traits can predict a user’s ability to modulate μ rhythms during an MI-BCI task. Data from 55 subjects who completed an MI task were employed, and a stepwise linear regression model was implemented to select the most relevant features for μ suppression prediction. The most accurate model was based on these factors: Spatial Ability, Visuospatial Memory, Autonomy, and Vividness of Visual Imagery. Further correlation analyses showed that a novice user’s μ suppression during an MI-BCI task can be predicted from these cognitive and personality factors...
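    A minimal sketch of a forward stepwise selection of the kind described above: greedily add the predictor that most improves cross-validated R² for predicting μ suppression. The feature names and data are fabricated placeholders, not the study's variables or procedure details.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
names = ["spatial_ability", "visuospatial_memory", "autonomy",
         "visual_imagery_vividness", "age"]        # hypothetical predictors
X = rng.standard_normal((55, len(names)))
mu = 0.5 * X[:, 0] + 0.3 * X[:, 3] + 0.5 * rng.standard_normal(55)

selected, best = [], -np.inf
while True:
    # Score every not-yet-selected predictor added to the current set.
    gains = {}
    for j in range(len(names)):
        if j not in selected:
            cols = selected + [j]
            gains[j] = cross_val_score(LinearRegression(), X[:, cols], mu,
                                       cv=5, scoring="r2").mean()
    if not gains:
        break
    j_best = max(gains, key=gains.get)
    if gains[j_best] <= best:                      # stop when no improvement
        break
    best, selected = gains[j_best], selected + [j_best]

print("selected:", [names[j] for j in selected], f"(CV R^2={best:.2f})")
```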
    Brain-computer interfaces (BCIs) are communication bridges between the human brain and the external world, enabling humans to interact with their environment without muscle intervention. Their functionality, therefore, depends on both the BCI system and the cognitive capacities of the user. Motor-imagery BCIs (MI-BCI) rely on the user’s mental imagination of body movements. However, not all users have the ability to sufficiently modulate their brain activity for control of an MI-BCI; a problem known as BCI illiteracy or inefficiency. The underlying mechanism of this phenomenon and the cause of such differences among users are not yet fully understood. In this study, we investigated the impact of several cognitive and psychological measures on MI-BCI performance. Fifty-five novice BCI users participated in a left- versus right-hand motor imagery task. In addition to their BCI classification error rate and demographics, psychological measures including personality factors and affinity for technology...
    Brain-computer interfaces (BCIs) have been developed and implemented in many areas as a new communication channel between the human brain and external devices. Despite their rapid growth and broad popularity, inaccurate performance and the cost of user training are still the main issues that prevent their application outside research and clinical environments. We previously introduced a BCI system for the control of a very humanlike android that could raise a sense of embodiment and agency in operators merely by their imagining a movement (motor imagery) and watching the robot perform it. Using the same setup, we further discovered that a positive bias of subjects' performance feedback both increased their sensation of embodiment and improved their motor imagery skills in a short period. In this work, we studied the shared mechanism between the experience of embodiment and motor imagery. We compared the trend of motor imagery learning when two groups of subjects BCI-operated different...
    An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and its movement. The uncanny feeling increases when the android's appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause human observers to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android with the observation of the human on which the android was modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model...