EEG Signal Processing Techniques and Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (31 January 2023) | Viewed by 66981

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Dr. Yifan Zhao
Guest Editor
Faculty of Engineering and Applied Sciences, Cranfield University, Cranfield MK43 0AL, UK
Interests: machine learning; artificial intelligence; human factors; pattern recognition; digital twins; instrumentation, sensors and measurement science; systems engineering; through-life engineering services

Dr. Fei He
Guest Editor
Centre for Computational Science and Mathematical Modelling, Coventry University, Coventry CV1 2JH, UK
Interests: nonlinear signal processing; system identification; statistical machine learning; frequency-domain analysis; causality analysis; computational neuroscience

Prof. Dr. Yuzhu Guo
Guest Editor
School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
Interests: brain dynamics and brain activities; brain–computer interfaces; AI for clinical disease diagnosis; neurorehabilitation; hybrid-augmented intelligence

Special Issue Information

Dear Colleagues,

Electroencephalography (EEG) is a well-established, non-invasive tool for recording brain electrophysiological activity. It is economical, portable, easy to administer, and widely available in most hospitals. Compared with neuroimaging techniques that mainly characterise anatomical structure (e.g., MRI, CT, and fMRI), EEG offers ultra-high temporal resolution, which is critical to understanding brain function. Empirical interpretation of EEG is largely based on recognising abnormal frequencies in specific biological states, the spatiotemporal and morphological characteristics of paroxysmal or persistent discharges, reactivity to external stimuli, and activation procedures such as intermittent photic stimulation. Although useful in many instances, these practical approaches to interpreting EEG can leave important dynamic and nonlinear interactions between the anatomical constituents of brain networks undetected within the recordings, as such interactions lie far beyond the observational capabilities of even a specially trained physician.

This Special Issue will provide a forum for original, high-quality research on EEG signal pre-processing, modelling, and analysis, and on applications in the time, space, frequency, or time–frequency domains. Applications of artificial intelligence and machine learning to this topic are particularly welcome. Covered applications include, but are not limited to:

  • Clinical studies.
  • Human factors.
  • Brain–machine interfaces.
  • Psychology and neuroscience.
  • Social interactions.

Dr. Yifan Zhao
Dr. Fei He
Prof. Dr. Yuzhu Guo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • electroencephalography
  • EEG signal processing
  • artificial intelligence in EEG data analysis
  • brain connectivity
  • time-frequency analysis
  • deep learning in EEG data analysis
  • machine learning techniques in EEG data analysis
  • computer-aided diagnosis systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (17 papers)


Editorial


5 pages, 186 KiB  
Editorial
EEG Signal Processing Techniques and Applications
by Yifan Zhao, Fei He and Yuzhu Guo
Sensors 2023, 23(22), 9056; https://doi.org/10.3390/s23229056 - 9 Nov 2023
Cited by 2 | Viewed by 5119
Abstract
Electroencephalography (EEG) is a widely recognised non-invasive method for capturing brain electrophysiological activity [...] Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)

Research


20 pages, 7652 KiB  
Article
Multimodal Approach for Pilot Mental State Detection Based on EEG
by Ibrahim Alreshidi, Irene Moulitsas and Karl W. Jenkins
Sensors 2023, 23(17), 7350; https://doi.org/10.3390/s23177350 - 23 Aug 2023
Cited by 7 | Viewed by 2681
Abstract
The safety of flight operations depends on the cognitive abilities of pilots. In recent years, there has been growing concern about potential accidents caused by the declining mental states of pilots. We have developed a novel multimodal approach for mental state detection in pilots using electroencephalography (EEG) signals. Our approach includes an advanced automated preprocessing pipeline to remove artefacts from the EEG data, a feature extraction method based on Riemannian geometry analysis of the cleaned EEG data, and a hybrid ensemble learning technique that combines the results of several machine learning classifiers. The proposed approach provides improved accuracy compared to existing methods, achieving an accuracy of 86% when tested on cleaned EEG data. The EEG dataset was collected from 18 pilots who participated in flight experiments and publicly released at NASA’s open portal. This study presents a reliable and efficient solution for detecting mental states in pilots and highlights the potential of EEG signals and ensemble learning algorithms in developing cognitive cockpit systems. The use of an automated preprocessing pipeline, feature extraction method based on Riemannian geometry analysis, and hybrid ensemble learning technique set this work apart from previous efforts in the field and demonstrates the innovative nature of the proposed approach. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
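The Riemannian-geometry pipeline named in the abstract above, covariance matrices projected into a tangent space and then classified by a voting ensemble, can be sketched roughly as follows. This is not the authors' code: the data are synthetic, the arithmetic mean stands in for a proper Riemannian mean reference point, and the estimator choices and parameters are illustrative.

```python
import numpy as np
from scipy.linalg import logm, sqrtm, inv
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def covariances(epochs):
    # one SPD covariance matrix per epoch (channels x channels)
    return np.array([np.cov(e) for e in epochs])

def tangent_space(covs, ref):
    # project SPD matrices onto the tangent space at the reference point:
    # S_i = logm(ref^-1/2 C_i ref^-1/2), then vectorise the upper triangle
    ref_isqrt = inv(sqrtm(ref))
    iu = np.triu_indices(covs.shape[1])
    feats = []
    for c in covs:
        s = logm(ref_isqrt @ c @ ref_isqrt)
        feats.append(np.real(s[iu]))
    return np.array(feats)

# synthetic "EEG": 60 epochs, 8 channels, 128 samples, two classes at different scales
X = rng.standard_normal((60, 8, 128))
y = np.repeat([0, 1], 30)
X[y == 1] *= 1.5

covs = covariances(X)
ref = covs.mean(axis=0)          # cheap reference point (arithmetic mean)
feats = tangent_space(covs, ref)

# soft-voting ensemble over several base learners
clf = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("ert", ExtraTreesClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
clf.fit(feats, y)
print(clf.score(feats, y))       # training accuracy of the ensemble
```

The tangent-space step turns each covariance matrix into an ordinary Euclidean feature vector, which is what makes standard classifiers applicable.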
Figures:
Figure 1: A typical snapshot and schematic of each experiment.
Figure 2: An outline of the multimodal approach based on EEG.
Figure 3: A simplified form of the Autoreject algorithm operation.
Figure 4: A geometric depiction of the tangent space mapping process.
Figure 5: The size of the dataset before and after preprocessing.
Figure 6: An eight-epoch example of the EEG signals before and after preprocessing.
Figure 7: Spectral power topography during APPD mental states: (A) NE, (B) SS, (C) CA, and (D) DA.
Figure 8: Confusion matrices for the 5-fold cross-validation results: (A) RF, (B) ERT, (C) GTB, (D) AdaBoost, and (E) Voting.
Figure A1: EEG electrode names and locations.
23 pages, 8261 KiB  
Article
Modulations of Cortical Power and Connectivity in Alpha and Beta Bands during the Preparation of Reaching Movements
by Davide Borra, Silvia Fantozzi, Maria Cristina Bisi and Elisa Magosso
Sensors 2023, 23(7), 3530; https://doi.org/10.3390/s23073530 - 28 Mar 2023
Cited by 9 | Viewed by 2584
Abstract
Planning goal-directed movements towards different targets is at the basis of common daily activities (e.g., reaching), involving visual, visuomotor, and sensorimotor brain areas. Alpha (8–13 Hz) and beta (13–30 Hz) oscillations are modulated during movement preparation and are implicated in correct motor functioning. However, how brain regions activate and interact during reaching tasks and how brain rhythms are functionally involved in these interactions is still limitedly explored. Here, alpha and beta brain activity and connectivity during reaching preparation are investigated at EEG-source level, considering a network of task-related cortical areas. Sixty-channel EEG was recorded from 20 healthy participants during a delayed center-out reaching task and projected to the cortex to extract the activity of 8 cortical regions per hemisphere (2 occipital, 2 parietal, 3 peri-central, 1 frontal). Then, we analyzed event-related spectral perturbations and directed connectivity, computed via spectral Granger causality and summarized using graph theory centrality indices (in degree, out degree). Results suggest that alpha and beta oscillations are functionally involved in the preparation of reaching in different ways, with the former mediating the inhibition of the ipsilateral sensorimotor areas and disinhibition of visual areas, and the latter coordinating disinhibition of the contralateral sensorimotor and visuomotor areas. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
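The directed-connectivity measure used above, Granger causality, rests on a simple idea: signal x "Granger-causes" y if x's past improves prediction of y beyond y's own past. A minimal time-domain sketch on a toy system (not the paper's spectral, source-level analysis; model order and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def granger_xy(x, y, p=2):
    """Time-domain Granger causality x -> y with an order-p linear model.
    Compares residual variance of predicting y from its own past against
    predicting y from the past of both x and y."""
    n = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    full = np.column_stack([own] + [x[p - k:n - k] for k in range(1, p + 1)])

    def resid_var(A):
        coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
        return np.var(Y - A @ coef)

    return np.log(resid_var(own) / resid_var(full))

# toy system: x drives y with a one-sample lag
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.standard_normal()

gc_xy = granger_xy(x, y)   # should be clearly positive (x helps predict y)
gc_yx = granger_xy(y, x)   # should be near zero
print(gc_xy, gc_yx)

# over a full network, the paper's graph indices reduce to sums of the
# (thresholded) directed connectivity matrix: in degree = column sums,
# out degree = row sums
```

The spectral version used in the paper decomposes this ratio by frequency, which is what allows band-specific (alpha vs. beta) connectivity statements.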
Figures:
Figure 1: (a) Recording setup; (b) electrode locations (10/10 system) and ROIs from the Desikan–Killiany atlas, comprising occipital (CU, LO), parietal (PCU, SP), peri-central (PoC, PrC, PaC), and frontal (SF) regions; (c) trial sequence of the delayed center-out reaching task (rest, cue, 2 s preparation, forward movement, 2 s hold, backward movement).
Figure 2: Grand-average event-related spectral perturbations (ERSPs) for each selected ROI of the left and right hemispheres, with cue and go onsets marked.
Figure 3: Alpha (a) and beta (b) ERSPs averaged within band and plotted over time for each ROI of the left and right hemispheres, with the standard error of the mean across subjects.
Figure 4: Alpha (a) and beta (b) ERSPs averaged over the late movement-preparation interval (1–2 s after cue onset), with significance markers versus baseline (* p < 0.05, ** p < 0.01, *** p < 0.001) and between hemispheres († p < 0.05, †† p < 0.01).
Figure 5: Directed connections between ROIs (spectral Granger causality) significantly higher (red) or lower (blue) during reaching preparation than at rest, in alpha (left) and beta (right) bands.
Figure 6: (a) ROIs with significantly different in degree (left) and out degree (right) during movement preparation versus rest in the alpha band; (b) per-ROI differences in incoming and outgoing connections.
Figure 7: (a) ROIs with significantly different in degree and out degree during movement preparation versus rest in the beta band; (b) per-ROI differences in incoming and outgoing connections.
11 pages, 1339 KiB  
Article
Extraction of Individual EEG Gamma Frequencies from the Responses to Click-Based Chirp-Modulated Sounds
by Aurimas Mockevičius, Yusuke Yokota, Povilas Tarailis, Hatsunori Hasegawa, Yasushi Naruse and Inga Griškova-Bulanova
Sensors 2023, 23(5), 2826; https://doi.org/10.3390/s23052826 - 4 Mar 2023
Cited by 3 | Viewed by 2810
Abstract
Activity in the gamma range is related to many sensory and cognitive processes that are impaired in neuropsychiatric conditions. Therefore, individualized measures of gamma-band activity are considered to be potential markers that reflect the state of networks within the brain. Relatively little has been studied in respect of the individual gamma frequency (IGF) parameter. The methodology for determining the IGF is not well established. In the present work, we tested the extraction of IGFs from electroencephalogram (EEG) data in two datasets where subjects received auditory stimulation consisting of clicks with varying inter-click periods, covering a 30–60 Hz range: in 80 young subjects EEG was recorded with 64 gel-based electrodes; in 33 young subjects, EEG was recorded using three active dry electrodes. IGFs were extracted from either fifteen or three electrodes in frontocentral regions by estimating the individual-specific frequency that most consistently exhibited high phase locking during the stimulation. The method showed overall high reliability of extracted IGFs for all extraction approaches; however, averaging over channels resulted in somewhat higher reliability scores. This work demonstrates that the estimation of individual gamma frequency is possible using a limited number of both the gel and dry electrodes from responses to click-based chirp-modulated sounds. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
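The IGF extraction described above, finding the stimulation frequency with the most consistent phase locking, can be illustrated with a simplified stand-in. Here inter-trial phase coherence (narrow-band filtering plus the Hilbert transform) serves as the phase-locking measure; the sampling rate, bandwidth, and trial counts are illustrative, not the paper's.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(2)
fs = 500
t = np.arange(0, 1.0, 1 / fs)
true_igf = 42  # the synthetic subject's gamma resonance

# 30 trials phase-locked at 42 Hz, plus broadband noise
trials = np.array([np.sin(2 * np.pi * true_igf * t)
                   + 0.8 * rng.standard_normal(t.size) for _ in range(30)])

def itpc(trials, f, fs, bw=2.0):
    """Inter-trial phase coherence at frequency f (band-pass + Hilbert phase)."""
    b, a = butter(4, [(f - bw) / (fs / 2), (f + bw) / (fs / 2)], btype="band")
    phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    return np.abs(np.exp(1j * phases).mean(axis=0)).mean()

cand = np.arange(30, 61)   # 30-60 Hz candidate range, matching the stimulation
scores = np.array([itpc(trials, f, fs) for f in cand])
igf = cand[np.argmax(scores)]
print(igf)
```

The phase-locked component survives trial averaging while the noise phases cancel, so the coherence curve peaks at the subject's resonant frequency.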
Figures:
Figure 1: (A) Schematic of the sound stimulus; (B) electrode placement for the 64- and 3-channel systems, with analysis channels in green; (C) time-window definition for calculating IGFs from PLI, with stimulation timing and the +150 ms edge of the averaging window marked.
Figure 2: Example of IGF estimation on an average of 15 channels and averaged chirp-down and chirp-up parts in two subjects: a matrix of 100 trial iterations and the 5 frequencies with the highest PLI response, with the extracted IGF marked in red.
Figure 3: Example time-frequency plots of PLIs for two subjects from (A) the 15 gel electrodes (with topoplots at the IGF for chirp-down, chirp-up, and their average) and (B) the three dry electrodes.
18 pages, 3144 KiB  
Article
A Sparse Representation Classification Scheme for the Recognition of Affective and Cognitive Brain Processes in Neuromarketing
by Vangelis P. Oikonomou, Kostas Georgiadis, Fotis Kalaganis, Spiros Nikolopoulos and Ioannis Kompatsiaris
Sensors 2023, 23(5), 2480; https://doi.org/10.3390/s23052480 - 23 Feb 2023
Cited by 10 | Viewed by 2545
Abstract
In this work, we propose a novel framework to recognize the cognitive and affective processes of the brain during neuromarketing-based stimuli using EEG signals. The most crucial component of our approach is the proposed classification algorithm that is based on a sparse representation classification scheme. The basic assumption of our approach is that EEG features from a cognitive or affective process lie on a linear subspace. Hence, a test brain signal can be represented as a linear (or weighted) combination of brain signals from all classes in the training set. The class membership of the brain signals is determined by adopting the Sparse Bayesian Framework with graph-based priors over the weights of linear combination. Furthermore, the classification rule is constructed by using the residuals of linear combination. The experiments on a publicly available neuromarketing EEG dataset demonstrate the usefulness of our approach. For the two classification tasks offered by the employed dataset, namely affective state recognition and cognitive state recognition, the proposed classification scheme manages to achieve a higher classification accuracy compared to the baseline and state-of-the art methods (more than 8% improvement in classification accuracy). Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
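The core assumption above, that signals of a class lie near a linear subspace spanned by that class's training samples, leads to the classic sparse representation classification rule: sparsely code the test sample over all training samples, then pick the class whose coefficients give the smallest reconstruction residual. A minimal sketch (plain Lasso in place of the paper's Sparse Bayesian framework with graph priors; data and parameters are synthetic):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)

def src_predict(X_train, y_train, x_test, alpha=0.01):
    """Sparse representation classification: code the test sample as a sparse
    combination of all training samples, then assign the class whose samples'
    coefficients reconstruct it with the smallest residual."""
    D = X_train.T                                # dictionary: features x samples
    w = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, x_test).coef_
    best, best_r = None, np.inf
    for c in np.unique(y_train):
        w_c = np.where(y_train == c, w, 0.0)     # keep only class-c coefficients
        r = np.linalg.norm(x_test - D @ w_c)     # class-wise reconstruction residual
        if r < best_r:
            best, best_r = c, r
    return best

# two classes living near different 1-D subspaces of a 20-D feature space
d0, d1 = rng.standard_normal(20), rng.standard_normal(20)
labels = np.repeat([0, 1], 25)
X = np.array([a * (d0 if c == 0 else d1) + 0.05 * rng.standard_normal(20)
              for c, a in zip(labels, rng.uniform(1, 2, 50))])

test1 = 1.5 * d1 + 0.05 * rng.standard_normal(20)   # near the class-1 subspace
test0 = 1.2 * d0 + 0.05 * rng.standard_normal(20)   # near the class-0 subspace
print(src_predict(X, labels, test1), src_predict(X, labels, test0))
```

Because the sparse coder has no incentive to use atoms from the wrong subspace, the residual gap between classes is typically large, which is what makes the residual rule robust.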
Figures:
Figure 1: Averaged classification accuracy (with standard error) between the least and most preferred products.
Figure 2: Overall accuracy and confusion matrices for each method with respect to product preferences, including class-wise precision and recall.
Figure 3: Overall accuracy and confusion matrices for each method with respect to which product the participant views, including class-wise precision and recall.
Figure 4: Averaged classification accuracy as the number of training samples varies from 20 to 160.
18 pages, 2576 KiB  
Article
Cross-Domain Transfer of EEG to EEG or ECG Learning for CNN Classification Models
by Chia-Yen Yang, Pin-Chen Chen and Wen-Chen Huang
Sensors 2023, 23(5), 2458; https://doi.org/10.3390/s23052458 - 23 Feb 2023
Cited by 8 | Viewed by 3786
Abstract
Electroencephalography (EEG) is often used to evaluate several types of neurological brain disorders because of its noninvasive and high temporal resolution. In contrast to electrocardiography (ECG), EEG can be uncomfortable and inconvenient for patients. Moreover, deep-learning techniques require a large dataset and a long time for training from scratch. Therefore, in this study, EEG–EEG or EEG–ECG transfer learning strategies were applied to explore their effectiveness for the training of simple cross-domain convolutional neural networks (CNNs) used in seizure prediction and sleep staging systems, respectively. The seizure model detected interictal and preictal periods, whereas the sleep staging model classified signals into five stages. The patient-specific seizure prediction model with six frozen layers achieved 100% accuracy for seven out of nine patients and required only 40 s of training time for personalization. Moreover, the cross-signal transfer learning EEG–ECG model for sleep staging achieved an accuracy approximately 2.5% higher than that of the ECG model; additionally, the training time was reduced by >50%. In summary, transfer learning from an EEG model to produce personalized models for a more convenient signal can both reduce the training time and increase the accuracy; moreover, challenges such as data insufficiency, variability, and inefficiency can be effectively overcome. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
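The "frozen layers" idea above, reusing early layers of a pretrained network and updating only the later ones, can be shown in miniature without any deep-learning framework. This is a deliberately tiny numpy sketch, not the paper's CNN: a two-layer network whose first layer is kept frozen while the second is fine-tuned by gradient descent on a synthetic target task.

```python
import numpy as np

rng = np.random.default_rng(4)

# a tiny two-layer network "pretrained" on a source task
W1 = rng.standard_normal((8, 16)) * 0.1   # layer 1: frozen feature extractor
W2 = rng.standard_normal((16, 1)) * 0.1   # layer 2: fine-tuned on the target task

def forward(X):
    h = np.tanh(X @ W1)
    return h, h @ W2

# target-task data whose labels are reachable from the frozen features
X = rng.standard_normal((200, 8))
h0, pred0 = forward(X)
y = h0 @ rng.standard_normal((16, 1))
mse0 = float(np.mean((pred0 - y) ** 2))

W1_before = W1.copy()
for _ in range(1000):                     # gradient steps on W2 only; W1 never updated
    h, pred = forward(X)
    W2 -= 0.5 * h.T @ (pred - y) / len(X)

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))
print(mse0, mse)                          # fine-tuning should shrink the error
```

Training far fewer parameters is what buys the short per-patient personalization times reported in the abstract; the frozen layers carry over whatever the source task already learned.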
Figures:
Figure 1: Illustration of four epileptic states in EEG signals.
Figure 2: Training process for 10-fold cross-validation using (a) recordwise, (b) subjectwise, and (c) patient-specific approaches.
Figure 3: Basic procedure for classifying preictal and interictal periods using (a) recordwise, (b) subjectwise, and (c) patient-specific approaches.
Figure 4: Examples of sleep recordings and hypnograms from the (a) EEG and (b) ECG datasets.
Figure 5: Basic procedure for sleep staging in the (a) ECG, (b) EEG, and (c) EEG–ECG transfer learning models.
Figure 6: Training process for 5-fold cross-validation.
Figure 7: Accuracy (upper) and loss (lower) curves of the (a) EEG, (b) ECG, and (c) EEG–ECG (frozen block_1) models.
Figure 8: Confusion matrices of the (a) EEG, (b) ECG, and (c) EEG–ECG (frozen block_1) models.
24 pages, 3286 KiB  
Article
An Efficient Machine Learning-Based Emotional Valence Recognition Approach Towards Wearable EEG
by Lamiaa Abdel-Hamid
Sensors 2023, 23(3), 1255; https://doi.org/10.3390/s23031255 - 21 Jan 2023
Cited by 15 | Viewed by 4085
Abstract
Emotion artificial intelligence (AI) is being increasingly adopted in several industries such as healthcare and education. Facial expressions and tone of speech have been previously considered for emotion recognition, yet they have the drawback of being easily manipulated by subjects to mask their true emotions. Electroencephalography (EEG) has emerged as a reliable and cost-effective method to detect true human emotions. Recently, huge research effort has been put to develop efficient wearable EEG devices to be used by consumers in out of the lab scenarios. In this work, a subject-dependent emotional valence recognition method is implemented that is intended for utilization in emotion AI applications. Time and frequency features were computed from a single time series derived from the Fp1 and Fp2 channels. Several analyses were performed on the strongest valence emotions to determine the most relevant features, frequency bands, and EEG timeslots using the benchmark DEAP dataset. Binary classification experiments resulted in an accuracy of 97.42% using the alpha band, by that outperforming several approaches from literature by ~3–22%. Multiclass classification gave an accuracy of 95.0%. Feature computation and classification required less than 0.1 s. The proposed method thus has the advantage of reduced computational complexity as, unlike most methods in the literature, only two EEG channels were considered. In addition, minimal features concluded from the thorough analyses conducted in this study were used to achieve state-of-the-art performance. The implemented EEG emotion recognition method thus has the merits of being reliable and easily reproducible, making it well-suited for wearable EEG devices. Full article
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
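The kind of lightweight time- and frequency-domain features this abstract relies on, computed from a single frontal channel pair, can be sketched as follows. Hjorth parameters (a standard EEG time-domain feature set) and Welch band power stand in for the paper's full feature list; the sampling rate and synthetic signal are illustrative.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
fs = 128  # e.g., the DEAP dataset's downsampled rate

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def band_power(x, fs, lo, hi):
    """Average power in [lo, hi] Hz via Welch's PSD estimate."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f <= hi)
    return np.trapz(pxx[mask], f[mask])

# synthetic single-channel signal with a strong alpha (10 Hz) component
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

act, mob, comp = hjorth(x)
alpha = band_power(x, fs, 8, 13)
beta = band_power(x, fs, 13, 30)
print(act, mob, comp, alpha, beta)
```

Features this cheap are what make sub-0.1 s feature computation, and hence wearable deployment, plausible.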
Show Figures

Figure 1

Figure 1
<p>Valence-arousal model [<a href="#B6-sensors-23-01255" class="html-bibr">6</a>].</p>
Full article ">Figure 2
<p>The cerebral cortex divided into the frontal, temporal, parietal, and occipital lobes [<a href="#B13-sensors-23-01255" class="html-bibr">13</a>].</p>
Full article ">Figure 3
<p>The international 10/20 system for electrode placement [<a href="#B14-sensors-23-01255" class="html-bibr">14</a>].</p>
Figure 4
<p>Samples from delta, theta, alpha, beta, and gamma brain waves [<a href="#B28-sensors-23-01255" class="html-bibr">28</a>].</p>
Figure 5
<p>(<b>a</b>) Conventional lab EEG headset [<a href="#B29-sensors-23-01255" class="html-bibr">29</a>] versus (<b>b</b>) wearable headset from NeuroSky [<a href="#B30-sensors-23-01255" class="html-bibr">30</a>].</p>
Figure 6
<p>Emotion AI system diagram.</p>
Figure 7
<p>Characteristic changes in an arbitrary reference signal, illustrating their relation to the different Hjorth parameters [<a href="#B100-sensors-23-01255" class="html-bibr">100</a>].</p>
Figure 8
<p>Experimental workflow.</p>
Figure 9
<p>Valence classification accuracies for the different features and EEG frequency bands.</p>
Figure 10
<p>Boxplots of the variance and PSD features for the delta, alpha, and fast gamma bands considering the full 1-minute EEG signal.</p>
16 pages, 4713 KiB  
Article
Estimating the Depth of Anesthesia from EEG Signals Based on a Deep Residual Shrinkage Network
by Meng Shi, Ziyu Huang, Guowen Xiao, Bowen Xu, Quansheng Ren and Hong Zhao
Sensors 2023, 23(2), 1008; https://doi.org/10.3390/s23021008 - 15 Jan 2023
Cited by 13 | Viewed by 4591
Abstract
The reliable monitoring of the depth of anesthesia (DoA) is essential to control the anesthesia procedure. Electroencephalography (EEG) has been widely used to estimate DoA since EEG can reflect the effect of anesthetic drugs on the central nervous system (CNS). In this study, we propose a deep learning model, consisting mainly of a deep residual shrinkage network (DRSN) and a 1 × 1 convolution network, to estimate DoA in terms of patient state index (PSI) values. First, we preprocessed the four raw channels of EEG signals to remove electrical noise and other physiological signals. The proposed model then takes the preprocessed EEG signals as inputs to predict PSI values. We also extracted 14 features from the preprocessed EEG signals and implemented three conventional feature-based models for comparison. A dataset of 18 patients was used to evaluate the models’ performances. The results of the five-fold cross-validation show a relatively high similarity between the ground-truth PSI values and those predicted by our proposed model, with a Spearman’s rank correlation coefficient of 0.9344, outperforming the conventional models. In addition, an ablation experiment was conducted to demonstrate the effectiveness of the soft-thresholding module for EEG-signal processing, and a cross-subject validation was implemented to illustrate the robustness of the proposed method. In summary, the procedure is not merely feasible for estimating DoA by mimicking PSI values but also a step toward a precise DoA-estimation system with more convincing assessments of anesthetization levels.
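The soft-thresholding operation at the core of the DRSN can be sketched as follows. The channel-wise threshold here uses a fixed coefficient `alpha` for illustration; in a residual shrinkage unit it is produced by a small learned gating sub-network.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding as used in residual shrinkage units:
    zeroes out values within [-tau, tau] and shrinks the rest
    toward zero, suppressing noise-like low-magnitude features."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def channelwise_threshold(feature_map, alpha):
    """Channel-wise threshold tau = alpha * mean(|x|) per channel.
    feature_map has shape (channels, length); alpha in (0, 1) is a
    fixed scalar here, whereas the network learns it per channel."""
    return alpha * np.mean(np.abs(feature_map), axis=1, keepdims=True)
```

A usage example: `soft_threshold(z, channelwise_threshold(z, 0.3))` denoises a feature map `z` channel by channel.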
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
Figure 1
<p>Our workflow.</p>
Figure 2
<p>The algorithm flowchart of the WT-CEEMDAN-ICA method used to remove EOAs from EEG signals.</p>
Figure 3
<p>The structure of residual building block (RBB): (<b>a</b>) the identity block where the input feature map is the same size as the output feature map. H, W, and C represent the height, width, and channels of the input and output feature map, respectively. (<b>b</b>) the convolutional block where the size of the input feature map is different from that of the output feature map. There is a convolution operation and a Batch-normalization operation in the convolutional shortcut for changing the shape of the input. <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">H</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">W</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">C</mi> <mn>1</mn> </msub> </mrow> </semantics></math> represent the height, width, and channels of the input feature map, respectively. <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">H</mi> <mn>2</mn> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">W</mi> <mn>2</mn> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">C</mi> <mn>2</mn> </msub> </mrow> </semantics></math> represent the height, width, and channels of the output feature map, respectively. An RBB consists of two convolutional layers, two batch normalization (BN) layers, two rectifier linear units (ReLUs) layers, and one shortcut connection.</p>
Figure 4
<p>The structure of residual shrinkage building unit with channel-wise thresholds (RSBU-CW). <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">H</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">W</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">C</mi> <mn>1</mn> </msub> </mrow> </semantics></math> represent the height, width, and channels of the input feature map, respectively. <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">H</mi> <mn>2</mn> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">W</mi> <mn>2</mn> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">C</mi> <mn>2</mn> </msub> </mrow> </semantics></math> represent the height, width, and channels of the output feature map, respectively. There is a soft thresholding module in RSBU-CW. <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mrow> <mi>a</mi> <mi>v</mi> <mi>g</mi> </mrow> </msub> </mrow> </semantics></math>, <span class="html-italic">z</span>, and <span class="html-italic">α</span> are the indicators of the feature maps used to determine the threshold <span class="html-italic">τ</span>. <span class="html-italic">x</span> and <span class="html-italic">y</span> are the input and output feature maps of the soft thresholding module, respectively.</p>
Figure 5
<p>The illustration of 1 × 1 convolution. H, W, and C represent the height, width, and channels of the input feature map, respectively. A 1 × 1 convolution does not change the height or width but the number of channels of inputs.</p>
Figure 6
<p>The structure of our proposed model consists of the DRSN-CW block and 1 × 1 convolution block. The inputs of our proposed model are 4 channel-EEG signals, and the outputs are the corresponding predicted PSI values.</p>
Figure 7
<p>The data distribution of the dataset used in this study.</p>
Figure 8
<p>The classification performances (ACC, SE, and F1) of all the models on different anesthetized states (AW, LA, NA, and DA) and the regression performance (MSE) of all the models.</p>
Figure 9
<p>Part of the predicted PSI values of our proposed model. The red line represents the ideal prediction model where the predicted PSI values equal the ground truth PSI values exactly.</p>
Figure 10
<p>The regression and classification performances of the two models in the ablation experiment on the soft thresholding module in the RSBU-CW.</p>
Figure 11
<p>The classification performances (ACC, SE, and F1) of all the models on different anesthetized states (AW, LA, NA, and DA) and the regression performance (MSE) of all the models in cross-subject validation.</p>
Figure 12
<p>Part of the predicted PSI values of our proposed model in cross-subject validation.</p>
19 pages, 1457 KiB  
Article
Comprehensive Analysis of Feature Extraction Methods for Emotion Recognition from Multichannel EEG Recordings
by Rajamanickam Yuvaraj, Prasanth Thagavel, John Thomas, Jack Fogarty and Farhan Ali
Sensors 2023, 23(2), 915; https://doi.org/10.3390/s23020915 - 12 Jan 2023
Cited by 38 | Viewed by 6057
Abstract
Advances in signal processing and machine learning have expedited electroencephalogram (EEG)-based emotion recognition research, and numerous EEG signal features have been investigated to detect or characterize human emotions. However, most studies in this area have used relatively small monocentric data and focused on a limited range of EEG features, making it difficult to compare the utility of different sets of EEG features for emotion recognition. This study addressed that gap by comparing the classification accuracy (performance) of a comprehensive range of EEG feature sets for identifying emotional states in terms of valence and arousal. The classification accuracy of five EEG feature sets was investigated, including statistical features, fractal dimension (FD), Hjorth parameters, higher order spectra (HOS), and features derived using wavelet analysis. Performance was evaluated using two classifier methods, support vector machine (SVM) and classification and regression tree (CART), across five independent and publicly available datasets linking EEG to emotional states: MAHNOB-HCI, DEAP, SEED, AMIGOS, and DREAMER. The FD-CART feature-classification method attained the best mean classification accuracy for valence (85.06%) and arousal (84.55%) across the five datasets. The stability of these findings across the five different datasets also indicates that FD features derived from EEG data are reliable for emotion recognition. The results may lead to the development of an online feature extraction framework, thereby enabling a real-time EEG-based emotion recognition system.
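Two of the FD features compared in this study, Petrosian's and Higuchi's fractal dimensions, follow standard textbook definitions and can be sketched as below; `kmax` is a conventional choice, not a value taken from the paper.

```python
import numpy as np

def petrosian_fd(x):
    """Petrosian fractal dimension: based on the number of sign
    changes in the first difference of the signal."""
    diff = np.diff(x)
    n_delta = np.sum(diff[1:] * diff[:-1] < 0)  # sign changes
    n = len(x)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_delta)))

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension: slope of log(curve length)
    versus log(1/k) over time-lag scales k = 1..kmax."""
    n = len(x)
    lks = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length for this start offset
            lm = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / (len(idx) - 1) / k
            lengths.append(lm / k)
        lks.append(np.mean(lengths))
    coeffs = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lks), 1)
    return coeffs[0]
```

A rougher, noisier signal yields a higher FD, which is what makes these features discriminative between emotional states.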
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
Figure 1
<p>An overview of the proposed machine learning framework for emotion recognition based on EEG signals.</p>
Figure 2
<p>The two-dimensional model of emotions: valence–arousal plane.</p>
Figure 3
<p>Top three feature sets. Boxplots of CART accuracy on the DEAP, DREAMER, MAHNOB, AMIGOS, and SEED emotion datasets. The <span class="html-italic">X</span>-axis represents the dataset name, and the <span class="html-italic">Y</span>-axis indicates the classification accuracy. Each black dot represents the average classification accuracy of one participant across 4 folds.</p>
Figure 4
<p>Topography of normalized EEG FD features for high/low valence. GM denotes the grand mean of each FD feature across all the datasets. KFD—Katz’s fractal dimension, PFD—Petrosian fractal dimension, HFD—Higuchi’s fractal dimension.</p>
Figure 5
<p>Topography of normalized EEG FD features for high/low arousal. The SEED dataset does not have an arousal class. GM denotes the grand mean of each FD feature across all the datasets. KFD—Katz’s fractal dimension, PFD—Petrosian fractal dimension, HFD—Higuchi’s fractal dimension.</p>
12 pages, 3996 KiB  
Article
Electroencephalography Reflects User Satisfaction in Controlling Robot Hand through Electromyographic Signals
by Hyeonseok Kim, Makoto Miyakoshi, Yeongdae Kim, Sorawit Stapornchaisit, Natsue Yoshimura and Yasuharu Koike
Sensors 2023, 23(1), 277; https://doi.org/10.3390/s23010277 - 27 Dec 2022
Cited by 5 | Viewed by 2294
Abstract
This study addresses the time intervals during robot control that dominate user satisfaction and the factors of robot movement that induce satisfaction. We designed a robot control system using electromyography signals. In each trial, participants were exposed to different experiences as the cutoff frequencies of a low-pass filter were changed. The participants attempted to grab a bottle by controlling a robot. They were asked to evaluate four indicators (stability, imitation, response time, and movement speed) and indicate their satisfaction at the end of each trial by completing a questionnaire. The electroencephalography signals of the participants were recorded while they controlled the robot and responded to the questionnaire. Two independent component clusters, in the precuneus and postcentral gyrus, were the most sensitive to subjective evaluations. Regarding the moment that dominated satisfaction, brain activity exhibited significant differences in satisfaction not immediately after an input was fed but during the later stage of control. The other indicators exhibited independently significant patterns in event-related spectral perturbations. Comparing these indicators in a low-frequency band related to satisfaction with those for imitation and movement speed, which had significant differences, revealed that imitation covered significant intervals in satisfaction. This implies that imitation was the most important contributing factor among the four indicators. Our results reveal that, regardless of subjective satisfaction, objective performance evaluation might more fully reflect user satisfaction.
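Event-related spectral perturbations (ERSPs) of the kind analyzed here are commonly computed as trial-averaged spectrograms expressed in dB relative to a baseline window. A minimal sketch, with window sizes that are illustrative rather than the study's exact parameters:

```python
import numpy as np
from scipy.signal import spectrogram

def ersp_db(trials, fs, baseline_cols):
    """Trial-averaged spectrogram in dB relative to the mean power of
    the baseline columns in each frequency bin (a common ERSP
    definition). trials: list of 1-D arrays of equal length."""
    specs = [spectrogram(np.asarray(x), fs=fs, nperseg=64, noverlap=48)[2]
             for x in trials]
    f, t, _ = spectrogram(np.asarray(trials[0]), fs=fs, nperseg=64, noverlap=48)
    mean_spec = np.mean(specs, axis=0)                       # (freqs, times)
    base = np.mean(mean_spec[:, baseline_cols], axis=1, keepdims=True)
    return f, t, 10.0 * np.log10(mean_spec / base)
```

Positive dB values indicate event-related synchronization relative to baseline; negative values indicate desynchronization.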
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
Figure 1
<p>Experimental environment (not to scale). During the experiment, participants sat on a chair and placed their right arm on an armrest attached to the table. A keyboard was placed on the table, through which participants responded to questions by pressing a button. For the task, participants were asked to grab a bottle, which was positioned such that the robot hand could be bent to grab it. The robot hand could bend/extend its wrist and grip with its fingers with one degree of freedom (DOF). An opaque face cover prevented the participants from seeing their right arm. A monitor was placed in front of the table along the midline of the body, with the robot hand on that line, enabling participants to easily see the screen.</p>
Figure 2
<p>Dipole densities of clusters showing significant differences between conditions. The mean Montreal Neurological Institute (MNI) coordinate of the first cluster was (35 0 52), and the estimated location was the postcentral gyrus, with a probability of 17.9%. The mean MNI coordinate of the second cluster was (26 −69 55), and the estimated location was the precuneus, with a probability of 22.5%.</p>
Figure 3
<p>Event-related spectral perturbation of clusters including a significant area in the comparison of satisfaction. The dotted line represents the onset of the “Go” cue. In each figure set, the third column represents t-statistics and the significant area. This plot indicates that satisfaction is determined dominantly in the final phase of control.</p>
Figure 4
<p>Significant areas in the comparisons; stability (unstable vs. stable), imitation (bad vs. good), response time (delayed vs. no delay), and movement speed (extremely slow vs. extremely fast). The dotted line represents the onset of the “Go” cue. The blue areas represent negative t-statistics, and the red areas represent positive t-statistics.</p>
Figure 5
<p>Power (1–8 Hz) of the clusters related to the precuneus. We extracted powers within the range of 1–8 Hz shown as a significant area in ERSP. The comparison of the movement speed exhibited a power difference within the range of 0.6–1 s, although the power difference in satisfaction was not significant. Although the power difference within the range of 1.6–2 s was significant in the comparison of satisfaction and imitation, the difference in the movement speed was not. Within the range of 0.3–0.6 s, satisfaction exhibited a moderate difference, and imitation exhibited a more significant difference. All tests were performed by <span class="html-italic">t</span>-test (*: <span class="html-italic">p</span> &lt; 0.05; **: <span class="html-italic">p</span> &lt; 0.01).</p>
27 pages, 1474 KiB  
Article
An Ensemble Model for Consumer Emotion Prediction Using EEG Signals for Neuromarketing Applications
by Syed Mohsin Ali Shah, Syed Muhammad Usman, Shehzad Khalid, Ikram Ur Rehman, Aamir Anwar, Saddam Hussain, Syed Sajid Ullah, Hela Elmannai, Abeer D. Algarni and Waleed Manzoor
Sensors 2022, 22(24), 9744; https://doi.org/10.3390/s22249744 - 12 Dec 2022
Cited by 16 | Viewed by 4600
Abstract
Traditional advertising techniques seek to govern the consumer’s opinion toward a product, which may not reflect their actual behavior at the time of purchase. Advertisers may thus misjudge consumer behavior because predicted opinions do not always correspond to consumers’ actual purchase behaviors. Neuromarketing is a new paradigm for understanding customer buying behavior and decision making, as well as for predicting their gestures for product utilization, through an unconscious process. Existing methods do not focus on effective preprocessing and classification techniques for electroencephalogram (EEG) signals, so in this study an effective method for both is proposed. The proposed method involves preprocessing EEG signals by removing noise and applying the synthetic minority oversampling technique (SMOTE) to deal with the class imbalance problem. The dataset employed in this study is a publicly available neuromarketing dataset. Automated features were extracted using a long short-term memory (LSTM) network and then concatenated with handcrafted features such as power spectral density (PSD) and discrete wavelet transform (DWT) coefficients to create a complete feature set. Classification was done using the proposed hybrid classifier, which optimizes the weights of two machine learning classifiers, the support vector machine (SVM) and random forest (RF), and one deep learning classifier, a deep neural network (DNN), and classifies the data into like and dislike. The proposed hybrid model outperforms the individual RF, SVM, and DNN classifiers and achieves an accuracy of 96.89%. Accuracy, sensitivity, specificity, precision, and F1 score were computed to evaluate the proposed method and compare it with recent state-of-the-art methods.
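A weight-optimized soft-voting ensemble of the kind described can be sketched with scikit-learn. This is a sketch under stated assumptions: the weight grid and hyperparameters are illustrative, and `MLPClassifier` merely stands in for the paper's DNN.

```python
from itertools import product

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def fit_weighted_ensemble(X_tr, y_tr, X_val, y_val):
    """Weighted soft voting over an SVM, a random forest, and a small
    neural network. Weights are picked by a coarse grid search on a
    validation split (the paper optimizes them; this grid is a
    stand-in for that optimization)."""
    models = [
        SVC(probability=True, random_state=0).fit(X_tr, y_tr),
        RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                      random_state=0).fit(X_tr, y_tr),
    ]
    probs = [m.predict_proba(X_val) for m in models]
    best_w, best_acc = (1.0, 1.0, 1.0), -1.0
    for w in product(np.linspace(0.0, 1.0, 5), repeat=3):
        if sum(w) == 0:
            continue
        p = sum(wi * pi for wi, pi in zip(w, probs)) / sum(w)
        acc = np.mean(p.argmax(axis=1) == y_val)
        if acc > best_acc:
            best_acc, best_w = acc, w
    return models, best_w

def ensemble_predict(models, weights, X):
    """Weighted average of class probabilities, then argmax."""
    p = sum(w * m.predict_proba(X) for w, m in zip(weights, models)) / sum(weights)
    return p.argmax(axis=1)
```

In practice the feature matrix `X` would hold the concatenated LSTM, PSD, and DWT features described in the abstract.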
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
Figure 1
<p>Preprocessing of EEG signals in literature.</p>
Figure 2
<p>Feature extraction of EEG signals in the literature.</p>
Figure 3
<p>Classification of EEG signals in the literature.</p>
Figure 4
<p>Flow diagram of the proposed consumer emotion prediction.</p>
Figure 5
<p>Decomposition of EEG signal into four levels by using DWT.</p>
Figure 6
<p>Architecture of LSTM model [<a href="#B36-sensors-22-09744" class="html-bibr">36</a>].</p>
Figure 7
<p>Block diagram of ensemble classifier.</p>
Figure 8
<p>AUC curve to examine the performance of ensemble classifier.</p>
Figure 9
<p>Analysis of different preprocessing techniques in the proposed system.</p>
Figure 10
<p>Analysis of different feature extraction techniques in the proposed system.</p>
Figure 11
<p>Analysis of different classification techniques in the proposed system.</p>
Figure 12
<p>Comparison of results achieved for consumer choice recognition by using different experiments.</p>
Figure 13
<p>Comparison of confusion matrix for different experimental settings.</p>
16 pages, 2217 KiB  
Article
Implementing Performance Accommodation Mechanisms in Online BCI for Stroke Rehabilitation: A Study on Perceived Control and Frustration
by Mads Jochumsen, Bastian Ilsø Hougaard, Mathias Sand Kristensen and Hendrik Knoche
Sensors 2022, 22(23), 9051; https://doi.org/10.3390/s22239051 - 22 Nov 2022
Cited by 7 | Viewed by 2533
Abstract
Brain–computer interfaces (BCIs) are successfully used for stroke rehabilitation, but the training is repetitive and patients can lose the motivation to train. Moreover, controlling the BCI may be difficult, which causes frustration and leads to even worse control. Patients might not adhere to the regimen due to frustration and lack of motivation/engagement. The aim of this study was to implement three performance accommodation mechanisms (PAMs) in an online motor imagery-based BCI to aid users and to evaluate their perceived control and frustration. Nineteen healthy participants controlled a fishing game with a BCI in four conditions: (1) no help, (2) augmented success (an augmented successful BCI attempt), (3) mitigated failure (an unsuccessful BCI attempt turned into neutral output), and (4) override input (an unsuccessful BCI attempt turned into successful output). Each condition was followed up and assessed with Likert-scale questionnaires and a post-experiment interview. Perceived control and frustration were best predicted by the amount of positive feedback the participant received. PAM help increased perceived control for poor BCI users but decreased it for good BCI users. The input-override PAM frustrated the users the most, and users differed in how they wanted to be helped. By using PAMs, developers have more freedom to create engaging stroke rehabilitation games.
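At their core, the three PAMs remap raw BCI outcomes to game feedback on designated help trials. A minimal sketch with illustrative outcome labels (the study's actual trial logic, including predefined outcomes and forced rejections, is richer than this):

```python
from enum import Enum

class PAM(Enum):
    NONE = "none"                 # condition 1: no help
    AUGMENTED_SUCCESS = "as"      # condition 2
    MITIGATED_FAILURE = "mf"      # condition 3
    INPUT_OVERRIDE = "io"         # condition 4

def apply_pam(bci_success: bool, pam: PAM, help_trial: bool) -> str:
    """Map a raw BCI classification outcome to the feedback shown in
    the game. Only designated help trials are modified; the outcome
    labels are hypothetical, not the study's exact game states."""
    if not help_trial or pam is PAM.NONE:
        return "success" if bci_success else "failure"
    if pam is PAM.AUGMENTED_SUCCESS:
        # a successful attempt is shown with augmented success
        return "augmented_success" if bci_success else "failure"
    if pam is PAM.MITIGATED_FAILURE:
        # an unsuccessful attempt is shown as a neutral outcome
        return "success" if bci_success else "neutral"
    # INPUT_OVERRIDE: an unsuccessful attempt is shown as a success
    return "success"
```

Keeping the remapping in one pure function makes it easy to cap the help rate (30% of trials in the study) in the surrounding game loop.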
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
Figure 1
<p>Data flow from the BCI cap to the fishing game developed in Unity. The BCI only controls the game when the black cursor is within the input window, marked by the green area on a bar displayed in the fishing game.</p>
Figure 2
<p>In the fishing game, participants control a fisherman reeling fish. Participants use arrow keys to move the hook up and down between three lanes. A fish may appear in a random lane from either left or right side and may swim into the participant’s hook. The BCI input window then begins and the participant may then perform MI when the black cursor is within the green area.</p>
Figure 3
<p>Each condition consisted of 20 trials. In the helped conditions, help trials with predefined outcomes (blue) were shuffled with normal (no PAM) trials (gray) to provide users with 30% help. Forced rejections (red) were inserted when people were succeeding above the 70% target control rate.</p>
Figure 4
<p>Each participant in the experiment (1) underwent BCI setup and BCI calibration, (2) played a fishing game in four conditions, starting with the normal condition, followed by (3) three helped conditions in a shuffled order. Participants were then debriefed about their experiences.</p>
Figure 5
<p>The relationship between perceived control and positive feedback is shown in the top row of each of the four conditions, while the relationship between frustration and positive feedback is shown in the middle row. In the bottom row, the relationship between frustration and perceived control is shown. AS: augmented success, IO: input override, MF: mitigated failure, and NO: normal condition without PAM help. Each data point represents the rating of a single participant.</p>
21 pages, 5698 KiB  
Article
Learning Optimal Time-Frequency-Spatial Features by the CiSSA-CSP Method for Motor Imagery EEG Classification
by Hai Hu, Zihang Pu, Haohan Li, Zhexian Liu and Peng Wang
Sensors 2022, 22(21), 8526; https://doi.org/10.3390/s22218526 - 5 Nov 2022
Cited by 9 | Viewed by 2760
Abstract
The common spatial pattern (CSP) is a popular feature extraction method for motor imagery (MI) electroencephalogram (EEG) classification in brain–computer interface (BCI) systems. However, combining temporal and spectral information in CSP-based spatial features is still a challenging issue, which greatly affects the performance of MI-based BCI systems. Here, we propose a novel circulant singular spectrum analysis embedded CSP (CiSSA-CSP) method for learning the optimal time-frequency-spatial features to improve MI classification accuracy. Specifically, raw EEG data are first segmented into multiple time segments, and spectrum-specific sub-bands are further derived by CiSSA from each time segment in a set of non-overlapping filter bands. CSP features extracted from all time-frequency segments thus contain richer time-frequency-spatial information. An experimental study was conducted on a publicly available EEG dataset (BCI Competition III dataset IVa) and a self-collected experimental EEG dataset to validate the effectiveness of the CiSSA-CSP method. Experimental results demonstrate that discriminative and robust features are extracted effectively. Compared with several state-of-the-art methods, the proposed method exhibited optimal accuracies of 96.6% and 95.2% on the public and experimental datasets, respectively, confirming that it is a promising method for improving the performance of MI-based BCIs.
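Classic CSP, the building block that CiSSA-CSP applies per time-frequency segment, can be sketched via whitening and eigendecomposition of the two class covariance matrices. This is a textbook formulation, not the paper's implementation:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Common spatial patterns. trials_* are lists of (channels, samples)
    arrays. Returns 2*n_pairs spatial filters (rows) maximizing the
    variance ratio between the two classes."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in
    # the whitened space (generalized eigendecomposition).
    evals, evecs = np.linalg.eigh(ca + cb)
    whit = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, v = np.linalg.eigh(whit @ ca @ whit.T)
    v = v[:, np.argsort(d)]          # ascending eigenvalues
    w = v.T @ whit                   # filters as rows
    n_ch = w.shape[0]
    sel = np.r_[0:n_pairs, n_ch - n_pairs:n_ch]  # extreme eigenvalues
    return w[sel]

def csp_features(trial, filters):
    """Normalized log-variance of the spatially filtered trial,
    the usual CSP feature vector."""
    var = np.var(filters @ trial, axis=1)
    return np.log(var / var.sum())
```

In CiSSA-CSP these filters would be learned separately for every time segment and sub-band, and the resulting features pooled before selection.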
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
Figure 1
<p>Illustration of the CiSSA-CSP method for motor-imagery classification.</p>
Figure 2
<p>(<b>a</b>) Electrodes used in our study (yellow circles) according to the extended international 10–20 system. (<b>b</b>) The scheme of the experiment. A single trial of the experiment was divided into two periods. In the first period, the subject relaxed for 1.75–2.25 s; and then the visual cues were indicated for 3.5 s when the subject performed the motor imageries.</p>
Figure 3
<p>Experiment setup. (<b>a</b>) Electrodes used in the experiment (yellow circles) according to the international 10−20 system. (<b>b</b>) The scheme of the experiment. A single trial of the experiment was divided into three periods. In the first period, the subject relaxed for 3 s; and then the visual cues were indicated for 2 s for preparation. Finally, subjects performed the motor-imagery tasks (right hand or foot) for 5 s.</p>
Figure 4
<p>The topographical map and the filter coefficients of the most significant spatial filter learned by the CSP method for each sub-band for subject av. The electrode indexes 1, 2, …, 17 correspond to the electrodes FC3, FC1, FCz, FC2, FC4, C5, C3, C1, Cz, C2, C4, C6, CP3, CP1, CPz, CP2, CP4, respectively. Electrodes inside the red outline represent electrode indexes 1, 2, …, 17.</p>
Figure 5
<p>The power spectral density (PSD) of the sub-bands extracted by CiSSA, FIR, IIR, WDec, and ICA + FIR for subject av at electrode C3. The PSDs of sub-bands extracted by FIR and IIR are higher than those extracted by CiSSA and ICA + FIR. The PSDs of sub-bands extracted by WDec contain components falling outside the frequency width (e.g., 6–10 Hz for sub-band 1).</p>
Figure 6
<p>Performance of time segmentation for subject aa. (<b>a</b>) Pictorial representation of the classification accuracy (ACC) on the feature space learned by the proposed method for subject aa. Each time-frequency segment contains 4 CSP features. (<b>b</b>) The topographical maps of the most significant spatial filter learned by the CSP from all time windows in sub-band 14–18 Hz (marked by red outline in <a href="#sensors-22-08526-f006" class="html-fig">Figure 6</a>a). Electrodes inside red outline in <a href="#sensors-22-08526-f006" class="html-fig">Figure 6</a>b represent the electrodes of the sensorimotor area.</p>
Figure 7
<p>Distribution of MIBIF values in all time-frequency segments for subject aa. Indexes 1, 2, …, 24 in the frequency bands represent the CSP feature indexes.</p>
Figure 8
<p>Distributions of the two most significant features obtained by CSP, CiSSA + CSP, Subtime + CSP, and Subtime + CiSSA + CSP for subject aa.</p>
Figure 9
<p>Classification accuracy over the number of selected features by MIBIF and PCA for subject av.</p>
Figure 10
<p>(<b>a</b>) The ROC curve of the 57 features selected by MIBIF and 5 features selected by PCA for subject aa. (<b>b</b>) The distribution of the first two features obtained by PCA for subject aa. Note that the right-hand (blue, circle) and right-foot (red, cross) imagery classes are nearly linearly separable with only 2 features.</p>
Figure 11
<p>The distribution of mutual information between the top 25 features selected by (<b>a</b>) MIBIF and (<b>b</b>) PCA for subject av.</p>
Figure 12
<p>Computational time taken by different methods on Competition III dataset IVa with 10-fold cross-validation. (<b>a</b>) Computational time taken by CSP, CiSSA + CSP, Subtime + CiSSA + CSP, Subtime + CiSSA + CSP + MIBIF and Subtime + CiSSA + CSP + PCA. (<b>b</b>) Computational time taken by FIR + CSP, IIR + CSP, WDec + CSP, ICA + CSP, ICA + FIR + CSP and CiSSA + CSP.</p>
12 pages, 2697 KiB  
Article
Spatio-Temporal Neural Dynamics of Observing Non-Tool Manipulable Objects and Interactions
by Zhaoxuan Li and Keiji Iramina
Sensors 2022, 22(20), 7771; https://doi.org/10.3390/s22207771 - 13 Oct 2022
Cited by 1 | Viewed by 1722
Abstract
Previous studies have reported that a series of sensory–motor-related cortical areas are affected when a healthy human is presented with images of tools. This phenomenon has been explained as familiar tools launching a memory-retrieval process that provides a basis for using them. Consequently, we postulated that this theory may also apply when images of tools are replaced with images of daily objects, provided they are graspable (i.e., manipulable). Therefore, we designed and ran experiments in which human volunteers (participants) were visually presented with images of three different daily objects while their electroencephalography (EEG) was recorded synchronously. Additionally, images of these objects being grasped by human hands were presented to the participants. Dynamic functional connectivity between the visual cortex and all other areas of the brain was estimated to find which of them were influenced by the visual stimuli. Next, we compared our results with those of previous studies that investigated the brain response to looking at tools and concluded that manipulable objects elicited cerebral activity similar to that of tools. We also examined the mu rhythm and found that looking at a manipulable object did not elicit activity similar to seeing the same object being grasped. Full article
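This study quantifies dynamic functional connectivity with phase-locking values (PLVs; see Figures 3 and 4 below). A minimal PLV sketch using the Hilbert transform — in practice the signals would first be band-pass filtered into the band of interest, which is omitted here:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two equal-length signals (range 0..1)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy check: two noisy signals sharing a 10 Hz component lock strongly.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
a = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * rng.standard_normal(t.size)
print(round(plv(a, b), 2))  # close to 1; unrelated signals give values near 0
```

A PLV near 1 means a stable phase difference between the two channels over time, which is what the dynamic-connectivity maps in Figure 2 visualize electrode by electrode.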
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
Show Figures

Figure 1
<p>(<b>a</b>) Four kinds of images used in our experiment. Condition A presented participants with images of an orange, bottle, and smart phone (three objects). Condition B presented images of hands. Condition C combined the three objects and hands within the images. Condition D showed whole actions of hands grabbing objects (interactions). (<b>b</b>) Workflow of the trial. The images after the cross were randomly chosen from images corresponding to the current session (e.g., orange session, bottle session, and phone session).</p>
Figure 2">
Figure 2
<p>Functional connectivity between the visual cortex and other regions. A colored electrode indicates that connectivity exists between that region and the occipital lobe. Time is indicated at the bottom right corner of each topography as “time (ms) of most connectivity when seeing objects/time (ms) of most connectivity when seeing objects being grasped”. Note that each topography is an overlay of two graphs at two different moments. Red and blue electrodes represent connections that occurred only when seeing objects and only when seeing objects being grasped, respectively, while green electrodes are shared by both conditions.</p>
Figure 3">
Figure 3
<p>PLVs over time. The red line shows phase locking values (PLVs) when participants were shown objects, while the blue line shows PLVs when they were shown objects being grasped by human hands. Shaded areas are standard error. In the regions shown, PLVs from the two conditions varied similarly in the theta and beta bands.</p>
Figure 4">
Figure 4
<p>PLV observed at BA and LAG. The red line shows PLVs when participants were shown objects, while the blue line shows PLVs when they were shown objects being grasped by human hands. Shaded areas are standard error. A significant difference was observed between seeing objects and seeing interactions at 200 ms after stimulus presentation (α = 0.05).</p>
Figure 5">
Figure 5
<p>(<b>a</b>) Topography of ERSP at 400 ms. Mu rhythm ERD was distributed over the bilateral posterior central gyrus with a slight left advantage and behaved similarly in all six situations. (<b>b</b>) ERSP over time. The red line shows ERSP when participants were shown objects, while the blue line shows ERSP when they were shown objects being grasped by human hands. Shaded areas are standard error. A clear ERS was observed only when seeing interactions, and its peak time is indicated with an arrow. The significance of the ERS was confirmed by a permutation test on the ERSP values of the two conditions at the corresponding time (α = 0.05).</p>
Figure 6">
Figure 6
<p>(<b>a</b>) Topography of 8–13 Hz ERSP when seeing human right hand and seeing interactions using the right hand at 152, 180, and 158 ms. ERS at LS is weaker when only images of a hand are presented to participants. (<b>b</b>) Plot shows a grand averaged ERP difference between electrodes PO7 and PO8. A remarkable second peak (black line) appeared when participants were presented with images in condition C. The bar graph on the right shows mean and standard error of the difference data in the range from 246 to 300 ms.</p>
">
17 pages, 6019 KiB  
Article
EEG/fNIRS Based Workload Classification Using Functional Brain Connectivity and Machine Learning
by Jun Cao, Enara Martin Garro and Yifan Zhao
Sensors 2022, 22(19), 7623; https://doi.org/10.3390/s22197623 - 8 Oct 2022
Cited by 32 | Viewed by 5486
Abstract
There is high demand for techniques to estimate human mental workload during certain activities, for productivity enhancement or accident prevention. Most studies focus on a single physiological sensing modality and use univariate methods to analyse multi-channel electroencephalography (EEG) data. This paper proposes a new framework that relies on the features of hybrid EEG–functional near-infrared spectroscopy (EEG–fNIRS), supported by machine learning, to deal with multi-level mental workload classification. Furthermore, instead of the widely used univariate power spectral density (PSD) for EEG recordings, we propose using bivariate functional brain connectivity (FBC) features in the time and frequency domains of three bands: delta (0.5–4 Hz), theta (4–7 Hz) and alpha (8–15 Hz). With the assistance of the fNIRS oxyhemoglobin and deoxyhemoglobin (HbO and HbR) indicators, the FBC technique significantly improved classification performance, reaching 77% accuracy for 0-back vs. 2-back and 83% for 0-back vs. 3-back on a public dataset. Moreover, topographic and heat-map visualisation indicated that the distinguishing regions for EEG and fNIRS differed among the 0-back, 2-back and 3-back tests, and that the region best discriminating mental workload differs between EEG and fNIRS: the posterior midline occipital site (POz) performed best for EEG in the alpha band, whereas fNIRS had superiority in the right frontal region (AF8). Full article
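Of the four FBC estimators this paper compares (MI, PCC, MSC and PLV; see Figure 4 below), the Pearson correlation coefficient (PCC) is the simplest to sketch. A minimal version on band-filtered channels — the filter order and the synthetic data are assumptions, not taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def fbc_pcc(eeg, fs, band):
    """Bivariate FBC via Pearson correlation of band-filtered channels.
    eeg: (n_channels, n_samples); returns a symmetric (n_ch, n_ch) matrix."""
    nyq = fs / 2
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)   # zero-phase band-pass
    return np.corrcoef(filtered)

rng = np.random.default_rng(1)
fs = 200
x = rng.standard_normal((4, fs * 5))   # 4 channels, 5 s of synthetic "EEG"
x[1] = 0.8 * x[0] + 0.2 * x[1]         # couple channels 0 and 1
C = fbc_pcc(x, fs, (8, 15))            # alpha band as defined in the abstract
print(C.shape)  # (4, 4)
```

Each off-diagonal entry is one bivariate feature; flattening the upper triangle of such matrices per band yields the FBC feature vector fed to the classifiers.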
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
Show Figures

Figure 1
<p>Flowchart of the proposed framework. The pipeline contains four main steps: pre-processing, feature extraction, feature selection and machine-learning classification.</p>
Figure 2">
Figure 2
<p>Layout of a set in the experiment. A single task consisted of a 2 s instruction indicating the type of task (0-, 2-, or 3-back), a 40 s task period that consisted of 20 trials, a 1 s stop period, and a 20 s rest period. Each participant completed nine sets of n-back tasks.</p>
Figure 3">
Figure 3
<p>Time window analysis. (<b>A</b>) Time interval analysis for fNIRS features; (<b>B</b>–<b>D</b>) time window-size evaluation for EEG and fNIRS features for (<b>B</b>) 0-back vs. 2-back, (<b>C</b>) 0-back vs. 3-back and (<b>D</b>) 2-back vs. 3-back.</p>
Figure 4">
Figure 4
<p>Comparison of four FBC estimations (MI, PCC, MSC and PLV) in terms of the average of the top 10 classification accuracies, along with the maximum and minimum values.</p>
Figure 5">
Figure 5
<p>The receiver operating characteristic (ROC) curves for three binary classification tasks: 0-back vs. 2-back, 0-back vs. 3-back, and 2-back vs. 3-back.</p>
Figure 6">
Figure 6
<p>Topographic map of the EEG alpha-band PSD. Left: average; Middle: each participant; Right: Accuracy using each-channel PSD as the input. The area that provides the highest accuracy is highlighted.</p>
Figure 7">
Figure 7
<p>Topographic map of the fNIRS HbR features. Left: average; Middle: each participant; Right: Accuracy using each-channel HbR feature as the input. The area that provided the highest accuracy is highlighted.</p>
Figure 8">
Figure 8
<p>The heat map of MI FBC features and the accuracy results. (<b>A</b>–<b>C</b>) show the MI value for each participant in the 0-back, 2-back, and 3-back tasks. (<b>D</b>–<b>F</b>) represent the classification accuracy for 0-back vs. 2-back, 0-back vs. 3-back and 2-back vs. 3-back, respectively, using each pair of EEG channels as the input, where the FBC value was estimated by MI.</p>
">
13 pages, 4085 KiB  
Article
A Classification Model of EEG Signals Based on RNN-LSTM for Diagnosing Focal and Generalized Epilepsy
by Tahereh Najafi, Rosmina Jaafar, Rabani Remli and Wan Asyraf Wan Zaidi
Sensors 2022, 22(19), 7269; https://doi.org/10.3390/s22197269 - 25 Sep 2022
Cited by 27 | Viewed by 3908
Abstract
Epilepsy is a chronic neurological disorder caused by abnormal neuronal activity that is diagnosed visually by analyzing electroencephalography (EEG) signals. Background: Surgical operations are the only option for epilepsy treatment when patients are refractory to medication, which highlights the importance of classifying focal and generalized epilepsy syndromes. Therefore, it is important to develop a model for automatically diagnosing focal and generalized epilepsy. Methods: A classification model based on the longitudinal bipolar (LB) montage, discrete wavelet transform (DWT), feature extraction techniques, and statistical analysis for feature selection, feeding an RNN combined with long short-term memory (LSTM), is proposed in this work for identifying epilepsy. Initially, normal and epileptic LB channels were decomposed into three levels, and 15 different features were extracted. The selected features were extracted from each segment of the signals and fed into the LSTM for classification. Results: The proposed algorithm achieved 96.1% accuracy, 96.8% sensitivity, and 97.4% specificity in distinguishing normal subjects from subjects with epilepsy. This optimal model was used to analyze the channels of subjects with focal and generalized epilepsy for diagnostic purposes, relying on statistical parameters. Conclusions: The proposed approach is promising, as it can detect epilepsy with satisfactory classification performance and diagnose focal and generalized epilepsy. Full article
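As a rough illustration of the pipeline's first stage — three-level wavelet decomposition followed by statistical features per sub-band — here is a sketch using the Haar wavelet (the abstract does not name the wavelet or list the 15 features, so both the wavelet and the three features below are stand-ins):

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """Three-level Haar DWT: returns sub-bands [cD1, cD2, cD3, cA3]."""
    coeffs, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        if approx.size % 2:                       # pad odd-length input
            approx = np.append(approx, approx[-1])
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # detail coefficients
        approx = (even + odd) / np.sqrt(2)        # approximation
    coeffs.append(approx)
    return coeffs

def segment_features(sub_band):
    """A few statistical features commonly extracted per sub-band."""
    return [np.mean(sub_band), np.std(sub_band),
            np.sqrt(np.mean(sub_band ** 2))]      # mean, std, RMS

x = np.sin(np.linspace(0, 8 * np.pi, 512))        # stand-in for one EEG segment
bands = haar_dwt(x)
features = [f for b in bands for f in segment_features(b)]
print(len(features))  # 4 sub-bands x 3 features = 12
```

Computing such a feature vector per segment and per LB channel yields the sequence that is then fed to the LSTM classifier.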
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
Show Figures

Figure 1
<p>A flowchart of the study.</p>
Figure 2">
Figure 2
<p>Longitudinal bipolar montage calculation separated in the left and right posterior and anterior areas.</p>
Figure 3">
Figure 3
<p>Samples of raw signals (top) and de-noised signals (bottom) of normal (<b>a</b>) and epileptic (<b>b</b>) signals recorded from T4−T6. The Y-axis shows the potential difference (µV).</p>
Figure 4">
Figure 4
<p>Generalized epilepsy (<b>left</b>) and TLE (<b>right</b>) samples based on the LB montage.</p>
Figure 5">
Figure 5
<p>A sample of the power spectral density for one normal (<b>a</b>) and one epileptic (<b>b</b>) channel.</p>
Figure 6">
Figure 6
<p>Correlation coefficients among features (<b>a</b>); <span class="html-italic">p</span>-values for each feature by group (<b>b</b>).</p>
Figure 7">
Figure 7
<p>The results of the classification model for the focal (blue) and generalized (grey) groups in classifying each channel as affected. The X-axis represents LB channels categorized into left and right posterior and anterior areas. The Y-axis represents the percentage of affected channels relative to the population of each group.</p>
">
18 pages, 2327 KiB  
Article
Epileptic Disorder Detection of Seizures Using EEG Signals
by Mariam K. Alharthi, Kawthar M. Moria, Daniyal M. Alghazzawi and Haythum O. Tayeb
Sensors 2022, 22(17), 6592; https://doi.org/10.3390/s22176592 - 31 Aug 2022
Cited by 27 | Viewed by 4503
Abstract
Epilepsy is a nervous system disorder. Electroencephalography (EEG) is a widely utilized clinical approach for recording electrical activity in the brain. Although a number of datasets are available, most of them are imbalanced due to the presence of fewer epileptic EEG signals compared with non-epileptic EEG signals. This research studies the possibility of integrating local EEG signals from an epilepsy center at King Abdulaziz University Hospital into the CHB-MIT dataset by applying a new compatibility framework for data integration. The framework comprises multiple functions, including dominant channel selection followed by a novel algorithm for reading XLtek EEG data. The resulting integrated datasets, which contain the selected channels, are tested and evaluated using a deep-learning model of 1D-CNN, Bi-LSTM, and attention. The results achieved up to 96.87% accuracy, 96.98% precision, and 96.85% sensitivity, outperforming other recent systems that use a larger number of EEG channels. Full article
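The compatibility framework's channel-selection step implies aligning the local recordings to the channels they share with CHB-MIT. A schematic sketch of that alignment — the function name and the montage lists are hypothetical, chosen only to illustrate the idea:

```python
import numpy as np

def align_channels(data, channels, target_channels):
    """Keep only the channels both datasets share, in a fixed order,
    so recordings from different centres become shape-compatible."""
    index = {name: i for i, name in enumerate(channels)}
    common = [c for c in target_channels if c in index]
    rows = [index[c] for c in common]
    return data[rows], common

# Hypothetical bipolar montages: a CHB-MIT-style list vs a local subset.
chb_channels = ["FP1-F7", "F7-T7", "T7-P7", "P7-O1", "FP2-F8"]
local = np.ones((4, 1000))                       # 4 channels, 1000 samples
local_channels = ["F7-T7", "P7-O1", "FZ-CZ", "FP2-F8"]
aligned, kept = align_channels(local, local_channels, chb_channels)
print(kept)           # ['F7-T7', 'P7-O1', 'FP2-F8']
print(aligned.shape)  # (3, 1000)
```

With every recording reduced to the same ordered channel subset, the integrated dataset can be fed to a single 1D-CNN/Bi-LSTM model regardless of the recording site.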
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
Show Figures

Figure 1
<p>The proposed compatibility framework architecture.</p>
Figure 2">
Figure 2
<p>Schematic presentation of EEG electrode positions for: (<b>a</b>) CHB-MIT electrode positions, with the adopted electrodes highlighted in blue; (<b>b</b>) KAU electrode positions.</p>
Figure 3">
Figure 3
<p>Proposed wavelet decomposition tree (db4).</p>
Figure 4">
Figure 4
<p>Approximation and detailed coefficients of the EEG signals.</p>
Figure 5">
Figure 5
<p>The deep-learning model architecture.</p>
Figure 6">
Figure 6
<p>Average values of experiments before and after data integration for performance metrics.</p>
Figure 7">
Figure 7
<p>Performance-metric charts of testing across epochs.</p>
">