Online Learning for Wearable EEG-Based Emotion Classification
Figure 1: Electrode positions according to the international 10–20 system for the EEG devices used in Dataset I (a) and in Datasets II and III (b,c). Sensor locations are marked in blue; references in orange.
Figure 2: The two consumer-grade EEG devices with integrated electrodes used in the experiments.
Figure 3: Screenshots from the PsychoPy [48] setup of the self-assessment questions. (a) Partial PANAS questionnaire with five levels represented by clickable radio buttons (in red) and the levels' explanation on top; (b) the AS slider for valence displayed on top and the slider for arousal on the bottom.
Figure 4: Experimental setup for curating Dataset II. Participants first watched a relaxation video and then eight videos, two from each dimension category, while wearing one of the two devices. Between the eight videos, they answered the AS sliders and a question on their familiarity with the video.
Figure 5: Experimental setup for curating Dataset III. In the first session, participants first watched a relaxation video and then eight videos, two from each dimension category, while wearing one of the two devices. Between the eight videos, they answered the AS sliders and the familiarity question, and the actual AS label became available. In the second session, they watched the same set of videos while the prediction was available to the experimenter before the delayed label arrived.
Figure 6: Overview of the pipeline steps for affect classification. The top gray rectangle shows the pipeline steps employed in an immediate label setting with prerecorded data. For each extracted feature vector, the model (1) first classifies its label before (2) being updated with the true label for that sample. In the live setting, the model is not updated after every prediction, as the true label of a video only becomes available after the stimulus has ended. The timestamp of the video is matched to the samples' timestamps to find all samples that fall into the corresponding time frame and update the model with their true labels (shown in dotted lines).
Figure 7: The incoming data stream is processed in tumbling windows (gray rectangles). One window includes all samples x_i, x_{i+1}, … arriving during a specified time period, e.g., 1 s. The pipeline extracts one feature vector, F_i, per window. Windows during a stimulus (video) are marked in dark gray. Participants rated each video with one label per affect dimension, Y_j. All feature vectors extracted from windows that fall into the time frame of a video (between t_start and t_end of that video) receive a label y_i corresponding to the reported label, Y_j, of that video. If possible, the windows are aligned with the end of the stimulus; otherwise, all windows that lie completely inside a video's time range are considered.
Figure 8: (a) Progressive validation incorporated into the basic flow of the training process ("test-then-train") of an online classifier in an immediate label setting. (x_i, y_i) represents an input feature vector and its corresponding label. (b) Evaluation incorporated into the basic flow of the training process of an online classifier when labels arrive delayed (i ≥ j).
Figure 9: F1-Score for valence and arousal classification achieved by ARF and SRP per subject from Dataset I.
Figure 10: Mean F1-Score achieved by ARF, SRP, and LR over the whole dataset for both affect dimensions with respect to window length.
Figure 11: Confusion matrices for the live affect classification (Dataset III, part 2). Employed model: ARF (four trees), window length = 1 s. Recall was calculated only for the low class for both models.
Abstract
1. Introduction
1.1. Problem Statement
1.2. Key Contributions
1.3. Related Work
2. Materials and Methods
2.1. Dataset I: AMIGOS Dataset
2.2. Participants
2.3. Data Acquisition
- Hardware: During the experiments, two consumer-grade devices, the Muse S Headband Gen 1 (https://choosemuse.com/compare/ (accessed on 20 February 2023)) and the Neurosity Crown (https://neurosity.co/crown (accessed on 20 February 2023)), were used to collect EEG data from the participants, as depicted in Figure 2. Both devices operated at a sampling rate of 256 Hz, and EEG data were collected with four and eight channels, respectively.
- Software: The experiment was implemented in PsychoPy (v2021.2.3) [48], which guided the participants through instructions, questionnaires, and stimuli. The participants could proceed at their own pace by clicking the “Next” (“Weiter” in German) button, as shown in the PsychoPy screenshots in Figure 3; a minimal sketch of this self-paced flow is given after this list.
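The following is a hypothetical, minimal PsychoPy sketch of such a self-paced screen with a clickable “Weiter” button. It is not the authors' experiment script; the window settings, prompt text, and layout are illustrative assumptions.

```python
# Hypothetical sketch of a self-paced PsychoPy screen with a "Weiter" (Next) button.
from psychopy import visual, event, core

win = visual.Window(fullscr=False, color="white", units="height")
mouse = event.Mouse(win=win)

instruction = visual.TextStim(win, text="Bitte klicken Sie auf 'Weiter'.", color="black", pos=(0, 0.1))
next_button = visual.Rect(win, width=0.3, height=0.1, pos=(0, -0.35), fillColor="lightgrey")
next_label = visual.TextStim(win, text="Weiter", color="black", pos=(0, -0.35))

# Redraw the screen until the participant clicks the button, so the pace is self-determined.
while not mouse.isPressedIn(next_button):
    instruction.draw()
    next_button.draw()
    next_label.draw()
    win.flip()

win.close()
core.quit()
```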
2.4. Stimuli Selection
2.5. Behavioral Data
- PANAS: During the experiments, participants were asked to assess their baseline levels of affect on the PANAS scale. As depicted in Figure 3a, a total of 20 items (10 each from the Positive Affect (PA) and Negative Affect (NA) dimensions) were answered on a 5-point Likert scale with options ranging from “very slightly or not at all” (1) to “extremely” (5). To assess whether the participants’ mood generally changed over the course of the experiment, they answered the PANAS once at the beginning and once again at the end. For the German version of the PANAS questionnaire, the translation by Breyer and Bluemke [58] was used.
- Affect Self-Assessment: The Affective Slider (AS) [59] was used to capture participants’ emotional self-assessment after each stimulus, as depicted in the screenshot in Figure 3b (https://github.com/albertobeta/AffectiveSlider (accessed on 20 February 2023)). The AS is a digital self-reporting tool composed of two slider controls for the quick assessment of pleasure and arousal. The two sliders show emoticons at their ends to represent the extreme points of their respective scales, i.e., unhappy/happy for pleasure (valence) and sleepy/wide awake for arousal. For the experiments, the AS was presented on a continuous normalized scale with a step size of 0.01 (i.e., a resolution of 100), and the order of the two sliders was randomized each time (a sketch of how such ratings can be mapped to binary class labels follows after this list).
- Familiarity: The participants were asked to indicate their familiarity with each video on a discrete 5-point scale ranging from “Have never seen this video before” (1) to “Know the video very well” (5). The PsychoPy slide with this question was always shown after the AS.
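Since the classification task reported later distinguishes low from high valence and arousal, the continuous AS ratings have to be mapped to binary labels. The following is an illustrative sketch of such a mapping; the 0.5 threshold and the function name are assumptions, not values taken from the paper.

```python
# Illustrative only: map a continuous Affective Slider rating in [0, 1] to a binary
# low/high label. The 0.5 threshold is an assumption, not stated in the paper.
def binarize_rating(rating: float, threshold: float = 0.5) -> str:
    """Return 'high' if the AS rating is at or above the threshold, otherwise 'low'."""
    return "high" if rating >= threshold else "low"

# Example: a video rated valence = 0.73 and arousal = 0.31 on the AS
print(binarize_rating(0.73), binarize_rating(0.31))  # -> high low
```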
2.6. Dataset II
- Briefing Session: In the beginning, each participant went through a pre-experimental briefing in which the experimenter explained the study procedure and informed the participant of the experiment’s duration, i.e., two parts of approximately 20 min each with a short intermediate break. The participant then received and read the data information sheet, filled out the personal information sheet, and signed the consent to participate. Personal information included age, nationality, biological sex, handedness (left- or right-handed), education level, and neurological or mental health-related problems. The documents and the study platform (i.e., PsychoPy) were provided in the participant’s choice of study language, English or German. Afterward, the experimenter explained the three scales described in Section 2.5 and allowed the participant to become accustomed to the PsychoPy platform. This ensured an understanding of the different terms and scales used in the experiment without having to interrupt the experiment later. The participant could withdraw from the experiment at any moment.
- Data Collection: After the briefing, the experimenter placed either the Muse headband or the Crown on the participant, chosen at random. With headphones placed over the device, the participant was asked to refrain from strong movements, especially of the head. The experimenter then checked the incoming EEG data and let the participant begin the experiment. After a welcome screen, a relaxation video (https://www.youtube.com/watch?v=S6jCd2hSVKA (accessed on 20 February 2023)) was shown to the participant for 3 min. They answered the PANAS questionnaire to rate their current mood and closed their eyes for half a minute to obtain a baseline measure of the EEG data. Afterwards, they were asked to rate their initial valence and arousal state with the AS. Following this, an instruction about watching eight short videos was provided. Each of these videos was preceded by a video counter and followed by two questionnaires: the AS and the familiarity question. The order of the videos and the order of the two AS sliders were randomized over both parts of the experiment, under the condition that the labels of the videos remained balanced. The first part of the experiment ended after watching the eight videos and answering the corresponding questionnaires. The participant was allowed a short break after taking off the EEG device and the headphones.
2.7. Dataset III: Live Training and Classification
3. Emotion Classification Pipeline
3.1. Data Pre-Processing
3.2. Data Windowing and Shuffling
3.3. Feature Extraction
3.4. Labeling
3.5. Evaluation
- Online Learning and Progressive Validation: This paper aims at building a classification pipeline for evolving data streams. Therefore, the static data from Dataset I and Dataset II were streamed using a library for online learning: river [66]. Progressive validation, also called test-then-train evaluation [67], was used for model evaluation in the supervised immediate label setting, where the labels for all samples were present at processing time [68]. Figure 8a shows the training process of an online classifier including progressive validation. Every time the model sees a new sample, x_i, it first classifies this sample as the test step of the test-then-train procedure. In the training step, the model calculates the loss by comparing its prediction with the true label, y_i, which might come from a different data source than the samples. The updated model then classifies the next incoming sample, x_{i+1}, before seeing its label, y_{i+1}, and again executes the training and performance-metric-updating step. This continues as long as data are streamed to the model. In this way, all samples can be used for training as well as for validation without corrupting the performance evaluation. A minimal sketch of this loop is given below.
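The following sketch illustrates the test-then-train loop with river under simple assumptions: the stream yields one feature dictionary and its immediate label per window, and a scaled logistic regression with an F1 metric stands in for the classifiers and metrics used in the experiments.

```python
# Minimal test-then-train (progressive validation) sketch with river.
# Assumption: `stream` yields (features, label) pairs, one per extracted window.
from river import linear_model, metrics, preprocessing

model = preprocessing.StandardScaler() | linear_model.LogisticRegression()
metric = metrics.F1()

def run_prequential(stream):
    for x_i, y_i in stream:
        y_pred = model.predict_one(x_i)  # (1) test: predict before seeing the label
        metric.update(y_i, y_pred)       # update the running performance metric
        model.learn_one(x_i, y_i)        # (2) train: update the model with the true label
    return metric
```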
3.6. Machine Learning Classifiers
- Evaluation Metrics: The participants’ self-reported assessment of their valence and arousal levels was used as the ground truth in all training and evaluation processes in this paper. Among the different metrics for reporting a classifier’s performance [20], the commonly reported Accuracy and F1-Score are given in this work. They are defined as follows [72]:
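In terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), the standard definitions are:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \mathrm{F1\text{-}Score} = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},$$

where $\mathrm{Precision} = TP / (TP + FP)$ and $\mathrm{Recall} = TP / (TP + FN)$.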
4. Results
4.1. Immediate Label Setting
4.2. Effects of Window Size
4.3. Delayed Label Setting: Live Classification
5. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| MDPI | Multidisciplinary Digital Publishing Institute |
| ARF | Adaptive Random Forest |
| AS | Affective Slider |
| EEG | Electroencephalography |
| HCI | Human–Computer Interaction |
| HVLA | High Valence Low Arousal (different combinations are possible) |
| LR | Logistic Regression |
| OSC | Open Sound Control |
| PANAS | Positive and Negative Affect Schedule |
| PSD | Power Spectral Density |
| SRP | Streaming Random Patches |
Appendix A
Algorithm A1: Live Emotion Classification from an EEG Stream.
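The algorithm listing itself is not reproduced here. As a hedged illustration of the delayed-label logic it describes (cf. Figure 6 and Section 3.5), the sketch below predicts on each incoming windowed feature vector immediately, buffers it with its timestamp, and updates the model once the video's self-reported label arrives; the function names and buffering structure are assumptions made for this example.

```python
# Hedged sketch of live classification with delayed labels: predict immediately,
# buffer windowed feature vectors, and train once the video's label arrives.
from collections import deque

buffer = deque()  # (timestamp, feature_dict) pairs awaiting their delayed label

def on_new_window(model, timestamp, features):
    """Called once per tumbling window: predict now, store the sample for later training."""
    prediction = model.predict_one(features)
    buffer.append((timestamp, features))
    return prediction

def on_video_label(model, t_start, t_end, label):
    """Called when a video's AS label arrives: train on all windows inside [t_start, t_end]."""
    remaining = deque()
    while buffer:
        timestamp, features = buffer.popleft()
        if t_start <= timestamp <= t_end:
            model.learn_one(features, label)  # delayed update with the true label
        else:
            remaining.append((timestamp, features))
    buffer.extend(remaining)
```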
References
- Picard, R.W. Affective Computing; The MIT Press: Cambridge, MA, USA, 2000. [Google Scholar] [CrossRef]
- Cowie, R.; Douglas-Cowie, E.; Tsapatsoulis, N.; Votsis, G.; Kollias, S.; Fellenz, W.; Taylor, J. Emotion recognition in human–computer interaction. IEEE Signal Process. Mag. 2001, 18, 32–80. [Google Scholar] [CrossRef]
- Haut, S.R.; Hall, C.B.; Borkowski, T.; Tennen, H.; Lipton, R.B. Clinical features of the pre-ictal state: Mood changes and premonitory symptoms. Epilepsy Behav. 2012, 23, 415–421. [Google Scholar] [CrossRef] [PubMed]
- Kocielnik, R.; Sidorova, N.; Maggi, F.M.; Ouwerkerk, M.; Westerink, J.H.D.M. Smart technologies for long-term stress monitoring at work. In Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems (CBMS), Porto, Portugal, 20–22 June 2013; pp. 53–58. [Google Scholar] [CrossRef]
- Schulze-Bonhage, A.; Kurth, C.; Carius, A.; Steinhoff, B.J.; Mayer, T. Seizure anticipation by patients with focal and generalized epilepsy: A multicentre assessment of premonitory symptoms. Epilepsy Res. 2006, 70, 83–88. [Google Scholar] [CrossRef]
- Privitera, M.; Haut, S.R.; Lipton, R.B.; McGinley, J.S.; Cornes, S. Seizure self-prediction in a randomized controlled trial of stress management. Neurology 2019, 93, e2021–e2031. [Google Scholar] [CrossRef] [PubMed]
- Kotwas, I.; McGonigal, A.; Trebuchon, A.; Bastien-Toniazzo, M.; Nagai, Y.; Bartolomei, F.; Micoulaud-Franchi, J.A. Self-control of epileptic seizures by nonpharmacological strategies. Epilepsy Behav. 2016, 55, 157–164. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Scaramelli, A.; Braga, P.; Avellanal, A.; Bogacz, A.; Camejo, C.; Rega, I.; Messano, T.; Arciere, B. Prodromal symptoms in epileptic patients: Clinical characterization of the pre-ictal phase. Seizure 2009, 18, 246–250. [Google Scholar] [CrossRef] [Green Version]
- Moontaha, S.; Steckhan, N.; Kappattanavar, A.; Surges, R.; Arnrich, B. Self-prediction of seizures in drug resistance epilepsy using digital phenotyping: A concept study. In Proceedings of the 14th EAI International Conference on Pervasive Computing Technologies for Healthcare, PervasiveHealth ’20, Atlanta, GA, USA, 18–20 May 2020; Association for Computing Machinery: New York, NY, USA, 2021; pp. 384–387. [Google Scholar] [CrossRef]
- Levenson, R.; Lwi, S.; Brown, C.; Ford, B.; Otero, M.; Verstaen, A. Emotion. In Handbook of Psychophysiology, 4th ed.; Cambridge University Press: Cambridge, UK, 2016; pp. 444–464. [Google Scholar] [CrossRef]
- Liu, H.; Zhang, Y.; Li, Y.; Kong, X. Review on Emotion Recognition Based on Electroencephalography. Front. Comput. Neurosci. 2021, 15, 758212. [Google Scholar] [CrossRef] [PubMed]
- Krigolson, O.E.; Williams, C.C.; Norton, A.; Hassall, C.D.; Colino, F.L. Choosing MUSE: Validation of a Low-Cost, Portable EEG System for ERP Research. Front. Neurosci. 2017, 11, 109. [Google Scholar] [CrossRef] [Green Version]
- Bird, J.J.; Manso, L.J.; Ribeiro, E.P.; Ekárt, A.; Faria, D.R. A Study on Mental State Classification using EEG-based Brain–Machine Interface. In Proceedings of the 2018 International Conference on Intelligent Systems (IS), Funchal, Portugal, 25–27 September 2018; pp. 795–800. [Google Scholar] [CrossRef]
- Teo, J.; Chia, J.T. Deep Neural Classifiers For Eeg-Based Emotion Recognition In Immersive Environments. In Proceedings of the 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE), Shah Alam, Malaysia, 11–12 July 2018; pp. 1–6. [Google Scholar] [CrossRef]
- Gonzalez, H.A.; Yoo, J.; Elfadel, I.M. EEG-based Emotion Detection Using Unsupervised Transfer Learning. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 694–697. [Google Scholar] [CrossRef]
- Hasnul, M.A.; Aziz, N.A.A.; Alelyani, S.; Mohana, M.; Aziz, A.A. Electrocardiogram-Based Emotion Recognition Systems and Their Applications in Healthcare—A Review. Sensors 2021, 21, 5015. [Google Scholar] [CrossRef]
- Huang, X.; Kortelainen, J.; Zhao, G.; Li, X.; Moilanen, A.; Seppänen, T.; Pietikäinen, M. Multi-modal emotion analysis from facial expressions and electroencephalogram. Comput. Vis. Image Underst. 2016, 147, 114–124. [Google Scholar] [CrossRef]
- Li, J.; Qiu, S.; Shen, Y.Y.; Liu, C.L.; He, H. Multisource Transfer Learning for Cross-Subject EEG Emotion Recognition. IEEE Trans. Cybern. 2019, 50, 3281–3293. [Google Scholar] [CrossRef] [PubMed]
- Hasan, T.F.; Tatum, W.O. Ambulatory EEG Usefulness in Epilepsy Management. J. Clin. Neurophysiol. 2021, 38, 101–111. [Google Scholar] [CrossRef] [PubMed]
- Bota, P.J.; Wang, C.; Fred, A.L.N.; Plácido Da Silva, H. A Review, Current Challenges, and Future Possibilities on Emotion Recognition Using Machine Learning and Physiological Signals. IEEE Access 2019, 7, 140990–141020. [Google Scholar] [CrossRef]
- Horvat, M.; Dobrinic, M.; Novosel, M.; Jercic, P. Assessing emotional responses induced in virtual reality using a consumer EEG headset: A preliminary report. In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 21–25 May 2018; pp. 1006–1010. [Google Scholar] [CrossRef]
- Miranda-Correa, J.A.; Abadi, M.K.; Sebe, N.; Patras, I. AMIGOS: A Dataset for Affect, Personality and Mood Research on Individuals and Groups. IEEE Trans. Affect. Comput. 2021, 12, 479–493. [Google Scholar] [CrossRef] [Green Version]
- Laureanti, R.; Bilucaglia, M.; Zito, M.; Circi, R.; Fici, A.; Rivetti, F.; Valesi, R.; Oldrini, C.; Mainardi, L.T.; Russo, V. Emotion assessment using Machine Learning and low-cost wearable devices. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 576–579. [Google Scholar] [CrossRef]
- Dadebayev, D.; Goh, W.W.; Tan, E.X. EEG-based emotion recognition: Review of commercial EEG devices and machine learning techniques. J. King Saud-Univ.-Comput. Inf. Sci. 2022, 34, 4385–4401. [Google Scholar] [CrossRef]
- Suhaimi, N.S.; Mountstephens, J.; Teo, J. EEG-Based Emotion Recognition: A State-of-the-Art Review of Current Trends and Opportunities. Comput. Intell. Neurosci. 2020, 2020, 8875426. [Google Scholar] [CrossRef]
- Gomes, H.M.; Bifet, A.; Read, J.; Barddal, J.P.; Enembreck, F.; Pfahringer, B.; Holmes, G.; Abdessalem, T. Adaptive random forests for evolving data stream classification. Mach. Learn. 2017, 106, 1469–1495. [Google Scholar] [CrossRef] [Green Version]
- Müller, K.R.; Tangermann, M.; Dornhege, G.; Krauledat, M.; Curio, G.; Blankertz, B. Machine learning for real-time single-trial EEG-analysis: From brain–computer interfacing to mental state monitoring. J. Neurosci. Methods 2008, 167, 82–90. [Google Scholar] [CrossRef]
- Liu, Y.; Sourina, O.; Nguyen, M.K. Real-Time EEG-Based Human Emotion Recognition and Visualization. In Proceedings of the 2010 International Conference on Cyberworlds CW, Singapore, 20–22 October 2010; pp. 262–269. [Google Scholar] [CrossRef]
- Liu, Y.; Sourina, O. EEG-based subject-dependent emotion recognition algorithm using fractal dimension. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 3166–3171. [Google Scholar] [CrossRef]
- Lan, Z.; Sourina, O.; Wang, L.; Liu, Y. Real-time EEG-based emotion monitoring using stable features. Vis. Comput. 2016, 32, 347–358. [Google Scholar] [CrossRef]
- Lan, Z. EEG-Based Emotion Recognition Using Machine Learning Techniques. Ph.D. Thesis, Nanyang Technological University, Singapore, 2018. [Google Scholar] [CrossRef]
- Hou, X.; Liu, Y.; Sourina, O.; Mueller-Wittig, W. CogniMeter: EEG-based Emotion, Mental Workload and Stress Visual Monitoring. In Proceedings of the 2015 International Conference on Cyberworlds (CW), Visby, Sweden, 7–9 October 2015; pp. 153–160. [Google Scholar] [CrossRef]
- Lan, Z.; Liu, Y.; Sourina, O.; Wang, L.; Scherer, R.; Müller-Putz, G. SAFE: An EEG dataset for stable affective feature selection. Adv. Eng. Inform. 2020, 44, 101047. [Google Scholar] [CrossRef]
- Javaid, M.M.; Yousaf, M.A.; Sheikh, Q.Z.; Awais, M.M.; Saleem, S.; Khalid, M. Real-Time EEG-Based Human Emotion Recognition. In Neural Information Processing; Arik, S., Huang, T., Lai, W.K., Liu, Q., Eds.; Springer International Publishing: Cham, Switzerland, 2015; Volume 9492, pp. 182–190. [Google Scholar] [CrossRef]
- Sarno, R.; Munawar, M.N.; Nugraha, B.T. Real-Time Electroencephalography-Based Emotion Recognition System. Int. Rev. Comput. Softw. IRECOS 2016, 11, 456. [Google Scholar] [CrossRef]
- Bajada, J.; Bonello, F.B. Real-time EEG-based Emotion Recognition using Discrete Wavelet Transforms on Full and Reduced Channel Signals. arXiv 2021, arXiv:2110.05635. [Google Scholar] [CrossRef]
- Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis; Using Physiological Signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31. [Google Scholar] [CrossRef] [Green Version]
- Li, J.; Chen, H.; Cai, T. FOIT: Fast Online Instance Transfer for Improved EEG Emotion Recognition. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Republic of Korea, 16–19 December 2020; pp. 2618–2625. [Google Scholar] [CrossRef]
- Zheng, W.-L.; Lu, B.-L. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Trans. Auton. Mental Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
- Zheng, W.L.; Liu, W.; Lu, Y.; Lu, B.L.; Cichocki, A. EmotionMeter: A Multimodal Framework for Recognizing Human Emotions. IEEE Trans. Cybern. 2019, 49, 1110–1122. [Google Scholar] [CrossRef]
- Nandi, A.; Xhafa, F.; Subirats, L.; Fort, S. Real-Time Emotion Classification Using EEG Data Stream in E-Learning Contexts. Sensors 2021, 21, 1589. [Google Scholar] [CrossRef]
- Bifet, A.; Gavaldà, R. Adaptive Learning from Evolving Data Streams. In Proceedings of the 8th International Symposium on Intelligent Data Analysis: Advances in Intelligent Data Analysis VIII, IDA ’09, Lyon, France, 31 August–2 September 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 249–260. [Google Scholar] [CrossRef]
- Katsigiannis, S.; Ramzan, N. DREAMER: A Database for Emotion Recognition Through EEG and ECG Signals from Wireless Low-cost Off-the-Shelf Devices. IEEE J. Biomed. Health Inform. JBHI 2018, 22, 98–107. [Google Scholar] [CrossRef] [Green Version]
- Subramanian, R.; Wache, J.; Abadi, M.K.; Vieriu, R.L.; Winkler, S.; Sebe, N. ASCERTAIN: Emotion and Personality Recognition Using Commercial Sensors. IEEE Trans. Affect. Comput. 2018, 9, 147–160. [Google Scholar] [CrossRef]
- Bradley, M.M.; Lang, P.J. Measuring emotion: The self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 1994, 25, 49–59. [Google Scholar] [CrossRef]
- Watson, D.; Clark, L.A.; Tellegen, A. Development and validation of brief measures of positive and negative affect: The PANAS scales. J. Personal. Soc. Psychol. 1988, 54, 1063–1070. [Google Scholar] [CrossRef]
- Towle, V.L.; Bolaños, J.; Suarez, D.; Tan, K.B.; Grzeszczuk, R.P.; Levin, D.N.; Cakmur, R.; Frank, S.A.; Spire, J.P. The spatial location of EEG electrodes: Locating the best-fitting sphere relative to cortical anatomy. Electroencephalogr. Clin. Neurophysiol. 1993, 86, 1–6. [Google Scholar] [CrossRef]
- Peirce, J.; Gray, J.; Simpson, S.; MacAskill, M.; Höchenberger, R.; Sogo, H.; Kastman, E.; Lindeløv, J. PsychoPy2: Experiments in behavior made easy. Behav. Res. Methods 2019, 51, 195–203. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Dan-Glauser, E.S.; Scherer, K.R. The Geneva affective picture database (GAPED): A new 730-picture database focusing on valence and normative significance. Behav. Res. Methods 2011, 43, 468–477. [Google Scholar] [CrossRef] [PubMed]
- Kurdi, B.; Lozano, S.; Banaji, M. Introducing the Open Affective Standardized Image Set (OASIS). Behav. Res. Methods 2017, 49, 457–470. [Google Scholar] [CrossRef] [PubMed]
- Lang, P.J.; Bradley, M.M.; Cuthbert, B.N. International Affective Picture System (IAPS): Affective Ratings of Pictures and Instruction Manual; Technical Report; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar] [CrossRef]
- Panda, R.; Malheiro, R.; Paiva, R.P. Novel Audio Features for Music Emotion Recognition. IEEE Trans. Affect. Comput. 2018, 11, 614–626. [Google Scholar] [CrossRef]
- Zhang, K.; Zhang, H.; Li, S.; Yang, C.; Sun, L. The PMEmo Dataset for Music Emotion Recognition. In Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, ICMR ’18, Yokohama, Japan, 11–14 June 2018; ACM: New York, NY, USA, 2018; pp. 135–142. [Google Scholar] [CrossRef]
- Abadi, M.K.; Subramanian, R.; Kia, S.M.; Avesani, P.; Patras, I.; Sebe, N. DECAF: MEG-Based Multimodal Database for Decoding Affective Physiological Responses. IEEE Trans. Affect. Comput. 2015, 6, 209–222. [Google Scholar] [CrossRef]
- Soleymani, M.; Lichtenauer, J.; Pun, T.; Pantic, M. A Multi-Modal Affective Database for Affect Recognition and Implicit Tagging. Affect. Comput. IEEE Trans. 2012, 3, 42–55. [Google Scholar] [CrossRef] [Green Version]
- Verma, G.; Dhekane, E.G.; Guha, T. Learning Affective Correspondence between Music and Image. In Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 3975–3979. [Google Scholar] [CrossRef]
- Russell, J.A. A circumplex model of affect. J. Personal. Soc. Psychol. 1980, 39, 1161. [Google Scholar] [CrossRef]
- Breyer, B.; Bluemke, M. Deutsche Version der Positive and Negative Affect Schedule PANAS (GESIS Panel). In Zusammenstellung Sozialwissenschaftlicher Items und Skalen; Social Science Open Access Repository (SSOAR): Mannheim, Germany, 2016. [Google Scholar] [CrossRef]
- Betella, A.; Verschure, P. The Affective Slider: A Digital Self-Assessment Scale for the Measurement of Human Emotions. PLoS ONE 2016, 11, e0148037. [Google Scholar] [CrossRef] [Green Version]
- Jiang, X.; Bian, G.B.; Tian, Z. Removal of Artifacts from EEG Signals: A Review. Sensors 2019, 19, 987. [Google Scholar] [CrossRef] [Green Version]
- Sörnmo, L.; Laguna, P. Chapter 3—EEG Signal Processing. In Bioelectrical Signal Processing in Cardiac and Neurological Applications; Biomedical Engineering; Academic Press: Burlington, NJ, USA, 2005; pp. 55–179. [Google Scholar] [CrossRef]
- Akwei-Sekyere, S. Powerline noise elimination in neural signals via blind source separation and wavelet analysis. PeerJ PrePrints 2014, 3. [Google Scholar] [CrossRef]
- Sweeney, K.; Ward, T.; Mcloone, S. Artifact Removal in Physiological Signals-Practices and Possibilities. IEEE Trans. Inf. Technol. Biomed. Publ. IEEE Eng. Med. Biol. Soc. 2012, 16, 488–500. [Google Scholar] [CrossRef] [PubMed]
- Yao, D.; Qin, Y.; Hu, S.; Dong, l.; Vega, M.; Sosa, P. Which Reference Should We Use for EEG and ERP practice? Brain Topogr. 2019, 32, 530–549. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Welch, P. The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms. IEEE Trans. Audio Electroacoust. 1967, 15, 70–73. [Google Scholar] [CrossRef] [Green Version]
- Montiel, J.; Halford, M.; Mastelini, S.M.; Bolmier, G.; Sourty, R.; Vaysse, R.; Zouitine, A.; Gomes, H.M.; Read, J.; Abdessalem, T.; et al. River: Machine learning for streaming data in Python. J. Mach. Learn. Res. 2021, 22, 4945–4952. [Google Scholar] [CrossRef]
- Grzenda, M.; Gomes, H.M.; Bifet, A. Delayed labelling evaluation for data streams. Data Min. Knowl. Discov. 2020, 34, 1237–1266. [Google Scholar] [CrossRef] [Green Version]
- Blum, A.; Kalai, A.T.; Langford, J. Beating the hold-out: Bounds for K-fold and progressive cross-validation. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory COLT ’99, Santa Cruz, CA, USA, 7–9 July 1999. [Google Scholar]
- McMahan, H.B.; Holt, G.; Sculley, D.; Young, M.; Ebner, D.; Grady, J.; Nie, L.; Phillips, T.; Davydov, E.; Golovin, D.; et al. Ad Click Prediction: A View from the Trenches. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), Chicago, IL, USA, 11–14 August 2013; pp. 1222–1230. [Google Scholar] [CrossRef] [Green Version]
- Gomes, H.M.; Read, J.; Bifet, A. Streaming Random Patches for Evolving Data Stream Classification. In Proceedings of the 2019 IEEE International Conference on Data Mining (ICDM), Beijing, China, 8–11 November 2019; pp. 240–249. [Google Scholar] [CrossRef]
- Parker, B.; Khan, L. Detecting and Tracking Concept Class Drift and Emergence in Non-Stationary Fast Data Streams. In Proceedings of the AAAI, Austin, TX, USA, 25–30 January 2015. [Google Scholar] [CrossRef]
- Aggarwal, C.C. Data Classification: Algorithms and Applications, 1st ed.; Chapman & Hall/CRC: Boca Raton, FL, USA, 2014; pp. 636–638. [Google Scholar]
- Siddharth, S.; Jung, T.P.; Sejnowski, T.J. Utilizing Deep Learning Towards Multi-Modal Bio-Sensing and Vision-Based Affective Computing. IEEE Trans. Affect. Comput. 2019, 13, 96–107. [Google Scholar] [CrossRef] [Green Version]
- Topic, A.; Russo, M. Emotion recognition based on EEG feature maps through deep learning network. Eng. Sci. Technol. Int. J. JESTECH 2021, 24, 1442–1454. [Google Scholar] [CrossRef]
- Liu, Y.J.; Yu, M.; Zhao, G.; Song, J.; Ge, Y.; Shi, Y. Real-Time Movie-Induced Discrete Emotion Recognition from EEG Signals. IEEE Trans. Affect. Comput. 2018, 9, 550–562. [Google Scholar] [CrossRef]
- Ekman, P.; Friesen, W. Unmasking the Face: A Guide to Recognizingemotions from Facial Clues; Prentice-Hall: Oxford, UK, 1975. [Google Scholar]
| Category | Source Movie |
|---|---|
| HAHV | Airplane (4), When Harry Met Sally (5), Hot Shots (9), Love Actually (80) |
| LAHV | August Rush (10), Love Actually (13), House of Flying Daggers (18), Mr Beans’ Holiday (58) |
| LALV | Gandhi (19), My Girl (20), My Bodyguard (23), The Thin Red Line (138) |
| HALV | Silent Hill (30), Prestige (31), Pink Flamingos (34), Black Swan (36) |
| Device | Number of Channels | Number of Derived Features |
|---|---|---|
| Muse Headband | 4 | 64 |
| Neurosity Crown | 8 | 128 |
| Emotiv EPOC | 14 | 224 |
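The feature counts in the table above correspond to 16 derived features per channel. As an illustration only (the paper’s exact feature set is not reproduced here), the following sketch computes per-channel band powers from the Welch power spectral density [65]; the band boundaries, parameters, and function names are assumptions.

```python
# Illustrative per-channel band-power features from Welch's PSD estimate;
# the paper derives 16 features per channel, this sketch only shows five band powers.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(window: np.ndarray, fs: int = 256) -> dict:
    """window: array of shape (n_channels, n_samples) covering one tumbling window."""
    features = {}
    for ch, signal in enumerate(window):
        freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), fs))
        for name, (lo, hi) in BANDS.items():
            mask = (freqs >= lo) & (freqs < hi)
            features[f"ch{ch}_{name}"] = float(np.trapz(psd[mask], freqs[mask]))
    return features
```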
| Study or Classifier | F1-Score (Valence) | F1-Score (Arousal) | Accuracy (Valence) | Accuracy (Arousal) |
|---|---|---|---|---|
| LR | 0.669 | 0.65 | 0.702 | 0.688 |
| ARF | 0.825 | 0.826 | 0.82 | 0.846 |
| SRP | 0.834 | 0.831 | 0.826 | 0.847 |
| Miranda-Correa et al. [22] | 0.576 | 0.592 | NR | NR |
| Siddharth et al. [73] | 0.8 | 0.74 | 0.83 | 0.791 |
| Topic et al. [74] | NR | NR | 0.874 | 0.905 |
| Dimension | Subject ID | ARF (Crown) | ARF (Muse) | SRP (Crown) | SRP (Muse) | LR (Crown) | LR (Muse) |
|---|---|---|---|---|---|---|---|
| Arousal | 3 | 0.902 | 0.885 | 0.895 | 0.898 | 0.8 | 0.785 |
| Arousal | 4 | 0.836 | 0.794 | 0.838 | 0.845 | 0.793 | 0.604 |
| Arousal | 5 | 0.651 | 0.812 | 0.699 | 0.827 | 0.764 | 0.682 |
| Arousal | 6 | 0.836 | 0.843 | 0.863 | 0.889 | 0.771 | 0.62 |
| Arousal | 7 | 0.958 | 0.833 | 0.933 | 0.878 | 0.841 | 0.725 |
| Arousal | 8 | 0.889 | 0.749 | 0.893 | 0.783 | 0.683 | 0.584 |
| Arousal | 9 | 0.888 | 0.921 | 0.836 | 0.931 | 0.756 | 0.703 |
| Arousal | 10 | 0.969 | 0.903 | 0.951 | 0.915 | 0.816 | 0.898 |
| Arousal | 11 | 0.938 | 0.768 | 0.955 | 0.861 | 0.765 | 0.908 |
| Arousal | 12 | 0.864 | 0.871 | 0.884 | 0.878 | 0.669 | 0.697 |
| Arousal | 13 | 0.792 | 0.913 | 0.8 | 0.887 | 0.701 | 0.734 |
| Arousal | Mean | 0.866 | 0.845 | 0.868 | 0.872 | 0.76 | 0.722 |
| Valence | 3 | 0.837 | 0.887 | 0.811 | 0.876 | 0.716 | 0.712 |
| Valence | 4 | 0.841 | 0.69 | 0.773 | 0.859 | 0.804 | 0.524 |
| Valence | 5 | 0.546 | 0.734 | 0.639 | 0.748 | 0.781 | 0.58 |
| Valence | 6 | 0.713 | 0.687 | 0.785 | 0.778 | 0.73 | 0.393 |
| Valence | 7 | 0.935 | 0.666 | 0.926 | 0.757 | 0.776 | 0.616 |
| Valence | 8 | 0.813 | 0.551 | 0.819 | 0.623 | 0.594 | 0.444 |
| Valence | 9 | 0.812 | 0.844 | 0.721 | 0.863 | 0.72 | 0.561 |
| Valence | 10 | 0.982 | 0.859 | 0.979 | 0.871 | 0.74 | 0.874 |
| Valence | 11 | 0.924 | 0.653 | 0.957 | 0.811 | 0.64 | 0.884 |
| Valence | 12 | 0.889 | 0.756 | 0.914 | 0.784 | 0.633 | 0.663 |
| Valence | 13 | 0.584 | 0.826 | 0.6 | 0.775 | 0.543 | 0.595 |
| Valence | Mean | 0.807 | 0.735 | 0.819 | 0.787 | 0.698 | 0.622 |
| Subject ID | F1-Score (Valence) | F1-Score (Arousal) | Accuracy (Valence) | Accuracy (Arousal) |
|---|---|---|---|---|
| 14 | 0.521 | 0.357 | 0.562 | 0.385 |
| 15 | 0.601 | 0.64 | 0.609 | 0.575 |
| 16 | 0.353 | 0.73 | 0.502 | 0.575 |
| 17 | 0.512 | 0.383 | 0.533 | 0.24 |
| Participant ID | Valence (Crown) | Valence (Muse) | Arousal (Crown) | Arousal (Muse) |
|---|---|---|---|---|
| 3 | 0.338 | 0.584 | 0.614 | 0.718 |
| 4 | 0.674 | 0.429 | 0.551 | 0.575 |
| 5 | 0.282 | 0.554 | 0.355 | 0.69 |
| 6 | 0.357 | 0.27 | 0.608 | 0.619 |
| 7 | 0.568 | 0.574 | 0.698 | 0.769 |
| 8 | 0.266 | 0.286 | 0.561 | 0.574 |
| 9 | 0.553 | 0.53 | 0.719 | 0.749 |
| 10 | 0.767 | 0.561 | 0.784 | 0.691 |
| 11 | 0.469 | 0.207 | 0.676 | 0.418 |
| 12 | 0.443 | 0.51 | 0.575 | 0.679 |
| 13 | 0.335 | 0.451 | 0.646 | 0.711 |
| Mean | 0.476 | 0.46 | 0.637 | 0.637 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Moontaha, S.; Schumann, F.E.F.; Arnrich, B. Online Learning for Wearable EEG-Based Emotion Classification. Sensors 2023, 23, 2387. https://doi.org/10.3390/s23052387