EEG Classification of Motor Imagery Using a Novel Deep Learning Framework
Figure 1: Diagram of a trial and timings during a session of BCI Competition IV dataset 2b (a) and our own dataset (b).
Figure 2: Input images with 3 and 5 electrodes. (a) Input image with 3 electrodes; (b) input image with 5 electrodes.
Figure 3: The proposed CNN with 3 and 5 electrodes. (a) CNN with 3 electrodes; (b) CNN with 5 electrodes.
Figure 4: A training-time variational autoencoder implemented as a feedforward neural network, without (left) and with (right) the "reparameterization trick" [24].
Figure 5: The proposed CNN-VAE network with 3 and 5 electrodes. (a) CNN-VAE with 3 electrodes; (b) CNN-VAE with 5 electrodes.
Figure 6: ANOVA statistics of the different methods. (a) On BCI Competition IV dataset 2b; (b) on our own dataset with 3 electrodes; (c) on our own dataset with 5 electrodes.
Figure 7: Distribution of both datasets. (a) BCI Competition IV dataset 2b; (b) our own dataset.
Figure 8: Input images for the left- and right-hand classes of subject 2 and subject 4.
Figure 9: Input images for the left- and right-hand classes of subject D and subject Z.
Figure 10: Effect of the number of epochs on kappa value and training time. (a) On BCI Competition IV dataset 2b; (b) on our own dataset with 3 electrodes; (c) on our own dataset with 5 electrodes.
Figure 11: Kappa value and standard deviation results on our own dataset with 3 and 5 electrodes.
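Figure 4 contrasts a VAE trained without and with the reparameterization trick. As an illustrative sketch (not the authors' implementation; the function name and NumPy stand-in for a deep learning framework are ours), the trick rewrites sampling z ~ N(mu, sigma²) as z = mu + sigma · eps with eps ~ N(0, I), so the random draw is moved into an input and gradients can flow through mu and log_var:

```python
import numpy as np

def reparameterize(mu, log_var, rng=None):
    """Sample z ~ N(mu, diag(exp(log_var))) via z = mu + sigma * eps,
    with eps ~ N(0, I). Because eps carries all the randomness, z is a
    deterministic (differentiable) function of mu and log_var."""
    rng = rng or np.random.default_rng(0)
    sigma = np.exp(0.5 * log_var)        # log-variance -> standard deviation
    eps = rng.standard_normal(np.shape(mu))
    return mu + sigma * eps
```

In an actual VAE, mu and log_var are the encoder's outputs, and the same expression is written with the framework's tensors so backpropagation reaches the encoder weights.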
Abstract
1. Introduction
2. Datasets
3. Deep Learning Framework
3.1. Input Form
3.2. Convolutional Neural Network (CNN)
3.3. Variational Autoencoder (VAE)
3.4. Combined CNN-VAE
4. Results and Discussion
4.1. Results
4.2. Discussion
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Birbaumer, N. Breaking the silence: Brain-computer interfaces (BCI) for communication and motor control. Psychophysiology 2006, 43, 517–532.
- Shih, J.J.; Krusienski, D.J.; Wolpaw, J.R. Brain-computer interfaces in medicine. Mayo Clin. Proc. 2012, 87, 268–279.
- Kong, W.; Lin, W.; Babiloni, F.; Hu, S.; Borghini, G. Investigating Driver Fatigue versus Alertness Using the Granger Causality Network. Sensors 2015, 15, 19181–19198.
- Pfurtscheller, G.; Da Silva, F.L. Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol. 1999, 110, 1842–1857.
- Tang, Z.; Sun, S.; Zhang, S. A Brain-Machine Interface Based on ERD/ERS for an Upper-Limb Exoskeleton Control. Sensors 2016, 16, 2050.
- Lemm, S.; Schafer, C.; Curio, G. BCI competition 2003-data set III: Probabilistic modeling of sensorimotor μ rhythms for classification of imaginary hand movements. IEEE Trans. Biomed. Eng. 2004, 51, 1077–1080.
- Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279.
- Ramoser, H.; Muller-Gerking, J.; Pfurtscheller, G. Optimal Spatial Filtering of Single Trial EEG During Imagined Hand Movement. IEEE Trans. Rehabil. Eng. 2000, 8, 441–446.
- Xygonakis, I.; Athanasiou, A. Decoding Motor Imagery through Common Spatial Pattern Filters at the EEG Source Space. Comput. Intell. Neurosci. 2018, 2018, 7957408.
- Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Filter bank common spatial pattern (FBCSP) in brain-computer interface. In Proceedings of the IEEE International Joint Conference on Neural Networks, Hong Kong, China, 1–8 June 2008; pp. 2391–2398.
- Yang, H.; Sakhavi, S.; Ang, K.K. On the use of convolutional neural networks and augmented CSP features for multi-class motor imagery of EEG signals classification. In Proceedings of the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015.
- Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
- Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification, 2nd ed.; Wiley-Interscience: New York, NY, USA, 2001.
- Jensen, F.V. Bayesian Networks and Decision Graphs; Springer: New York, NY, USA, 2001; p. 34.
- Lotte, F.; Bougrain, L.; Cichocki, A.; Clerc, M.; Congedo, M.; Rakotomamonjy, A.; Yger, F. A review of classification algorithms for EEG-based brain-computer interfaces: A 10 year update. J. Neural Eng. 2018, 15, 031005.
- Bengio, Y. Learning deep architectures for AI. Found. Trends Mach. Learn. 2009, 2, 1–127.
- Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
- Lu, N.; Li, T.; Ren, X.; Miao, H. A Deep Learning Scheme for Motor Imagery Classification based on Restricted Boltzmann Machines. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 566–576.
- Schirrmeister, R.T. Deep Learning with Convolutional Neural Networks for EEG Decoding and Visualization. Hum. Brain Mapp. 2017, 38, 5391–5420.
- Bashivan, P.; Rish, I.; Yeasin, M.; Codella, N. Learning representations from EEG with deep recurrent-convolutional neural networks. arXiv 2016, arXiv:1511.06448.
- Tabar, Y.R.; Halici, U. A novel deep learning approach for classification of EEG motor imagery signals. J. Neural Eng. 2017, 14, 016003.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the NIPS, Lake Tahoe, NV, USA, 3–8 December 2012; pp. 1097–1105.
- Kingma, D.P.; Welling, M. Auto-encoding variational Bayes. In Proceedings of the ICLR, Banff, AB, Canada, 14–16 April 2014.
- Doersch, C. Tutorial on variational autoencoders. arXiv 2016, arXiv:1606.05908.
- Wu, W.; Chen, Z.; Gao, S.; Brown, E.N. A hierarchical Bayesian approach for learning sparse spatio-temporal decompositions of multichannel EEG. NeuroImage 2011, 56, 1929–1945.
- Wainwright, M.J.; Simoncelli, E.P. Scale mixtures of Gaussians and the statistics of natural images. Adv. Neural Inf. Process. Syst. 2000, 12, 855–861.
- Leeb, R.; Lee, F.; Keinrath, C.; Scherer, R.; Bischof, H.; Pfurtscheller, G. Brain-computer communication: Motivation, aim, and impact of exploring a virtual apartment. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 473–482.
Dataset | Subjects | Channels | Trials |
---|---|---|---|
Competition IV dataset 2b | 9 | C3, Cz, C4 | 400 |
Our own dataset | 5 | C3, C1, Cz, C2, C4 | 400 |
Kappa values (mean ± std. dev.) on BCI Competition IV dataset 2b:

Subject | FBCSP | CNN | CNN-SAE | CNN-VAE
---|---|---|---|---
1 | 0.546 ± 0.017 | 0.488 ± 0.158 | 0.517 ± 0.095 | 0.522 ± 0.076
2 | 0.208 ± 0.028 | 0.289 ± 0.068 | 0.324 ± 0.065 | 0.346 ± 0.068
3 | 0.244 ± 0.023 | 0.427 ± 0.071 | 0.494 ± 0.084 | 0.436 ± 0.060
4 | 0.888 ± 0.003 | 0.888 ± 0.008 | 0.905 ± 0.017 | 0.908 ± 0.009
5 | 0.692 ± 0.005 | 0.593 ± 0.083 | 0.655 ± 0.060 | 0.646 ± 0.075
6 | 0.534 ± 0.012 | 0.495 ± 0.073 | 0.579 ± 0.099 | 0.642 ± 0.057
7 | 0.409 ± 0.013 | 0.409 ± 0.079 | 0.488 ± 0.065 | 0.550 ± 0.072
8 | 0.413 ± 0.013 | 0.443 ± 0.133 | 0.494 ± 0.106 | 0.506 ± 0.083
9 | 0.583 ± 0.010 | 0.415 ± 0.050 | 0.463 ± 0.152 | 0.518 ± 0.078
Average | 0.502 ± 0.014 | 0.494 ± 0.080 | 0.547 ± 0.083 | 0.564 ± 0.065
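The tables report Cohen's kappa, i.e., classification agreement corrected for chance. As an illustrative sketch (the function name is ours, not from the paper), kappa can be computed from a confusion matrix like this:

```python
import numpy as np

def cohen_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: true labels, columns: predicted labels)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                 # observed agreement (accuracy)
    pe = (cm.sum(0) @ cm.sum(1)) / n**2   # chance agreement from the marginals
    return (po - pe) / (1.0 - pe)
```

For a balanced two-class task with symmetric errors, pe = 0.5, so kappa reduces to 2 × accuracy − 1 (e.g., 90% accuracy gives kappa = 0.8).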
Kappa values (mean ± std. dev.) on our own dataset with 3 electrodes:

Subject | FBCSP | CNN | CNN-SAE | CNN-VAE
---|---|---|---|---
D | 0.410 ± 0.029 | 0.319 ± 0.057 | 0.412 ± 0.038 | 0.406 ± 0.045
S | 0.421 ± 0.061 | 0.396 ± 0.149 | 0.456 ± 0.132 | 0.462 ± 0.118
W | 0.503 ± 0.022 | 0.488 ± 0.097 | 0.534 ± 0.084 | 0.558 ± 0.081
Z | 0.688 ± 0.008 | 0.669 ± 0.035 | 0.711 ± 0.036 | 0.748 ± 0.027
Y | 0.611 ± 0.037 | 0.576 ± 0.074 | 0.647 ± 0.068 | 0.666 ± 0.072
Average | 0.527 ± 0.031 | 0.490 ± 0.082 | 0.552 ± 0.072 | 0.568 ± 0.068
Kappa values (mean ± std. dev.) on our own dataset with 5 electrodes:

Subject | FBCSP | CNN | CNN-SAE | CNN-VAE
---|---|---|---|---
D | 0.410 ± 0.022 | 0.319 ± 0.065 | 0.488 ± 0.040 | 0.459 ± 0.047
S | 0.421 ± 0.056 | 0.396 ± 0.138 | 0.512 ± 0.122 | 0.501 ± 0.109
W | 0.503 ± 0.029 | 0.488 ± 0.107 | 0.531 ± 0.084 | 0.585 ± 0.081
Z | 0.704 ± 0.006 | 0.691 ± 0.050 | 0.723 ± 0.046 | 0.771 ± 0.031
Y | 0.611 ± 0.030 | 0.576 ± 0.085 | 0.668 ± 0.075 | 0.702 ± 0.068
Average | 0.530 ± 0.028 | 0.494 ± 0.089 | 0.584 ± 0.073 | 0.603 ± 0.067
Dataset | FBCSP (p-Value) | CNN (p-Value) | CNN-SAE (p-Value) | CNN-VAE (p-Value) |
---|---|---|---|---|
BCI Competition IV dataset 2b | 0.116 | 0.653 | 0.534 | 0.434 |
Our own dataset with 3 electrodes | 0.188 | 0.619 | 0.479 | 0.434 |
Our own dataset with 5 electrodes | 0.174 | 0.598 | 0.479 | 0.416 |
Dataset | CNN-VAE vs. CNN-SAE (p-Value) | CNN-VAE vs. CNN (p-Value) | CNN-VAE vs. FBCSP (p-Value) | CNN-SAE vs. CNN (p-Value) | CNN-SAE vs. FBCSP (p-Value) | CNN vs. FBCSP (p-Value) |
---|---|---|---|---|---|---|
BCI Competition IV dataset 2b | 0.822 | 0.381 | 0.362 | 0.510 | 0.473 | 0.901 |
Our own dataset with 3 electrodes | 0.854 | 0.402 | 0.632 | 0.479 | 0.753 | 0.666 |
Our own dataset with 5 electrodes | 0.805 | 0.249 | 0.392 | 0.293 | 0.428 | 0.690 |
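The p-values above come from ANOVA-style comparisons of the methods (Figure 6). As background only, not the authors' exact procedure, a one-way ANOVA reduces each comparison to an F statistic, the ratio of between-group to within-group variance, from which the p-value is read off the F distribution:

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA over independent groups:
    between-group mean square over within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)       # total sample count
    k = len(groups)                       # number of groups
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))
```

Large p-values, as in the tables, mean the F statistic is small relative to its null distribution, i.e., the between-method differences are not statistically significant at conventional levels.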
Dataset | Kernel Size | Kappa | Time (s)
---|---|---|---
BCI Competition IV dataset 2b | | 0.492 | 154
| | 0.546 | 191
| | 0.564 | 226
| | 0.521 | 274
| | 0.521 | 325
Our own dataset with 3 electrodes | | 0.500 | 153
| | 0.543 | 190
| | 0.568 | 226
| | 0.526 | 274
| | 0.522 | 325
Our own dataset with 5 electrodes | | 0.525 | 155
| | 0.572 | 190
| | 0.603 | 227
| | 0.562 | 275
| | 0.550 | 326
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Dai, M.; Zheng, D.; Na, R.; Wang, S.; Zhang, S. EEG Classification of Motor Imagery Using a Novel Deep Learning Framework. Sensors 2019, 19, 551. https://doi.org/10.3390/s19030551