A Lightweight GCT-EEGNet for EEG-Based Individual Recognition Under Diverse Brain Conditions
Figure 1. The architecture of the attention-based EEGNet model (GCT-EEGNet), where B is the number of frequency bands, T is the number of time points, C is the number of EEG channels, and α, β, γ, θ, and δ are the alpha, beta, gamma, theta, and delta frequency bands, respectively. TConv, CConv, BN, and GAP denote the temporal convolution, channel convolution, batch normalization, and global average pooling layers, respectively; v is the learned feature vector and l is the subject label.

Figure 2. The gated channel transformation (GCT) module, where B, T, and C are as in Figure 1. η denotes the trainable embedding weights, W represents the global context information, λ and ω represent the gating weights and biases, and κ is the output of the tanh function. Different colors in the output a^(1) indicate the varying significance assigned to each band.

Figure 3. Positions of all 64 electrodes (channels) in the 10–20 system; the highlighted channels were used in the experiments [24].

Figure 4. Identification performance: CMC curves for the combined dataset.

Figure 5. Verification performance: DET curves for the combined dataset.

Figure 6. The effect of frequency bands on the combined dataset: (a) the GCT attention mechanism weights; (b) the corresponding mean SHAP values.

Figure 7. Five channel configurations, each highlighting a different scalp region: (a) frontal (F); (b) central and parietal (CP); (c) temporal (T); (d) occipital and parietal (OP); (e) frontal and parietal (FP).

Figure 8. Performance of the proposed method across five different sets of channels: (a) all frequency bands; (b) gamma band, where CRR (5) denotes the performance of the five distinct channel sets, and CRR (5)–CRR (32) the performance difference between each five-channel subset and the full 32 channels.

Figure 9. t-SNE visualization of the high-dimensional features from the GAP layer.
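The GCT block in Figure 2 follows the gated channel transformation of Yang et al. [35]. Below is a minimal PyTorch sketch mapping the caption's symbols to code (η as the embedding weights, W as the global context, λ and ω as the gating weights and biases, κ as the tanh output); the ε constant and the exact normalization are assumptions based on the original GCT formulation, not on this paper's implementation.

```python
import torch
import torch.nn as nn

class GCT(nn.Module):
    """Gated channel transformation over the band axis of a (N, B, C, T) input.

    For B = 5 bands this has 3 * B = 15 learnable parameters, matching the
    GCT row of the architecture table below.
    """
    def __init__(self, n_bands: int = 5, eps: float = 1e-5):
        super().__init__()
        self.eta = nn.Parameter(torch.ones(1, n_bands, 1, 1))     # embedding weights (eta)
        self.lam = nn.Parameter(torch.zeros(1, n_bands, 1, 1))    # gating weights (lambda)
        self.omega = nn.Parameter(torch.zeros(1, n_bands, 1, 1))  # gating biases (omega)
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global context W: l2-norm embedding of each band, scaled by eta.
        w = self.eta * torch.sqrt(x.pow(2).sum(dim=(2, 3), keepdim=True) + self.eps)
        # Normalize the embedding across the band axis (channel normalization).
        w_hat = w * torch.rsqrt(w.pow(2).mean(dim=1, keepdim=True) + self.eps)
        # Gating: kappa is the tanh output; (1 + kappa) rescales each band.
        kappa = torch.tanh(self.lam * w_hat + self.omega)
        return x * (1.0 + kappa)

a0 = torch.randn(8, 5, 32, 128)  # batch of band-decomposed trials (B=5, C=32, T=128)
a1 = GCT()(a0)                   # a^(1): band-reweighted output, same shape
```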
Abstract
1. Introduction
- A lightweight deep neural network that combines CNN design ideas with an attention mechanism to focus selectively on salient frequency bands, extracting discriminative, identity-related features from an EEG trial under various brain conditions.
- A robust EEG-based identification and authentication system that is agnostic to brain conditions (e.g., resting states, emotions, alcoholism) and reveals or authenticates a subject's identity from a short EEG trial of one second (a trial-segmentation sketch follows this list).
- A thorough evaluation of the proposed system on a large dataset of 263 subjects whose EEG trials were captured in various brain states.
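To make the one-second-trial setting concrete, here is a minimal NumPy sketch of slicing a continuous recording into non-overlapping one-second trials. The 128 Hz sampling rate and 32-channel layout are assumptions inferred from the input sizes reported later (32 × 128), and the function name `segment_trials` is illustrative, not the authors' code.

```python
import numpy as np

def segment_trials(recording: np.ndarray, fs: int = 128, trial_sec: float = 1.0) -> np.ndarray:
    """Cut a continuous (channels, samples) recording into non-overlapping trials.

    Returns an array of shape (n_trials, channels, fs * trial_sec),
    e.g., (60, 32, 128) for a 60-second, 32-channel recording at 128 Hz.
    """
    step = int(fs * trial_sec)
    n = recording.shape[1] // step
    trimmed = recording[:, : n * step]                       # drop the incomplete tail
    return trimmed.reshape(recording.shape[0], n, step).transpose(1, 0, 2)

eeg = np.random.randn(32, 60 * 128)   # hypothetical 60-s, 32-channel recording
print(segment_trials(eeg).shape)      # (60, 32, 128)
```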
2. Related Work
3. The Proposed Method
3.1. Problem Specification and Formulation
3.2. Deep-Learning-Based Feature Extractor
3.2.1. Data Preprocessing
3.2.2. GCT Attention Block
3.2.3. Temporal Convolution Block
3.2.4. Depth-Wise Channel Convolution Block
3.2.5. Separable Temporal Convolution Block
3.2.6. Training of GCT–EEGNet
4. Evaluation Protocol
4.1. Datasets
4.2. Performance Metrics
5. Experimental Results and Discussion
5.1. Ablation Study
5.1.1. Input Configuration and Optimizers
5.1.2. Number of Kernels and Activation Functions
| Experiment | Choices | DEAP | PhysioNet | EEG UCI | Combined |
|---|---|---|---|---|---|
| *Raw 2D input without DWT decomposition (32 × 128)* | | | | | |
| Optimizers, kernels 8, 16 | Adam | 99.87 ± 0.08 | 74.51 ± 2.75 | 49.10 ± 7.29 | 69.80 ± 3.02 |
| | AdamW | 99.92 ± 0.06 | 73.72 ± 2.14 | 50.50 ± 5.26 | 69.81 ± 3.05 |
| *Raw 3D input with DWT decomposition (5 × 32 × 128)* | | | | | |
| Optimizers, kernels 8, 16 | Adam | 99.88 ± 0.05 | 76.90 ± 1.63 | 62.57 ± 2.74 | 74.56 ± 1.35 |
| | AdamW | 99.87 ± 0.11 | 77.57 ± 0.87 | 64.22 ± 3.80 | 74.69 ± 1.41 |
| Number of kernels | 32, 64 | 99.99 ± 0.02 | 99.13 ± 0.15 | 95.58 ± 0.64 | 98.69 ± 0.19 |
| | 64, 128 | 100 ± 0.01 | 99.75 ± 0.05 | 97.41 ± 0.94 | 99.54 ± 0.08 |
| Activation functions | ReLU [51] | 97.75 ± 0.77 | 99.67 ± 0.07 | 97.75 ± 0.77 | 99.39 ± 0.09 |
| | SiLU [52] | 97.84 ± 0.54 | 99.77 ± 0.07 | 97.84 ± 0.54 | 99.53 ± 0.04 |
| | GELU [39] | 100 ± 0.01 | 99.79 ± 0.06 | 97.90 ± 0.52 | 99.50 ± 0.03 |
| Pooling layer | Max | 100 ± 0.01 | 99.68 ± 0.08 | 97.50 ± 0.61 | 99.38 ± 0.09 |
| GAP layer | GAP | 100 ± 0.00 | 99.80 ± 0.06 | 98.51 ± 0.40 | 99.58 ± 0.08 |
| Attention layer | SE | 100 ± 0.00 | 99.68 ± 0.06 | 98.73 ± 0.36 | 99.54 ± 0.10 |
| | GCT | 100 ± 0.00 | 99.84 ± 0.05 | 98.87 ± 0.33 | 99.66 ± 0.04 |
| Dropout | 0.5 | - | - | - | 99.66 ± 0.04 |
| | 0.25 | - | - | - | 99.63 ± 0.06 |
| | Without dropout | - | - | - | 99.24 ± 0.19 |
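The jump from the 2D rows to the 3D rows above comes from feeding the network one band-limited copy of the trial per DWT sub-band. The sketch below, using PyWavelets, shows one way to build such a 5 × 32 × 128 input; the db4 mother wavelet and 4-level decomposition are assumptions chosen so the five sub-bands roughly align with δ, θ, α, β, and γ at a 128 Hz sampling rate, not settings confirmed by the paper.

```python
import numpy as np
import pywt

def dwt_band_decompose(trial: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Split a (channels, time) EEG trial into band-limited full-length copies.

    Each sub-band is reconstructed back to the time domain with all other
    DWT coefficients zeroed, giving shape (level + 1, channels, time).
    """
    n_ch, n_t = trial.shape
    bands = np.zeros((level + 1, n_ch, n_t))
    for c in range(n_ch):
        coeffs = pywt.wavedec(trial[c], wavelet, level=level)
        for b in range(level + 1):
            kept = [co if i == b else np.zeros_like(co) for i, co in enumerate(coeffs)]
            rec = pywt.waverec(kept, wavelet)
            bands[b, c] = rec[:n_t]          # trim the odd reconstruction sample, if any
    return bands

x = np.random.randn(32, 128)                 # one 1-s trial: 32 channels at 128 Hz
print(dwt_band_decompose(x).shape)           # (5, 32, 128)
```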
5.1.3. Pooling Layer
5.1.4. GAP and Attention Layer
5.1.5. The Effect of Employing GELU with Dropout
5.2. The Identification and Verification Results
5.3. Robustness to Diverse Brain States
5.4. The Effects of Different Frequency Bands
5.5. The Effect of Channel Reduction
5.6. Comparison with the State of the Art
5.7. Visualization of the Features Learned by the Model from EEG Segments
5.8. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Zhang, D.D. Automated Biometrics: Technologies and Systems; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7.
2. Jain, A.K.; Ross, A.; Prabhakar, S. An Introduction to Biometric Recognition. IEEE Trans. Circuits Syst. Video Technol. 2004, 14, 4–20.
3. Poulos, M.; Rangoussi, M.; Chrissikopoulos, V.; Evangelou, A. Parametric Person Identification from the EEG Using Computational Geometry. In Proceedings of ICECS'99, the 6th IEEE International Conference on Electronics, Circuits and Systems, Paphos, Cyprus, 5–8 September 1999; pp. 1005–1008.
4. Gui, Q.; Ruiz-Blondet, M.V.; Laszlo, S.; Jin, Z. A Survey on Brain Biometrics. ACM Comput. Surv. 2019, 51, 1–38.
5. Van Dis, H.; Corner, M.; Dapper, R.; Hanewald, G.; Kok, H. Individual Differences in the Human Electroencephalogram during Quiet Wakefulness. Electroencephalogr. Clin. Neurophysiol. 1979, 47, 87–94.
6. Zhang, X.; Yao, L.; Wang, X.; Zhang, W.; Zhang, S.; Liu, Y. Know Your Mind: Adaptive Cognitive Activity Recognition with Reinforced CNN. In Proceedings of the 2019 IEEE International Conference on Data Mining (ICDM), Beijing, China, 8–11 November 2019; pp. 896–905.
7. Chen, J.X.; Mao, Z.J.; Yao, W.X.; Huang, Y.F. EEG-Based Biometric Identification with Convolutional Neural Network. Multimed. Tools Appl. 2019, 79, 1–21.
8. Xu, T.; Wang, H.; Lu, G.; Wan, F.; Deng, M.; Qi, P.; Bezerianos, A.; Guan, C.; Sun, Y. E-Key: An EEG-Based Biometric Authentication and Driving Fatigue Detection System. IEEE Trans. Affect. Comput. 2021, 14, 864–877.
9. Maiorana, E. Learning Deep Features for Task-Independent EEG-Based Biometric Verification. Pattern Recognit. Lett. 2021, 143, 122–129.
10. Seha, S.N.A.; Hatzinakos, D. Longitudinal Assessment of EEG Biometrics under Auditory Stimulation: A Deep Learning Approach. In Proceedings of the 2021 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, 23–27 August 2021; pp. 1386–1390.
11. Das, B.B.; Kumar, P.; Kar, D.; Ram, S.K.; Babu, K.S.; Mohapatra, R.K. A Spatio-Temporal Model for EEG-Based Person Identification. Multimed. Tools Appl. 2019, 78, 28157–28177.
12. Jijomon, C.M.; Vinod, A.P. Person-Identification Using Familiar-Name Auditory Evoked Potentials from Frontal EEG Electrodes. Biomed. Signal Process. Control 2021, 68, 102739.
13. Sun, Y.; Lo, F.P.-W.; Lo, B. EEG-Based User Identification System Using 1D-Convolutional Long Short-Term Memory Neural Networks. Expert Syst. Appl. 2019, 125, 259–267.
14. Wilaiprasitporn, T.; Ditthapron, A.; Matchaparn, K.; Tongbuasirilai, T.; Banluesombatkul, N.; Chuangsuwanich, E. Affective EEG-Based Person Identification Using the Deep Learning Approach. IEEE Trans. Cogn. Dev. Syst. 2019, 12, 486–496.
15. Yang, S.; Deravi, F. On the Usability of Electroencephalographic Signals for Biometric Recognition: A Survey. IEEE Trans. Hum.-Mach. Syst. 2017, 47, 958–969.
16. Maiorana, E.; La Rocca, D.; Campisi, P. EEG-Based Biometric Recognition Using EigenBrains. In Proceedings of the 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Turin, Italy, 29 June–5 July 2015; pp. 1–6.
17. Rodrigues, D.; Silva, G.F.; Papa, J.P.; Marana, A.N.; Yang, X.-S. EEG-Based Person Identification through Binary Flower Pollination Algorithm. Expert Syst. Appl. 2016, 62, 81–90.
18. Thomas, K.P.; Vinod, A.P. EEG-Based Biometric Authentication Using Gamma Band Power during Rest State. Circuits Syst. Signal Process. 2018, 37, 277–289.
19. Jijomon, C.M.; Vinod, A.P. EEG-Based Biometric Identification Using Frequently Occurring Maximum Power Spectral Features. In Proceedings of the 2018 IEEE Applied Signal Processing Conference (ASPCON), Kolkata, India, 7–9 December 2018; pp. 249–252.
20. Nakamura, T.; Goverdovsky, V.; Mandic, D.P. In-Ear EEG Biometrics for Feasible and Readily Collectable Real-World Person Authentication. IEEE Trans. Inf. Forensics Secur. 2017, 13, 648–661.
21. Zhang, S.; Sun, L.; Mao, X.; Hu, C.; Liu, P. Review on EEG-Based Authentication Technology. Comput. Intell. Neurosci. 2021, 2021, 5229576.
22. Stassen, H.H. Computerized Recognition of Persons by EEG Spectral Patterns. Electroencephalogr. Clin. Neurophysiol. 1980, 49, 190–194.
23. Maiorana, E. Deep Learning for EEG-Based Biometric Recognition. Neurocomputing 2020, 410, 374–386.
24. Jin, X.; Tang, J.; Kong, X.; Peng, Y.; Cao, J.; Zhao, Q.; Kong, W. CTNN: A Convolutional Tensor-Train Neural Network for Multi-Task Brainprint Recognition. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 29, 103–112.
25. Debie, E.; Moustafa, N.; Vasilakos, A. Session Invariant EEG Signatures Using Elicitation Protocol Fusion and Convolutional Neural Network. IEEE Trans. Dependable Secur. Comput. 2021, 9, 2488–2500.
26. Bidgoly, A.J.; Bidgoly, H.J.; Arezoumand, Z. Towards a Universal and Privacy Preserving EEG-Based Authentication System. Sci. Rep. 2022, 12, 1–12.
27. Alsumari, W.; Hussain, M.; Alshehri, L.; Aboalsamh, H.A. EEG-Based Person Identification and Authentication Using Deep Convolutional Neural Network. Axioms 2023, 12, 74.
28. Fallahi, M.; Strufe, T.; Arias-Cabarcos, P. BrainNet: Improving Brainwave-Based Biometric Recognition with Siamese Networks. In Proceedings of the 2023 IEEE International Conference on Pervasive Computing and Communications (PerCom), Atlanta, GA, USA, 13–17 March 2023; pp. 53–60.
29. Lawhern, V.J.; Solon, A.J.; Waytowich, N.R.; Gordon, S.M.; Hung, C.P.; Lance, B.J. EEGNet: A Compact Convolutional Neural Network for EEG-Based Brain–Computer Interfaces. J. Neural Eng. 2018, 15, 056013.
30. Fraschini, M.; Hillebrand, A.; Demuru, M.; Didaci, L.; Marcialis, G.L. An EEG-Based Biometric System Using Eigenvector Centrality in Resting State Brain Networks. IEEE Signal Process. Lett. 2014, 22, 666–670.
31. Kaur, B.; Singh, D.; Roy, P.P. A Novel Framework of EEG-Based User Identification by Analyzing Music-Listening Behavior. Multimed. Tools Appl. 2017, 76, 25581–25602.
32. Kawabata, N. A Nonstationary Analysis of the Electroencephalogram. IEEE Trans. Biomed. Eng. 1973, 444–452.
33. Kumari, P.; Vaish, A. Brainwave Based User Identification System: A Pilot Study in Robotics Environment. Robot. Auton. Syst. 2015, 65, 15–23.
34. Ting, W.; Guo-Zheng, Y.; Bang-Hua, Y.; Hong, S. EEG Feature Extraction Based on Wavelet Packet Decomposition for Brain Computer Interface. Measurement 2008, 41, 618–625.
35. Yang, Z.; Zhu, L.; Wu, Y.; Yang, Y. Gated Channel Transformation for Visual Recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13 June 2020; pp. 11794–11803.
36. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv 2015, arXiv:1502.03167.
37. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21 July 2017; pp. 1251–1258.
38. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
39. Hendrycks, D.; Gimpel, K. Gaussian Error Linear Units (GELUs). arXiv 2016, arXiv:1606.08415.
40. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929.
41. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
42. Loshchilov, I.; Hutter, F. Decoupled Weight Decay Regularization. arXiv 2017, arXiv:1711.05101.
43. Yao, Y.; Rosasco, L.; Caponnetto, A. On Early Stopping in Gradient Descent Learning. Constr. Approx. 2007, 26, 289–315.
44. Lin, C.; Kumar, A. A CNN-Based Framework for Comparison of Contactless to Contact-Based Fingerprints. IEEE Trans. Inf. Forensics Secur. 2018, 14, 662–676.
45. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.-S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis Using Physiological Signals. IEEE Trans. Affect. Comput. 2011, 3, 18–31.
46. Goldberger, A.L.; Amaral, L.A.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.-K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 2000, 101, e215–e220.
47. Snodgrass, J.G.; Vanderwart, M. A Standardized Set of 260 Pictures: Norms for Name Agreement, Image Agreement, Familiarity, and Visual Complexity. J. Exp. Psychol. Hum. Learn. Mem. 1980, 6, 174.
48. Liu, Z.; Mao, H.; Wu, C.-Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18 June 2022; pp. 11976–11986.
49. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training Data-Efficient Image Transformers & Distillation through Attention. In Proceedings of the International Conference on Machine Learning, Online, 18–24 July 2021; pp. 10347–10357.
50. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11 October 2021; pp. 10012–10022.
51. Agarap, A.F. Deep Learning Using Rectified Linear Units (ReLU). arXiv 2018, arXiv:1803.08375.
52. Elfwing, S.; Uchibe, E.; Doya, K. Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning. Neural Netw. 2018, 107, 3–11.
53. Cui, J.; Yuan, L.; Wang, Z.; Li, R.; Jiang, T. Towards Best Practice of Interpreting Deep Learning Models for EEG-Based Brain Computer Interfaces. arXiv 2022, arXiv:2202.06948.
54. Van der Maaten, L.; Hinton, G. Visualizing Data Using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
55. Wang, M.; El-Fiqi, H.; Hu, J.; Abbass, H.A. Convolutional Neural Networks Using Dynamic Functional Connectivity for EEG-Based Person Identification in Diverse Human States. IEEE Trans. Inf. Forensics Secur. 2019, 14, 3259–3272.
56. Fraschini, M.; Pani, S.M.; Didaci, L.; Marcialis, G.L. Robustness of Functional Connectivity Metrics for EEG-Based Personal Identification over Task-Induced Intra-Class and Inter-Class Variations. Pattern Recognit. Lett. 2019, 125, 49–54.
| Block Layers | #Kernel/Size | Output | Options | Learnable Parameters |
|---|---|---|---|---|
| Input | - | 32 × 128 | | 0 |
| Preprocessing | - | 5 × 32 × 128 | | 0 |
| GCT | - | 5 × 32 × 128 | | 15 |
| Conv2D | 64/1 × 64 | 64 × 32 × 128 | Padding = same | 20,480 |
| BatchNorm | - | 64 × 32 × 128 | | 512 |
| Depthwise Conv2D | D × 64/C × 1 | 64 × 1 × 128 | D = 1, C = 32 | 2048 |
| BatchNorm | - | 64 × 1 × 128 | | 512 |
| GELU | - | 64 × 1 × 128 | | 0 |
| Average Pooling 2D | 1 × 4 | 64 × 1 × 32 | | 0 |
| Dropout | - | 64 × 1 × 32 | Rate = 0.5 | 0 |
| Separable Conv. | 128/1 × 16 | 128 × 1 × 32 | | 9216 |
| BatchNorm | - | 128 × 1 × 32 | | 128 |
| GELU | - | 128 × 1 × 32 | | 0 |
| Average Pooling 2D | 1 × 8 | 128 × 1 × 4 | | 0 |
| Dropout | - | 128 × 1 × 4 | Rate = 0.5 | 0 |
| GAP Layer | - | 128 | | 0 |
| FC + Softmax | - | 236 | | 30,444 |
| **Total Parameters** | | | | **62,764** |
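Read top to bottom, the table maps onto a compact PyTorch model. The sketch below reuses the `GCT` class from the sketch after the figure captions and reproduces the table's kernel counts, so the convolutional parameter counts agree (20,480 for the temporal Conv2D, 2048 for the depthwise channel convolution, 9216 for the separable convolution, 30,444 for the classification head); the `padding="same"` choices and layer ordering follow the table, while everything else is an assumption rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class GCTEEGNet(nn.Module):
    """Minimal sketch of the GCT-EEGNet blocks listed in the table above."""
    def __init__(self, n_bands: int = 5, n_chan: int = 32, n_subjects: int = 236):
        super().__init__()
        self.gct = GCT(n_bands)                        # band attention (15 params), defined earlier
        self.tconv = nn.Sequential(                    # temporal convolution block
            nn.Conv2d(n_bands, 64, (1, 64), padding="same", bias=False),  # 20,480 params
            nn.BatchNorm2d(64))
        self.cconv = nn.Sequential(                    # depth-wise channel convolution block
            nn.Conv2d(64, 64, (n_chan, 1), groups=64, bias=False),        # 2048 params
            nn.BatchNorm2d(64), nn.GELU(),
            nn.AvgPool2d((1, 4)), nn.Dropout(0.5))
        self.sconv = nn.Sequential(                    # separable temporal convolution block
            nn.Conv2d(64, 64, (1, 16), groups=64, padding="same", bias=False),
            nn.Conv2d(64, 128, 1, bias=False),         # depthwise + pointwise: 9216 params
            nn.BatchNorm2d(128), nn.GELU(),
            nn.AvgPool2d((1, 8)), nn.Dropout(0.5))
        self.head = nn.Linear(128, n_subjects)         # FC + softmax head: 30,444 params

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (N, 5, 32, 128)
        x = self.sconv(self.cconv(self.tconv(self.gct(x))))
        v = x.mean(dim=(2, 3))                         # GAP -> feature vector v, (N, 128)
        return self.head(v)                            # subject logits l

logits = GCTEEGNet()(torch.randn(4, 5, 32, 128))       # -> (4, 236)
```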
Experiment #1

| Dataset | Training States | Testing States | CRR | EER |
|---|---|---|---|---|
| DEAP | LL, HH | LH, HL | 99.99 ± 0.04 | 0.0215 ± 0.0183 |
| | LL, HL | LH, HH | 99.98 ± 0.05 | 0.0272 ± 0.0253 |
| | LL, LH | HL, HH | 99.96 ± 0.06 | 0.0283 ± 0.0106 |
| | HH, HL | LL, LH | 99.93 ± 0.09 | 0.1079 ± 0.0216 |
| | HH, LH | LL, HL | 99.98 ± 0.05 | 0.0860 ± 0.0597 |
| | LH, HL | LL, HH | 100.00 ± 0.00 | 0.0523 ± 0.0061 |
| PhysioNet | EO, EC | PHY, IMA | 88.72 ± 1.12 | 0.0514 ± 0.0105 |
| | PHY, IMA | EO, EC | 97.83 ± 1.66 | 0.0047 ± 0.0023 |
| EEG UCI | Alcoholic | Non-Alcoholic | 84.25 ± 0.83 | 0.0087 ± 0.0015 |
| | Non-Alcoholic | Alcoholic | 77.47 ± 0.56 | 0.0041 ± 0.0008 |

Experiment #2

| Dataset | CRR | EER |
|---|---|---|
| DEAP | 100.00 ± 0.00 | 0.0004 ± 0.0008 |
| PhysioNet | 98.90 ± 0.48 | 0.0043 ± 0.0014 |
| EEG UCI | 99.25 ± 0.91 | 0.0009 ± 0.0016 |
| Combined | 99.23 ± 0.50 | 0.0014 ± 0.0008 |
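For reference, the CRR and EER values reported above can be computed from cosine similarities between L2-normalized feature vectors, the matcher named for the proposed approach in the comparison table below. This is a generic evaluation sketch, not the authors' code; the 1000-point threshold grid and the random toy features are arbitrary choices.

```python
import numpy as np

def crr_and_eer(gallery, probes, labels_g, labels_p):
    """Rank-1 correct recognition rate (CRR) and equal error rate (EER)
    from cosine similarities between L2-normalized feature vectors."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    sim = p @ g.T                                           # (n_probes, n_gallery)

    # Identification: a probe counts as correct if its top match shares its label.
    crr = float(np.mean(labels_g[sim.argmax(axis=1)] == labels_p))

    # Verification: sweep a decision threshold and find where FAR meets FRR.
    same = labels_p[:, None] == labels_g[None, :]
    genuine, impostor = sim[same], sim[~same]
    thr = np.linspace(sim.min(), sim.max(), 1000)
    far = np.array([(impostor >= t).mean() for t in thr])   # false accept rate
    frr = np.array([(genuine < t).mean() for t in thr])     # false reject rate
    i = int(np.argmin(np.abs(far - frr)))
    return crr, (far[i] + frr[i]) / 2.0

# Toy example: random 128-D features for 10 subjects, 5 trials each.
feats = np.random.randn(50, 128)
labels = np.repeat(np.arange(10), 5)
print(crr_and_eer(feats[::2], feats[1::2], labels[::2], labels[1::2]))
```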
| Study | Dataset | # Sub | Method | # Ch | TL (s) | CRR (%) | EER (%) | Parameters |
|---|---|---|---|---|---|---|---|---|
| Sun et al. [13]—2019 | PhysioNet | 109 | CNN, LSTM | 16 | 1 | 99.58 | 0.41 | 505,281,566 |
| Wilaiprasitporn et al. [14]—2019 | DEAP | 32 | CNN, LSTM; CNN, GRU | 5 | 10 | >99 | - | 324,032; 496,384 |
| Jin et al. [24]—2020 | MTED | 20 | CTNN | 7 | 1 | 99 | 0.1 | 4600 |
| Bidgoly et al. [26]—2022 | PhysioNet | 109 | CNN, Cosine | 3 | 1 | 98.04 | 1.96 | NA |
| Alsumari et al. [27]—2023 | PhysioNet | 109 | CNN, L1 | 2 | 5 | 99.05 | 0.187 | 74,071 |
| Fallahi et al. [28]—2023 | ERP CORE | 40 | Siamese NW, L2 | 30 | 0.10 | 95.63 | 1.37 | NA |
| | Brain Invaders | 41 | | 32 | | 99.92 | 0.14 | |
| Proposed approach | DEAP | 32 | GCT-EEGNet, Cosine | 32 | 1 | 100.00 | 0.0004 | 35,900 |
| | PhysioNet | 109 | | | | 98.90 | 0.0043 | 45,000 |
| | UCI | 122 | | | | 99.25 | 0.0009 | 62,100 |
| | Combined | 263 | | | | 99.23 | 0.0014 | 62,800 |
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).