Latent Prototype-Based Clustering: A Novel Exploratory Electroencephalography Analysis Approach
Figure 1. Different periods of electroencephalography (EEG) signals of an epileptic patient. (a–e) denote different time points.
Figure 2. Schematic of the EEG clustering solution based on latent prototypes. CWT, continuous wavelet transform; DFM, deep feature map; e_query, latent-space representation of the query signal; μ_k, latent prototype of the kth cluster; x_query, scalogram of the query signal; x_k, baseline scalogram of the kth cluster; DFM_query, deep feature map of the query signal; DFM_k, baseline deep feature map of the kth cluster; α_1, α_2, and α_3, weights.
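For concreteness, here is a minimal sketch of how a scalogram such as x_query can be produced from a raw EEG segment with the continuous wavelet transform, using PyWavelets. The wavelet family, number of scales, and normalization below are illustrative assumptions, not the exact settings used in the paper.

```python
import numpy as np
import pywt

def eeg_to_scalogram(segment, fs=173.61, wavelet="morl", n_scales=64):
    """Turn a 1-D EEG segment into a 2-D time-frequency scalogram via the CWT.

    fs defaults to the Bonn sampling rate (173.61 Hz); the wavelet and the
    number of scales are illustrative choices, not necessarily the paper's.
    """
    scales = np.arange(1, n_scales + 1)
    coeffs, freqs = pywt.cwt(segment, scales, wavelet, sampling_period=1.0 / fs)
    scalogram = np.abs(coeffs)               # magnitude of the CWT coefficients
    scalogram /= scalogram.max() + 1e-12     # normalize to [0, 1]
    return scalogram, freqs

# Example: a synthetic segment standing in for one EEG window
segment = np.random.randn(1024)
x_query, freqs = eeg_to_scalogram(segment)
print(x_query.shape)   # (64, 1024): scales x time
```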
Figure 3. Latent distribution defined as a Gaussian mixture distribution, together with the distributions of the generated data and the real data. Suppose there are three clusters in the dataset; μ_1, μ_2, and μ_3 can be regarded as the latent prototypes of the three clusters.
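A small sketch of a Gaussian-mixture latent distribution of this kind, whose component means μ_k play the role of latent prototypes. The number of components, mixing weights, and spreads are illustrative assumptions; the 100-dimensional latent size follows the generator input in Table 3.

```python
import numpy as np

rng = np.random.default_rng(0)

K, latent_dim = 3, 100                      # 3 clusters, 100-D latent space (as in Table 3)
pi = np.full(K, 1.0 / K)                    # mixing coefficients (assumed uniform here)
mu = rng.normal(0.0, 1.0, size=(K, latent_dim))   # component means = latent prototypes
sigma = np.full((K, latent_dim), 0.2)       # per-dimension standard deviations (illustrative)

def sample_latent(n):
    """Draw n latent vectors z ~ sum_k pi_k N(mu_k, diag(sigma_k^2))."""
    comps = rng.choice(K, size=n, p=pi)     # pick one Gaussian component per sample
    eps = rng.normal(size=(n, latent_dim))
    z = mu[comps] + sigma[comps] * eps      # reparameterized draw from the chosen component
    return z, comps

z_batch, labels = sample_latent(64)
print(z_batch.shape, np.bincount(labels, minlength=K))
```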
Figure 4. Network architecture of W-SLOGAN. The latent distribution is defined as a Gaussian mixture distribution; assume the number of Gaussian components is 3. z_1, z_2, and z_3 denote the latent vectors sampled from the latent space. e_1, e_2, and e_3 denote the encoded vectors of the scalograms calculated by the encoder. μ_1, μ_2, and μ_3 denote the mean vectors of the three Gaussian components, corresponding to the latent prototypes of the three clusters. d_x denotes the output of the discriminator.
Figure 5. Three levels of similarity for clustering. Assume the number of Gaussian components is 3. DFM, deep feature map. μ_1, μ_2, and μ_3 denote the mean vectors of the three Gaussian components, corresponding to the latent prototypes of the three clusters. e_query denotes the latent representation of the query signal. x_1, x_2, and x_3 denote the baseline scalograms of the three clusters. x_query denotes the scalogram of the query signal. DFM_1, DFM_2, and DFM_3 denote the baseline deep feature maps of the three clusters. DFM_query denotes the deep feature map of the query signal.
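The three levels of similarity can be combined into a single weighted score, as sketched below. Using cosine similarity for all three levels and the particular weight values are illustrative stand-ins for the compositive metric of Section 3.4, not the authors' exact formulas.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two arrays, flattened to vectors."""
    a, b = np.ravel(a), np.ravel(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def compositive_similarity(e_query, x_query, dfm_query,
                           mu, x_base, dfm_base,
                           alphas=(0.4, 0.3, 0.3)):
    """Score the query signal against each of the K clusters.

    e_query:   latent representation of the query (from the encoder)
    x_query:   scalogram of the query
    dfm_query: deep feature map of the query (from the discriminator)
    mu, x_base, dfm_base: per-cluster prototypes / baselines (length-K sequences)
    alphas:    weights alpha_1..alpha_3 (illustrative values)
    """
    a1, a2, a3 = alphas
    scores = []
    for mu_k, x_k, dfm_k in zip(mu, x_base, dfm_base):
        s_latent = cosine(e_query, mu_k)    # latent-representation similarity
        s_image = cosine(x_query, x_k)      # image (scalogram) similarity
        s_dfm = cosine(dfm_query, dfm_k)    # deep-feature-map similarity
        scores.append(a1 * s_latent + a2 * s_image + a3 * s_dfm)
    return int(np.argmax(scores)), scores   # assigned cluster and per-cluster scores
```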
Figure 6. Clustering results and intra-class diversity. (A1–A3) show the probability density functions for samples belonging to Cluster 1, Cluster 2, and Cluster 3, respectively. (B) shows the probability density function of Class AB samples clustered into Cluster 1, several high-probability samples with their scalograms (upper row), and several low-probability samples with their scalograms (lower row). (C) shows the same for Class CD samples clustered into Cluster 3.
Figure 7. Purity, ARI, and NMI of the clustering results on the four groups of EEG/intracranial EEG (iEEG) data of the Bonn dataset, obtained separately with the different kinds of similarity.
Figure 8. Purity, ARI, and NMI of the clustering results on the ECoG data of three epileptic subjects of the HUP dataset, obtained separately with the different kinds of similarity.
Figure 9. Impact of the number of iterations used to train W-SLOGAN on clustering performance for the four groups of EEG data of the Bonn dataset, evaluated with Purity, ARI, and NMI.
Figure 10. Impact of the number of iterations used to train W-SLOGAN on clustering performance for the ECoG data of three epileptic subjects of the HUP dataset, evaluated with Purity, ARI, and NMI.
Figure 11. Typical kinds of epileptiform waveforms found by clustering the ictal iEEG data of the Bonn dataset. Each row displays the characteristic waveform of one type of epileptiform discharge, three epileptiform waves of that type that our approach clustered into the same cluster from the iEEG recordings, and the baseline scalogram of that cluster.
Figure 12. Class labels and clustering results of several samples in group AB_CD_E of the Bonn dataset. Samples in each row belong to the same class and those in each column are clustered into the same cluster; each grid cell displays four samples. Row 1 and Column 1 correspond to Class AB (healthy); Row 2 and Column 2 to Class CD (inter-ictal, epileptic); Row 3 and Column 3 to Class E (ictal, epileptic).
Abstract
1. Introduction
2. Materials
2.1. Bonn Dataset
2.2. HUP iEEG Epilepsy Dataset
3. Methods
3.1. Schematic of Latent Prototype-Based Clustering
3.2. Gaussian Mixture Distribution in Latent Space
3.3. W-SLOGAN
3.3.1. Network Architecture
3.3.2. Objective Functions
3.3.3. Optimization Algorithm of Latent Distribution Parameters
3.3.4. Training Process
3.4. Compositive Similarity Metric
3.5. External Clustering Indexes
3.6. Experimental Setup and Running Environment
4. Results
4.1. Clustering Results
4.2. Clustering Results from Different Similarity Metrics
4.3. W-SLOGAN’s Training
4.3.1. Impact of the Number of Iterations in Training W-SLOGAN
4.3.2. Reproducibility of the Results
4.4. Exploratory EEG Analysis
4.4.1. Discovery of Different Types of Epileptiform Waves
4.4.2. Multiple Labels of EEG Data
5. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Chakravarthi, B.; Ng, S.-C.; Ezilarasan, M.R.; Leung, M.-F. EEG-based emotion recognition using hybrid CNN and LSTM classification. Front. Comput. Neurosci. 2022, 16, 1019776. [Google Scholar] [CrossRef] [PubMed]
- Luo, T.-J. Dual regularized spatial-temporal features adaptation for multi-source selected cross-subject motor imagery EEG classification. Expert Syst. Appl. 2024, 255, 124673. [Google Scholar] [CrossRef]
- Hassan, A.R.; Subasi, A.; Zhang, Y.C. Epilepsy seizure detection using complete ensemble empirical mode decomposition with adaptive noise. Knowl.-Based Syst. 2020, 191, 105333. [Google Scholar] [CrossRef]
- Foong, R.; Ang, K.K.; Quek, C.; Guan, C.T.; Phua, K.S.; Kuah, C.W.K.; Deshmukh, V.A.; Yam, L.H.L.; Rajeswaran, D.K.; Tang, N.; et al. Assessment of the Efficacy of EEG-Based MI-BCI With Visual Feedback and EEG Correlates of Mental Fatigue for Upper-Limb Stroke Rehabilitation. IEEE Trans. Biomed. Eng. 2020, 67, 2786–2795. [Google Scholar] [CrossRef]
- Yu, H.T.; Lei, X.Y.; Song, Z.X.; Liu, C.; Wang, J. Supervised Network-Based Fuzzy Learning of EEG Signals for Alzheimer’s Disease Identification. IEEE Trans. Fuzzy Syst. 2020, 28, 60–71. [Google Scholar] [CrossRef]
- Jayaram, V.; Widmann, N.; Förster, C.; Fomina, T.; Hohmann, M.; Hagen, J.M.V.; Synofzik, M.; Schölkopf, B.; Schöls, L.; Grosse-Wentrup, M. Brain-Computer Interfacing in Amyotrophic Lateral Sclerosis: Implications of a Resting-State EEG Analysis. In Proceedings of the 37th Annual International Conference of the IEEE-Engineering-in-Medicine-and-Biology-Society (EMBC), Milan, Italy, 25–29 August 2015; IEEE: Milan, Italy, 2015; pp. 6979–6982. [Google Scholar]
- Xia, K.J.; Ni, T.G.; Yin, H.S.; Chen, B. Cross-Domain Classification Model with Knowledge Utilization Maximization for Recognition of Epileptic EEG Signals. IEEE-Acm Trans. Comput. Biol. Bioinform. 2021, 18, 53–61. [Google Scholar] [CrossRef]
- Dai, C.L.; Wu, J.; Monaghan, J.J.M.; Li, G.H.; Peng, H.; Becker, S.I.; McAlpine, D. Semi-Supervised EEG Clustering with Multiple Constraints. IEEE Trans. Knowl. Data Eng. 2023, 35, 8529–8544. [Google Scholar] [CrossRef]
- Pimentel, B.A.; de Carvalho, A. A Meta-learning approach for recommending the number of clusters for clustering algorithms. Knowl.-Based Syst. 2020, 195, 105682. [Google Scholar] [CrossRef]
- Deng, J.Z.; Guo, J.P.; Wang, Y. A Novel K-medoids clustering recommendation algorithm based on probability distribution for collaborative filtering. Knowl.-Based Syst. 2019, 175, 96–106. [Google Scholar] [CrossRef]
- Bouveyron, C.; Girard, S.; Schmid, C. High-dimensional data clustering. Comput. Stat. Data Anal. 2007, 52, 502–519. [Google Scholar] [CrossRef]
- Rodriguez, A.; Laio, A. Clustering by fast search and find of density peaks. Science 2014, 344, 1492–1496. [Google Scholar] [CrossRef] [PubMed]
- Gao, T.F.; Chen, D.; Tang, Y.B.; Du, B.; Ranjan, R.; Zomaya, A.Y.; Dustdar, S. Adaptive density peaks clustering: Towards exploratory EEG analysis. Knowl.-Based Syst. 2022, 240, 108123. [Google Scholar] [CrossRef]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems; Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2014; pp. 2672–2680. [Google Scholar]
- Ben-Yosef, M.; Weinshall, D.J. Gaussian Mixture Generative Adversarial Networks for Diverse Datasets, and the Unsupervised Clustering of Images. arXiv 2018, arXiv:1808.10356. [Google Scholar]
- Gurumurthy, S.; Sarvadevabhatla, R.K.; Babu, R.V. DeLiGAN: Generative Adversarial Networks for Diverse and Limited Data. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA; pp. 4941–4949. [Google Scholar]
- Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, I.; Abbeel, P. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. In Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS), Barcelona, Spain, 5–10 December 2016. [Google Scholar]
- Larsen, A.B.L.; Sonderby, S.K.; Larochelle, H.; Winther, O. Autoencoding beyond pixels using a learned similarity metric. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 20–22 June 2016. [Google Scholar]
- Hwang, U.; Kim, H.; Jung, D.; Jang, H.; Lee, H.; Yoon, S. Stein Latent Optimization for Generative Adversarial Networks. arXiv 2021, arXiv:2106.05319. [Google Scholar]
- Andrzejak, R.G.; Lehnertz, K.; Mormann, F.; Rieke, C.; David, P.; Elger, C.E. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Phys. Rev. E 2001, 64, 061907. [Google Scholar] [CrossRef]
- Rashed-Al-Mahfuz, M.; Moni, M.A.; Uddin, S.; Alyami, S.A.; Summers, M.A.; Eapen, V. A Deep Convolutional Neural Network Method to Detect Seizures and Characteristic Frequencies Using Epileptic Electroencephalogram (EEG) Data. IEEE J. Transl. Eng. Health Med. 2021, 9, 2000112. [Google Scholar] [CrossRef]
- Li, M.Y.; Chen, W.Z.; Zhang, T. Automatic epilepsy detection using wavelet-based nonlinear analysis and optimized SVM. Biocybern. Biomed. Eng. 2016, 36, 708–718. [Google Scholar] [CrossRef]
- Bernabei, B.M.; Li, A.; Revell, A.Y.; Smith, R.J.; Gunnarsdottir, K.M.; Ong, I.Z.; Davis, K.A.; Sinha, N.; Sarma, S.; Litt, B. HUP iEEG Epilepsy Dataset. 2023. Available online: https://openneuro.org/datasets/ds004100/versions/1.1.3 (accessed on 2 May 2024).
- Albaqami, H.; Hassan, G.M.; Datta, A. MP-SeizNet: A multi-path CNN Bi-LSTM Network for seizure-type classification using EEG. Biomed. Signal Process. Control. 2023, 84, 104780. [Google Scholar] [CrossRef]
- Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
- Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A. Improved Training of Wasserstein GANs. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
- Yang, X.L.; Liu, L.P.; Li, Z.W.; Xia, Y.X.; Fan, Z.P.; Zhou, J.Y. Semi-Supervised Seizure Prediction Model Combining Generative Adversarial Networks and Long Short-Term Memory Networks. Appl. Sci. 2023, 13, 11631. [Google Scholar] [CrossRef]
- Wei, Z.C.; Zou, J.Z.; Zhang, J.; Xu, J.Q. Automatic epileptic EEG detection using convolutional neural network with improvements in time-domain. Biomed. Signal Process. Control. 2019, 53, 101551. [Google Scholar] [CrossRef]
- Kingma, D.P.; Welling, M. Auto-encoding variational bayes. arXiv 2013, arXiv:1312.6114. [Google Scholar]
- Truong, N.D.; Kuhlmann, L.; Bonyadi, M.R.; Querlioz, D.; Zhou, L.P.; Kavehei, O.M. Epileptic Seizure Forecasting with Generative Adversarial Networks. IEEE Access 2019, 7, 143999–144009. [Google Scholar] [CrossRef]
- Hubert, L.; Arabie, P. Comparing partitions. J. Classif. 1985, 2, 193–218. [Google Scholar] [CrossRef]
- Zhang, H.; Ho, T.B.; Zhang, Y.; Lin, M.-S. Unsupervised feature extraction for time series clustering using orthogonal wavelet transform. Informatica 2006, 30, 305–319. [Google Scholar]
- McCallan, N.; Davidson, S.; Ng, K.Y.; Biglarbeigi, P.; Finlay, D.; Lan, B.L.; McLaughlin, J. Epileptic multi-seizure type classification using electroencephalogram signals from the Temple University Hospital Seizure Corpus: A review. Expert Syst. Appl. 2023, 234, 121040. [Google Scholar] [CrossRef]
Dataset | Set A | Set B | Set C | Set D | Set E
---|---|---|---|---|---
Subjects | Healthy volunteers | Healthy volunteers | Epileptic patients | Epileptic patients | Epileptic patients
Patient state | Eyes open | Eyes closed | Inter-ictal | Inter-ictal | Ictal
Electrode types | Surface | Surface | Intracranial | Intracranial | Intracranial
Electrode placement | International 10/20 system | International 10/20 system | Opposite the epileptogenic zone | Within the epileptogenic zone | Within the epileptogenic zone
No. of samples | 100 | 100 | 100 | 100 | 100
Sampling points | 4096 | 4096 | 4096 | 4096 | 4096
Patients | Gender | Age | Target | Therapy | Electrode |
---|---|---|---|---|---|
HUP65 | M | 36 | Temporal | Resection | RG 11-Ref |
HUP88 | F | 35 | Temporal | Resection | LMST 02-Ref |
HUP89 | M | 29 | Temporal | Resection | AD 04-Ref |
Network | Layer (Type) | Maps | Size | Kernel Size | Activation | BN a Layer |
---|---|---|---|---|---|---|
Generator | Input_1 | None | 100 | None | None | None |
Dense | None | 8192 | None | None | None | |
Reshape | 512 | 4 × 4 | None | ReLU | yes | |
ConvTranspose2D | 256 | 8 × 8 | 5 × 5 | ReLU | yes | |
ConvTranspose2D | 128 | 16 × 16 | 5 × 5 | ReLU | yes | |
ConvTranspose2D | 64 | 32 × 32 | 5 × 5 | ReLU | yes | |
ConvTranspose2D | 3 | 64 × 64 | 5 × 5 | ReLU | yes | |
Discriminator | Input_2 | 3 | 64 × 64 | None | None | None |
Conv2D | 64 | 32 × 32 | 5 × 5 | LeakyReLU | None | |
Conv2D | 128 | 16 × 16 | 5 × 5 | LeakyReLU | None | |
Conv2D | 256 | 8 × 8 | 5 × 5 | LeakyReLU | None | |
Conv2D | 512 | 4 × 4 | 5 × 5 | LeakyReLU | None | |
Flatten | None | 8192 | None | None | None | |
Dense | None | 1 | None | None | None | |
Encoder | Input_3 | 3 | 64 × 64 | None | None | None |
Conv2D | 64 | 32 × 32 | 5 × 5 | ReLU | yes | |
Conv2D | 128 | 16 × 16 | 5 × 5 | ReLU | yes | |
Conv2D | 256 | 8 × 8 | 5 × 5 | ReLU | yes | |
Conv2D | 512 | 4 × 4 | 5 × 5 | ReLU | yes | |
GAP b | None | 512 | None | None | None | |
Dense | None | 100 | None | None | None
a BN: batch normalization layer. b GAP: global average pooling.
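A compact PyTorch sketch of the generator, discriminator, and encoder following the layer layout of Table 3. The strides, paddings, and the LeakyReLU slope are assumptions chosen so that the feature-map sizes match the table; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator per Table 3: 100-D latent vector -> 3 x 64 x 64 scalogram."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 512 * 4 * 4)
        def up(cin, cout):
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, 5, stride=2, padding=2, output_padding=1),
                nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.net = nn.Sequential(
            nn.BatchNorm2d(512), nn.ReLU(inplace=True),   # Reshape row: BN + ReLU
            up(512, 256), up(256, 128), up(128, 64), up(64, 3))
    def forward(self, z):
        h = self.fc(z).view(-1, 512, 4, 4)
        return self.net(h)

class Discriminator(nn.Module):
    """Critic per Table 3: four 5x5 strided convolutions with LeakyReLU and no BN."""
    def __init__(self):
        super().__init__()
        def down(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 5, stride=2, padding=2),
                                 nn.LeakyReLU(0.2, inplace=True))
        self.features = nn.Sequential(down(3, 64), down(64, 128),
                                      down(128, 256), down(256, 512))
        self.fc = nn.Linear(512 * 4 * 4, 1)
    def forward(self, x):
        f = self.features(x)                 # 512 x 4 x 4 deep feature maps
        return self.fc(f.flatten(1)), f      # critic score and DFM

class Encoder(nn.Module):
    """Encoder per Table 3: strided convolutions with BN/ReLU, GAP, 100-D output."""
    def __init__(self, latent_dim=100):
        super().__init__()
        def down(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 5, stride=2, padding=2),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.features = nn.Sequential(down(3, 64), down(64, 128),
                                      down(128, 256), down(256, 512))
        self.fc = nn.Linear(512, latent_dim)
    def forward(self, x):
        h = self.features(x).mean(dim=(2, 3))   # global average pooling over 4 x 4
        return self.fc(h)
```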
Models | Reparameterization Form | Trainable Parameters | Characteristics of Gradient Estimation |
---|---|---|---|
AEVB [29], DeLiGAN [16], GM-GAN [15] | Explicit | | Unbiased; high variance
SLOGAN [19] | Implicit | | Unbiased; low variance
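A worked example of the explicit reparameterization form listed for AEVB, DeLiGAN, and GM-GAN: a latent vector is drawn as z = μ_k + σ_k ⊙ ε, so gradients reach the trainable mixture parameters through the sample. The implicit, Stein-based update used by SLOGAN is not reproduced here.

```python
import torch

def sample_explicit(mu, log_sigma, component):
    """Explicit reparameterization z = mu_k + sigma_k * eps.

    mu, log_sigma: (K, D) trainable tensors; component: index k of the chosen Gaussian.
    Gradients flow into mu[k] and log_sigma[k] because eps is sampled independently.
    """
    eps = torch.randn_like(mu[component])
    return mu[component] + torch.exp(log_sigma[component]) * eps

K, D = 3, 100
mu = torch.zeros(K, D, requires_grad=True)
log_sigma = torch.zeros(K, D, requires_grad=True)
z = sample_explicit(mu, log_sigma, component=1)
z.sum().backward()                      # gradients reach mu and log_sigma
print(mu.grad[1].abs().sum() > 0)       # tensor(True)
```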
Group | Set | Description | # Class | # Cluster | Class Ratio |
---|---|---|---|---|---|
CD_E | Sets C and D versus Set E | Inter-ictal and ictal | 2 | 2 | 3000:1500 |
AB_CD | Sets A and B versus Sets C and D | Healthy and inter-ictal | 2 | 2 | 3000:3000 |
ABCD_E | Sets A, B, C, and D versus Set E | Non-seizure and seizure | 2 | 2 | 6000:1500 |
AB_CD_E | Sets A and B versus Sets C and D versus Set E | Healthy, inter-ictal, and ictal | 3 | 3 | 3000:3000:1500 |
Case | Description | # Class | # Cluster | # Pre-Ictal | # Ictal | # Inter-Ictal |
---|---|---|---|---|---|---|
HUP65 | pre-ictal, inter-ictal, and ictal | 3 | 3 | 348 | 592 | 251 |
HUP88 | pre-ictal, inter-ictal, and ictal | 3 | 3 | 348 | 592 | 724 |
HUP89 | pre-ictal, inter-ictal, and ictal | 3 | 3 | 348 | 592 | 252 |
Parameters | Initialization | Optimizer | Learning Rate |
---|---|---|---|
Generator | Random | Adam | 0.0001 |
Discriminator | Random | Adam | 0.0004 |
Encoder | Random | Adam | 0.0001 |
SGD | 0.04 | ||
SGD | 0.004 | ||
SGD | 0.004 | ||
10 | None | None | |
1 | None | None | |
Batch size | 64 | ||
Iterations | 18,000 |
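A minimal sketch wiring up the optimizers and learning rates of Table 7. The network stand-ins and the assignment of the SGD rates to specific latent-distribution parameters are assumptions, since the parameter names in the first column were lost in the extracted table.

```python
import torch
import torch.nn as nn

# Stand-ins for the generator, discriminator, and encoder (see the sketch after Table 3).
G, D, E = nn.Linear(100, 8192), nn.Linear(8192, 1), nn.Linear(8192, 100)
mu = torch.zeros(3, 100, requires_grad=True)          # latent Gaussian means (prototypes)

opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)     # Table 7: Generator, Adam, 0.0001
opt_D = torch.optim.Adam(D.parameters(), lr=4e-4)     # Table 7: Discriminator, Adam, 0.0004
opt_E = torch.optim.Adam(E.parameters(), lr=1e-4)     # Table 7: Encoder, Adam, 0.0001
opt_mu = torch.optim.SGD([mu], lr=0.04)               # one of the SGD rows (assignment assumed)

batch_size, n_iterations = 64, 18_000                 # Table 7: batch size and iterations
```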
Group | Criteria | Latent Representation Similarity | Latent Representation + Image Similarity | Latent Representation + Image + DFM Similarity |
---|---|---|---|---|
CD_E (# Cluster: 2, # Class: 2) | Purity | 0.9033 ± 0.0200 | 0.9620 ± 0.0030 | 0.9633 ± 0.0015
ARI | 0.6410 ± 0.0680 | 0.8518 ± 0.0115 | 0.8568 ± 0.0056 | |
NMI | 0.5704 ± 0.0472 | 0.7510 ± 0.0137 | 0.7592 ± 0.0089 | |
AB_CD (# Cluster: 2, # Class: 2) | Purity | 0.7798 ± 0.0147 | 0.7778 ± 0.0161 | 0.7768 ± 0.0172
ARI | 0.3139 ± 0.0335 | 0.3096 ± 0.0365 | 0.3076 ± 0.0389 | |
NMI | 0.2503 ± 0.0265 | 0.2475 ± 0.0277 | 0.2466 ± 0.0288 | |
ABCD_E (# Cluster: 2, # Class: 2) | Purity | 0.9494 ± 0.0043 | 0.9644 ± 0.0081 | 0.9638 ± 0.0089
ARI | 0.7694 ± 0.0199 | 0.8382 ± 0.0377 | 0.8354 ± 0.0412 | |
NMI | 0.6396 ± 0.0199 | 0.7199 ± 0.0408 | 0.7162 ± 0.0442 | |
AB_CD_E (# Cluster: 3, # Class: 3) | Purity | 0.8925 ± 0.0048 | 0.8977 ± 0.0032 | 0.9015 ± 0.0020
ARI | 0.6882 ± 0.0124 | 0.7003 ± 0.0088 | 0.7102 ± 0.0055 | |
NMI | 0.6341 ± 0.0125 | 0.6491 ± 0.0090 | 0.6613 ± 0.0031 | |
Avg Purity | 0.8813 | 0.9005 | 0.9014
Avg Purity Rank | 2.5 | 1.75 | 1.75
# Best Purity | 1 | 1 | 2
Avg ARI | 0.6031 | 0.6750 | 0.6775
Avg ARI Rank | 2.5 | 1.75 | 1.75
# Best ARI | 1 | 1 | 2
Avg NMI | 0.5236 | 0.5919 | 0.5958
Avg NMI Rank | 2.5 | 1.75 | 1.75
# Best NMI | 1 | 1 | 2
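A short sketch of the three external indexes reported in Tables 8 and 9, computed with scikit-learn. Purity is derived from the contingency matrix because scikit-learn provides no built-in purity function; the toy labels below are placeholders.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score
from sklearn.metrics.cluster import contingency_matrix

def purity_score(labels_true, labels_pred):
    """Purity: fraction of samples assigned to the majority class of their cluster."""
    m = contingency_matrix(labels_true, labels_pred)
    return m.max(axis=0).sum() / m.sum()

# Example with dummy labels (ground-truth classes vs. cluster assignments)
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 0])
print(purity_score(y_true, y_pred),
      adjusted_rand_score(y_true, y_pred),
      normalized_mutual_info_score(y_true, y_pred))
```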
Case | Criteria | Latent Representation Similarity | Latent Representation + Image Similarity | Latent Representation + Image + DFM Similarity |
---|---|---|---|---|
HUP65 (# Cluster: 3, # Class: 3) | Purity | 0.7834 ± 0.0086 | 0.8013 ± 0.0152 | 0.8044 ± 0.0195
ARI | 0.4774 ± 0.0067 | 0.5368 ± 0.0222 | 0.5421 ± 0.0336 | |
NMI | 0.4893 ± 0.0096 | 0.5356 ± 0.0064 | 0.5375 ± 0.0148 | |
HUP88 (# Cluster: 3, # Class: 3) | Purity | 0.9804 ± 0.0042 | 0.9982 ± 0.0017 | 0.9982 ± 0.0015
ARI | 0.9471 ± 0.0144 | 0.9956 ± 0.0041 | 0.9956 ± 0.0036 | |
NMI | 0.9180 ± 0.0175 | 0.9904 ± 0.0078 | 0.9905 ± 0.0074 | |
HUP89 (# Cluster: 3, # Class: 3) | Purity | 0.8249 ± 0.0039 | 0.8333 ± 0.0024 | 0.8686 ± 0.0020
ARI | 0.5483 ± 0.0076 | 0.5627 ± 0.0046 | 0.6344 ± 0.0054 | |
NMI | 0.5408 ± 0.0069 | 0.5460 ± 0.0036 | 0.6253 ± 0.0052 | |
Avg Purity | 0.8629 | 0.8776 | 0.8904
Avg Purity Rank | 3.0 | 1.6667 | 1.0
# Best Purity | 0 | 1 | 3
Avg ARI | 0.6576 | 0.6984 | 0.7240
Avg ARI Rank | 3.0 | 1.6667 | 1.0
# Best ARI | 0 | 1 | 3
Avg NMI | 0.6494 | 0.6907 | 0.7178
Avg NMI Rank | 3.0 | 2.0 | 1.0
# Best NMI | 0 | 0 | 3
Model | |||
---|---|---|---|
0.3869 | 0.3969 | 0.373 | |
0.448 | 0.439 | 0.4627 | |
0.1651 | 0.1641 | 0.1643 |