Yunjun Nam
  • Niiza, Japan

Common spatial pattern (CSP) is a popular feature extraction method for electroencephalogram (EEG) classification. Most existing CSP-based methods compute covariance matrices on a subject-by-subject basis, so inter-subject information is neglected. In this paper we present modifications of CSP for subject-to-subject transfer, where we exploit a linear combination of covariance matrices of the subjects under consideration. We develop two methods
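The sketch below illustrates the general idea of computing CSP filters from a weighted combination of per-subject covariance matrices; it is not the authors' exact formulation, and the function names, weighting scheme, and number of filters are assumptions made for illustration.

```python
# Illustrative sketch: CSP filters computed from a weighted sum of subjects'
# class covariance matrices, so that other subjects' data can inform the
# spatial filters for a target subject. Weights and filter count are
# hypothetical choices; the paper describes specific weighting methods.
import numpy as np
from scipy.linalg import eigh


def class_covariance(trials):
    """Average normalized spatial covariance over trials.

    trials: array of shape (n_trials, n_channels, n_samples)
    """
    covs = []
    for x in trials:
        c = x @ x.T
        covs.append(c / np.trace(c))
    return np.mean(covs, axis=0)


def composite_csp(class1_trials_per_subject, class2_trials_per_subject,
                  weights, n_filters=3):
    """CSP from linearly combined covariance matrices.

    class1_trials_per_subject / class2_trials_per_subject: lists with one
    trial array per subject; weights: per-subject mixing coefficients.
    """
    c1 = sum(w * class_covariance(t)
             for w, t in zip(weights, class1_trials_per_subject))
    c2 = sum(w * class_covariance(t)
             for w, t in zip(weights, class2_trials_per_subject))
    # Standard CSP step: generalized eigenvalue problem C1 w = lambda (C1 + C2) w.
    eigvals, eigvecs = eigh(c1, c1 + c2)
    order = np.argsort(eigvals)
    # Take filters from both ends of the spectrum (most discriminative directions).
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return eigvecs[:, picks]  # shape: (n_channels, 2 * n_filters)
```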
Artifacts are electrical activities that are detected along the scalp by electroencephalography (EEG) but originate from a non-cerebral source, and they often need to be eliminated before further processing of EEG signals. Glossokinetic potentials are artifacts related to tongue movements. In this paper we use these glossokinetic artifacts (instead of eliminating them) to automatically detect and classify tongue positions, which is important for developing a tongue-machine interface. We observe that, with a specific selection of a few electrode positions, glossokinetic potentials show contralateral patterns, so that the magnitude of the potentials is linearly proportional to the tongue position as it flicks from the left to the right inside of the cheek. We design a simple linear model based on principal component analysis (PCA) to translate glossokinetic potentials into tongue positions. Experiments on cursor control confirm the validity of our method for tongue position detection using glossokinetic potentials.
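A minimal sketch of such a PCA-based linear readout is shown below, assuming the glossokinetic potentials from a few pre-selected electrodes have already been extracted and band-passed; the class name, preprocessing, and scaling are hypothetical and not taken from the paper.

```python
# Illustrative sketch: project selected electrodes' glossokinetic potentials
# onto their first principal component and read the score as a (proportional)
# left-right tongue position, e.g. for driving a cursor.
import numpy as np


class TonguePositionDecoder:
    """Hypothetical PCA-based linear mapping from potentials to position."""

    def __init__(self):
        self.mean_ = None
        self.direction_ = None  # first principal component

    def fit(self, eeg):
        """eeg: (n_samples, n_selected_channels) potentials from electrodes
        showing the contralateral glossokinetic pattern."""
        self.mean_ = eeg.mean(axis=0)
        centered = eeg - self.mean_
        # First principal component via SVD; it captures the dominant
        # left-right variation of the contralateral potentials.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        self.direction_ = vt[0]
        return self

    def transform(self, eeg):
        """Return the 1-D projection, proportional to tongue position."""
        return (eeg - self.mean_) @ self.direction_


# Example usage with random data standing in for preprocessed EEG:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    calibration = rng.standard_normal((500, 4))   # 4 selected channels
    decoder = TonguePositionDecoder().fit(calibration)
    positions = decoder.transform(rng.standard_normal((10, 4)))
    print(positions.shape)  # (10,) one position estimate per sample
```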