
CN118094317A - A motor imagery EEG signal classification system based on TimesNet and convolutional neural network - Google Patents

A motor imagery EEG signal classification system based on TimesNet and convolutional neural network

Info

Publication number
CN118094317A
Authority
CN
China
Prior art keywords
module
timesnet
neural network
dimensional
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410184880.6A
Other languages
Chinese (zh)
Inventor
姜彬
任福戴
程思宇
廖茂宇
刘愈涵
唐其楷
周炳君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Technology
Original Assignee
Chongqing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Technology filed Critical Chongqing University of Technology
Priority to CN202410184880.6A priority Critical patent/CN118094317A/en
Publication of CN118094317A publication Critical patent/CN118094317A/en
Pending legal-status Critical Current

Classifications

    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes, for noise prevention, reduction or removal
    • A61B5/7225 Details of analogue processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B5/725 Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, involving training the classification device
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06F2218/08 Feature extraction (pattern recognition for signal processing)
    • G06F2218/12 Classification; Matching
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Psychiatry (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Signal Processing (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Physiology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Psychology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Power Engineering (AREA)
  • Fuzzy Systems (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract


The present invention discloses a motor imagery EEG signal classification system based on TimesNet and a convolutional neural network, and relates to the technical field of human physiological signal decoding. The invention comprises an EEG signal preprocessing module, a training data preparation module, a neural network building module, and a model training and testing module. The EEG signal preprocessing module filters the original EEG signals in the dataset; the training data preparation module divides the preprocessed data into a training set, a validation set and a test set, and performs data cropping, normalization and input matrix construction; the network model construction module constructs a convolutional neural network model. The invention uses the TimesNet module to represent the time-series information of the EEG, so that the network model can take the time-series feature information of the EEG signal into account, and uses a channel attention mechanism to weight the feature matrix, highlighting high-contribution features and improving the classification accuracy of motor imagery EEG signals.

Description

Motor imagery electroencephalogram signal classification system based on TimesNet and convolutional neural network
Technical Field
The invention relates to the technical field of human physiological signal decoding, and in particular to a motor imagery electroencephalogram (EEG) signal classification system based on TimesNet and a convolutional neural network.
Background
Electroencephalogram (EEG) detection, as commonly understood, is the observation of brain-wave activity through electrodes placed on the scalp according to certain rules. The EEG is the overall reflection, on the surface of the cerebral cortex or scalp, of the electrophysiological activity of brain nerve cells. EEG signals are generated by thousands of neurons in the brain firing electrical signals simultaneously; the electrical activity between these neurons produces weak currents that can be captured by the electrodes.
EEG activity is divided into different frequency bands, corresponding to different brain activity states and to behavioral, functional or pathological differences:
The δ rhythm mainly lies in the 0.5 Hz to 3.5 Hz band. It is characteristic of deep sleep, of infancy when development is not yet mature, and of adults in extreme fatigue, coma or under anesthesia.
The θ rhythm mainly lies in the 3.5 Hz to 7.5 Hz band. This wave is especially prominent in adults experiencing frustration or depression and in psychiatric patients, and is thought to be associated with various disturbances of brain activity; it is also a major component of the EEG of adolescents, strengthens during sleep, and plays an important role in the brain electrical activity of infants and children.
The α rhythm mainly lies in the 7.5 Hz to 12.5 Hz band and is the basic rhythm of normal human brain waves; its frequency is quite constant in the absence of external stimulation.
The β rhythm mainly lies in the 12.5 Hz to 30.5 Hz band. This wave appears under nervous tension and emotional excitement or elation; when a person wakes from a nightmare, the original slow-wave rhythm is immediately replaced by this rhythm, most obviously in the central and frontal regions.
The characteristics of these frequency bands may be associated with different cognitive, emotional and brain functional states, but the specific associations and explanations are still under investigation. The extraction of EEG signals comprises the following steps: preparation, determining the electrode arrangement and sampling rate; signal acquisition, connecting the electrodes and acquiring signals with an EEG amplifier; signal preprocessing, filtering and artifact removal; signal analysis, in the time domain, frequency domain and time-frequency domain; and data interpretation and application, including event-related potentials, spectral analysis, and brain-computer interface applications.
The brain-computer interface (BCI) originated in the 1970s. Early BCIs were mainly used for medical services and were generally designed for severely impaired patients with nerve or muscle disabilities, for example brain-controlled wheelchairs, brain-controlled text input devices, brain-controlled prostheses and mechanical arms. With the development of artificial intelligence technology, deep learning has been widely applied in industries such as bioinformatics, engineering and intelligent manufacturing. In daily life, machines are gradually becoming family members and partners, possess skills for executing daily tasks, and can complete basic interaction with robots through instruction control. The improvement of computing capability has greatly promoted the development of deep learning, so that various human-machine interaction modes have appeared, such as computer vision, natural speech, sensor control and physiological signal control.
The EEG signal contains abundant physiological information about nerve cells. Clinically, it presents disease-related physiological information for doctors to analyze, can provide a basis for diagnosing certain brain diseases, and at the same time provides a safe and effective means for treating certain brain diseases. EEG has wide clinical application and is an important auxiliary means for the diagnosis of psychosis, epilepsy, brain trauma, sleep disorders and early encephalitis. In the field of biological signal control, brain-computer interface (BCI) technology can be realized through EEG signals: using the BCI as a medium, the differences in brain electrical activity during sensory, motor or cognitive behavior are exploited to achieve motor control, operating exoskeleton machinery or intelligent devices. For cost and portability reasons, such BCIs typically acquire EEG signals using non-invasive methods.
Motor imagery is a common EEG research paradigm. Its physiological basis is that limb movement induces energy changes in the motor rhythms of the sensorimotor areas of the brain; this phenomenon occurs during actual movement, and subjects with normal motor function also produce it while imagining movement.
The conventional methods for motor imagery classification mainly include the common spatial pattern (CSP) method. CSP is an algorithm for extracting spatial-filtering features under a two-class task, and can extract the spatial distribution components of each class from multi-channel brain-computer interface data. Its basic principle is to use matrix diagonalization to find a group of optimal spatial filters for projection, maximizing the difference in variance between the two classes of signals and thereby obtaining feature vectors with high discriminability. However, CSP ignores time-frequency features and focuses only on spatial features, making it susceptible to the non-stationarity of EEG signals and prone to overfitting on small datasets.
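For reference, the CSP baseline described above can be sketched in NumPy as follows. This is a minimal sketch under stated assumptions: the trial layout, the trace normalization of the covariances, and the choice of m filter pairs are illustrative, not the patent's implementation.

```python
import numpy as np

def csp_filters(X1, X2, m=2):
    """Common spatial patterns for two-class EEG (minimal sketch).

    X1, X2: arrays of shape (trials, channels, samples), one per class.
    Returns 2*m spatial filters (rows) maximizing the variance contrast."""
    def avg_cov(X):
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]   # normalized covariance per trial
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Whiten the composite covariance, then diagonalize class 1 in the whitened space.
    evals, U = np.linalg.eigh(C1 + C2)
    P = np.diag(evals ** -0.5) @ U.T                      # whitening transform
    lam, B = np.linalg.eigh(P @ C1 @ P.T)                 # eigenvalues sorted ascending
    W = B.T @ P                                           # full spatial filter bank
    idx = np.concatenate([np.arange(m), np.arange(len(lam) - m, len(lam))])
    return W[idx]                                         # filters at both eigenvalue extremes
```

The filters at the two eigenvalue extremes give the largest variance ratio between the classes, which is exactly the diagonalization-based projection the paragraph describes.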
In recent years, the development of artificial intelligence has driven deep learning research in many disciplines. The signal-to-noise ratio of EEG is low, because the measurement of EEG signals is often disturbed by specific noise sources such as the environment, physiology and the recording devices; higher-amplitude contamination of this kind is known as 'artifacts'. To overcome these difficulties, deep learning methods are naturally applied to EEG processing; by allowing automatic end-to-end learning of the preprocessing, feature extraction and classification components, they can achieve better task performance. Representative deep learning methods include ATCNet, proposed by Altaheri et al. in "Physics-Informed Attention Temporal Convolutional Network for EEG-Based Motor Imagery Classification", and EEGNet, proposed by Lawhern, Vernon et al. in "EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces". Both methods adopt temporal convolution and spatial convolution; after features are obtained through the convolution operations, they are processed and then sent to a convolutional classifier to realize classification. However, because the existing neural network methods use a single temporal convolution kernel in the time-domain convolution, and because of the limitation of the one-dimensional kernel of the temporal convolution module, modeling can only be carried out between adjacent time points of a time series, and long-term dependencies cannot be captured;
there is therefore a need to propose a new solution to the above problems.
Disclosure of Invention
The invention aims to provide a motor imagery EEG signal classification system based on TimesNet and a convolutional neural network. It uses a TimesNet module to represent EEG time-series information, so that the network model can take the time-series feature information of the EEG signal into account, and uses a channel attention mechanism to weight the feature matrix, highlighting high-contribution features and improving the classification accuracy of motor imagery EEG signals.
In order to achieve the above purpose, the present invention provides the following technical solutions: a motor imagery electroencephalogram signal classification system based on TimesNet and a convolutional neural network comprises an electroencephalogram signal preprocessing module, a training data preparation module, a neural network building module and a model training and testing module;
The electroencephalogram signal preprocessing module is used for carrying out filtering processing on original electroencephalogram signals in a data set, and a band-pass filter with band-pass filtering frequency of 1 Hz-40 Hz is used for eliminating high-frequency noise and low-frequency noise in the original electroencephalogram signals;
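The band-pass step can be illustrated with a minimal zero-phase FFT-mask sketch. This is an illustrative assumption: a practical pipeline would typically use a Butterworth or FIR filter, and the function name and test signal are not from the patent.

```python
import numpy as np

def bandpass_fft(x, fs=250.0, lo=1.0, hi=40.0):
    """Zero out spectral components outside [lo, hi] Hz (ideal band-pass sketch)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0.0        # suppress low- and high-frequency noise
    return np.fft.irfft(X, n=x.size)

t = np.arange(0, 2.0, 1.0 / 250.0)              # 2 s at 250 Hz
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)  # mu band + line noise
clean = bandpass_fft(raw)                       # 10 Hz survives, 60 Hz is removed
```

The 10 Hz component lies inside the 1 Hz to 40 Hz pass band and is preserved, while the 60 Hz interference is eliminated.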
The training data preparation module is used for dividing the preprocessed data into a training set, a verification set and a test set, wherein 40% of the preprocessed data is divided into the training set, 10% of the preprocessed data is divided into the verification set, 50% of the preprocessed data is divided into the test set, and data interception, normalization and construction of an input matrix are carried out;
the network model construction module is used for constructing a convolutional neural network model;
The model training, verifying and testing module is used for inputting the data processed by the training data preparation module into the convolutional neural network for training and parameter updating, and completing performance testing by using the testing set.
Preferably, the convolutional neural network model is used for extracting and classifying characteristics of an input three-dimensional brain electrical signal matrix;
the convolutional neural network model comprises a feature extraction module and a fully-connected classification module;
The characteristic extraction module is used for automatically extracting characteristics of the input three-dimensional brain electrical signals through a convolutional neural network.
The fully-connected classification module is used for classifying and judging the motor imagery category corresponding to the input three-dimensional brain electrical matrix of the convolutional neural network model.
Preferably, the feature extraction module comprises a TimesNet module, a space-time convolution module and a channel attention module;
the TimesNet module is used for realizing better characteristic characterization on the input matrix;
the space-time convolution module is used for extracting the time characteristics and the space characteristics of the output characteristics of the TimesNet module;
the channel attention module is used for distributing weights to all channels of the output characteristics of the space-time convolution module.
Preferably, the TimesNet module converts the one-dimensional time-series feature into two-dimensional tensors for analysis to obtain a better time-series representation. The TimesNet module comprises a TimesBlock module; TimesBlock extracts the periods of the input one-dimensional time-series data. For a one-dimensional time series X1D with time length T and channel dimension C, the periodicity can be calculated from the fast Fourier transform (Fast Fourier Transform, FFT) along the time dimension, namely:
A = Avg(Amp(FFT(X1D))),
{f1, …, fk} = arg Topk(A), pi = ceil(T / fi), i = 1, …, k,
where A represents the intensity of each frequency component of X1D; the k frequencies {f1, …, fk} with the greatest intensity correspond to the k most significant period lengths {p1, …, pk}. The above process is abbreviated as:
A, {f1, …, fk}, {p1, …, pk} = Period(X1D).
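The period-extraction step above (FFT amplitude, averaging over channels, arg Topk) can be sketched in NumPy as follows; the function name and the floor division used for the period lengths are illustrative assumptions.

```python
import numpy as np

def period(x_1d, k=2):
    """Pick the k dominant periods of x_1d (shape (T, C)) from the FFT amplitude."""
    amp = np.abs(np.fft.rfft(x_1d, axis=0)).mean(axis=-1)  # A = Avg(Amp(FFT(X1D)))
    amp[0] = 0.0                                           # ignore the DC component
    freqs = np.argsort(amp)[-k:][::-1]                     # arg Topk(A)
    periods = x_1d.shape[0] // freqs                       # p_i from T / f_i
    return amp, freqs, periods

T, C = 200, 3
t = np.arange(T)
x = np.stack([np.sin(2 * np.pi * 8 * t / T)] * C, axis=-1)  # 8 cycles -> period 25
A, f, p = period(x, k=1)
```

For the toy signal with 8 cycles over 200 samples, the dominant frequency index is 8 and the corresponding period length is 25.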
Next, the sequence is converted into two-dimensional tensors to represent the two-dimensional time-series variations: the original sequence is stacked based on each selected period, which can be formulated as:
X2D(i) = Reshape(pi, fi)(Padding(X1D)), i = 1, …, k,
where Padding() appends zeros at the end of the sequence so that the sequence length is divisible by pi. Through this operation we obtain a set of two-dimensional tensors {X2D(1), …, X2D(k)}, where X2D(i) corresponds to the two-dimensional time-series variation dominated by period pi.
For each two-dimensional tensor, a classical Inception block can be used to extract the two-dimensional time-series variation representation, namely:
X'2D(i) = Inception(X2D(i)).
For the extracted time-series features, we convert them back into one-dimensional space for information aggregation:
X'1D(i) = Trunc(Reshape(1, pi × fi)(X'2D(i))),
where Trunc() removes the zeros added by Padding().
Finally, the intensities of the corresponding frequencies are normalized by Softmax and used to weight and sum the one-dimensional representations, giving the final output:
A'(f1), …, A'(fk) = Softmax(A(f1), …, A(fk)),
X1D(out) = Σi A'(fi) × X'1D(i).
Through the above process, TimesNet completes a time-series modeling procedure that extracts two-dimensional time-series variations in multiple periods respectively and then fuses them adaptively.
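The final Softmax-weighted aggregation over the k period branches can be sketched as follows; the toy inputs are illustrative.

```python
import numpy as np

def aggregate(reps, amps):
    """Softmax-weight the k one-dimensional representations by frequency intensity."""
    w = np.exp(amps - amps.max())
    w = w / w.sum()                                # Softmax over the k periods
    return sum(wi * ri for wi, ri in zip(w, reps))

reps = [np.full((6, 2), 1.0), np.full((6, 2), 3.0)]       # two period branches, (T, C)
out = aggregate(reps, np.array([np.log(1.0), np.log(3.0)]))  # weights 0.25 and 0.75
```

Branches whose period explains more of the spectrum (larger amplitude A) contribute proportionally more to the fused output.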
Preferably, the space-time convolution module obtains the EEG features by passing the output of the TimesNet module through a temporal convolution layer and a spatial convolution layer; its application at least comprises the following:
The space-time convolution module connects, in sequence, the temporal convolution layer, the spatial convolution layer and the activation functions; the temporal convolution layer uses convolution kernels of size 1 x 125, and the number of kernels is 12;
In the spatial convolution layer, the size of the spatial convolution kernel is set to C x 1, where C equals the number of channels of the EEG data, and the number of spatial kernels is set to twice the number of kernels of the temporal convolution layer;
A square activation function is added after the temporal and spatial convolution layers, followed in series by an average pooling layer whose kernel is 1 x 37; a logarithmic activation function then adds a nonlinear factor to enhance the expressive capacity of the neural network; finally, the convolution module applies batch normalization, which accelerates network training and improves model accuracy.
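A shape-level sketch of this temporal-then-spatial convolution stack is given below. The weights are random stand-ins for learned kernels, the pooling stride of 37 is an assumption, and batch normalization is omitted; this is not the patent's trained model.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def spatio_temporal(x, f1=12, rng=np.random.default_rng(0)):
    """Shape-level sketch of the temporal -> spatial convolution stack.

    x: (C, T) one EEG sample, e.g. (22, 1000)."""
    C, T = x.shape
    Wt = rng.standard_normal((f1, 125)) * 0.01          # 12 temporal kernels, size 1 x 125
    Ws = rng.standard_normal((2 * f1, f1, C)) * 0.01    # 24 spatial kernels, size C x 1
    h = np.einsum('ctw,fw->fct',
                  sliding_window_view(x, 125, axis=1), Wt)  # (12, 22, 876)
    h = np.einsum('fct,gfc->gt', h, Ws) ** 2            # spatial conv + square activation
    n = (h.shape[1] // 37) * 37
    h = h[:, :n].reshape(h.shape[0], -1, 37).mean(-1)   # 1 x 37 average pooling, stride 37
    return np.log(h + 1e-6)                             # log activation (batch norm omitted)

feat = spatio_temporal(np.random.default_rng(1).standard_normal((22, 1000)))
```

For a 22 x 1000 input, the temporal convolution yields 876 time points, the spatial convolution collapses the 22 electrodes, and the 1 x 37 pooling leaves a 24 x 23 feature map.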
Preferably, in the channel attention module, the output features of the space-time convolution module are averaged per feature channel by a global average pooling layer to generate a vector with the same number of channels; a one-dimensional convolution then captures local cross-channel interaction information between each channel and its neighbors; each component of the resulting weight sequence is multiplied by the original data of the corresponding channel to obtain the feature map after the channel attention mechanism has acted, which serves as the output of the feature extraction module.
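This ECA-style channel attention can be sketched as follows; the uniform averaging kernel stands in for the learned one-dimensional convolution weights, and the sigmoid gate is an assumption consistent with ECA-Net.

```python
import numpy as np

def eca(feat, k=3):
    """Channel attention sketch: GAP -> 1-D conv over channels -> sigmoid gate.

    feat: (C, H, W) feature map."""
    s = feat.mean(axis=(1, 2))                      # global average pooling -> (C,)
    z = np.convolve(s, np.ones(k) / k, mode='same') # local cross-channel interaction
    w = 1.0 / (1.0 + np.exp(-z))                    # one weight per channel
    return feat * w[:, None, None]                  # reweight each channel of the map

out = eca(np.ones((24, 1, 23)))                     # same shape, channels rescaled
```

Channels with a larger pooled response (and larger neighboring responses) receive weights closer to 1, which is how high-contribution features are emphasized.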
Preferably, the feature vector output by the fully-connected classification module is processed by a softmax function to obtain the score of each class.
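The softmax scoring of the fully-connected output can be sketched as follows; the logits are illustrative.

```python
import numpy as np

def class_scores(logits):
    """Softmax over the fully-connected output, giving one score per class."""
    e = np.exp(logits - logits.max())   # subtract max for numerical stability
    return e / e.sum()

scores = class_scores(np.array([2.0, 1.0, 0.5, 0.5]))  # 4 motor imagery classes
```

The scores sum to one and the predicted class is the argmax.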
Preferably, in the model training and testing module, the electroencephalogram data of each tested person is independently trained and tested.
Compared with the prior art, the invention has the beneficial effects that:
1. By using the TimesNet module to obtain a better EEG signal representation and introducing a space-time convolution module to extract local temporal features and global spatial (i.e. channel) features of the EEG, the invention ensures the quality of the features extracted by the network model.
2. Through the TimesNet module, one-dimensional time-series data can be expanded into a two-dimensional space for analysis: stacking the one-dimensional series according to multiple periods yields multiple two-dimensional tensors, whose columns and rows respectively reflect the time-series variations within a period and between periods, thereby capturing two-dimensional time-series variations.
3. By introducing a channel attention mechanism to weight the contribution of each channel of the feature map, high-contribution features are strengthened and low-contribution features are weakened, which improves the network model's ability to capture effective features and further improves the classification accuracy of the network.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a system flow diagram of the present invention;
FIG. 2 is a diagram of a neural network model in accordance with the present invention;
FIG. 3 is a block diagram of TimesNet of the present invention;
FIG. 4 is a diagram of the ECA-Net attention module of the present invention;
FIG. 5 is a histogram of the accuracy of different methods on the BCICIV2a dataset;
FIG. 6 is a histogram of the average accuracy of the different methods on the BCICIV2b dataset;
FIG. 7 is a schematic diagram of ablation experiments on the TimesNet module and the ECANet structure of the present invention;
FIG. 8 is a visualization of the output features of subject S2 in BCICIV2a during the training and testing phases, with the TimesNet module added and with it removed;
FIG. 9 is a schematic diagram of visualization of spatial information of an electroencephalogram data input network through a convolution module;
FIG. 10 is a time domain activation graph of an electroencephalogram data network through a spatial convolution layer;
FIG. 11 is a class activation diagram showing the phenomenon of ERD/ERS as it occurs when left and right hand motions are envisioned.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
Referring to fig. 1-4, a motor imagery electroencephalogram classification system based on TimesNet and convolutional neural networks.
The present example uses the public BCI-Competition-IV-2a EEG dataset. The data comprise 25 channels, of which 22 are EEG and 3 are EOG; the data of the three EOG channels do not participate in classification. The data are sampled at 250 Hz. Four motor imagery classes were collected from 9 subjects over two sessions; each class was repeated 72 times per session, the two sessions together giving 576 trials, and each trial lasting 8 s.
The electroencephalogram signal preprocessing module is used for performing operations such as filtering on original electroencephalogram signals in the data set;
In the electroencephalogram signal preprocessing module, filtering means that the original EEG signals of the BCI-Competition-IV-2a dataset are band-pass filtered; a band-pass filter with a pass band of 1 Hz to 40 Hz eliminates the high-frequency and low-frequency noise in the original EEG signals;
the training data preparation module is used for dividing the preprocessed electroencephalogram data into a training set, a verification set and a test set, normalizing, enhancing data, constructing an input matrix and the like;
First, for the EEG signals of the BCI-Competition-IV-2a dataset processed by the EEG signal preprocessing module, covering the two sessions of 4-class motor imagery experiments of the 9 subjects, the first session is selected as the training and validation sets and the second session as the test set. The training set is normalized using its mean and standard deviation, and the same mean and variance of the training set are applied to the validation and test sets, so that they follow the probability distribution of the training data; this eliminates the scale differences between the EEG channels and makes network convergence more stable during training. The data of the training, validation and test sets are then cropped: the 3 s to 7 s window of the imagined-movement process is extracted from each 8 s trial, finally giving single samples of size 22 x 1000. Finally, each obtained 22 x 1000 EEG signal is expanded to size 1 x 22 x 1000 to construct the three-dimensional EEG matrix, which is input into the model training and testing module for training.
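The normalization, cropping and matrix-construction steps can be sketched as follows; the (N, 22, 2000) epoch layout for 8 s trials at 250 Hz is an assumed representation of the loaded data.

```python
import numpy as np

FS = 250  # sampling rate of the BCI-Competition-IV-2a recordings

def prepare(train, other):
    """Z-score with TRAIN statistics, crop the 3-7 s window, add a depth axis.

    train/other: (N, 22, 2000) arrays of full 8 s epochs (assumed layout)."""
    mu, sd = train.mean(), train.std()
    def prep(x):
        x = (x - mu) / sd                    # same statistics for val/test as for train
        x = x[:, :, 3 * FS:7 * FS]           # keep 3 s to 7 s -> 1000 samples per channel
        return x[:, None, :, :]              # (N, 22, 1000) -> (N, 1, 22, 1000)
    return prep(train), prep(other)

tr, te = prepare(np.random.default_rng(0).standard_normal((4, 22, 2000)),
                 np.random.default_rng(1).standard_normal((2, 22, 2000)))
```

Reusing the training-set statistics for the test set is what keeps the evaluation data on the training distribution, as the paragraph describes.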
The network model building module is used for building a convolutional neural network model;
the convolutional neural network (whose structure is shown in fig. 2) extracts features from the input three-dimensional electroencephalogram matrix and classifies them; it comprises a TimesNet module, a spatio-temporal feature extraction module and a fully connected classification module.
The TimesNet module comprises a TimesBlock. TimesBlock extracts the periods of the input one-dimensional time series: for a one-dimensional time series X_1D ∈ R^(T×C) with time length T and channel dimension C, the periodicity can be computed from the Fast Fourier Transform (FFT) along the time dimension, namely:
A = Avg(Amp(FFT(X_1D))),
{f_1, …, f_k} = arg Topk(A),
where A ∈ R^T represents the intensity of each frequency component of X_1D; the k frequencies {f_1, …, f_k} with the greatest intensity correspond to the k most significant period lengths {p_1, …, p_k}. We abbreviate the above process as:
A, {f_1, …, f_k}, {p_1, …, p_k} = Period(X_1D)
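The period-detection step above can be sketched in NumPy; a minimal sketch, with the DC component zeroed out (a standard detail not stated explicitly in the text) so that the constant offset is never selected as a "period":

```python
import numpy as np

np.random.seed(0)

def detect_periods(x_1d, k=3):
    """Return (A, top-k frequencies, top-k period lengths) of a (T, C) series.

    Implements A = Avg(Amp(FFT(X_1D))) and the arg-Topk selection sketched above.
    """
    T = x_1d.shape[0]
    amp = np.abs(np.fft.rfft(x_1d, axis=0))   # amplitude spectrum per channel
    A = amp.mean(axis=-1)                     # average intensity over channels
    A[0] = 0.0                                # ignore the DC component
    top_f = np.argsort(A)[-k:][::-1]          # k strongest frequencies
    periods = np.maximum(T // top_f, 1)       # p_i = T // f_i
    return A, top_f, periods

# A 1000-step, 4-channel series with a strong 100-step (f = 10) oscillation.
t = np.arange(1000)
x = np.sin(2 * np.pi * 10 * t / 1000)[:, None] + 0.1 * np.random.randn(1000, 4)
_, freqs, periods = detect_periods(x, k=2)
print(freqs[0], periods[0])  # 10 100
```

The dominant FFT bin (index 10) maps back to its period length p = T / f = 100, which is exactly the {f_i} → {p_i} correspondence above.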
Next, the one-dimensional series is converted into two-dimensional tensors that represent two-dimensional temporal variations: the original sequence is stacked according to each selected period, which can be formulated as:
X_2D^i = Reshape_{p_i, f_i}(Padding(X_1D)), i ∈ {1, …, k},
where Padding(·) appends zeros at the end so that the sequence length is divisible by p_i. Through this operation we obtain a set of two-dimensional tensors {X_2D^1, …, X_2D^k}, where X_2D^i corresponds to the two-dimensional temporal variation dominated by period p_i.
Each two-dimensional tensor has two-dimensional locality, so the classical Inception model can be used to extract the representation of the two-dimensional temporal variation, namely:
X̂_2D^i = Inception(X_2D^i).
The extracted temporal features are then converted back into one-dimensional space for information aggregation:
X̂_1D^i = Trunc(Reshape_{1, (p_i × f_i)}(X̂_2D^i)), i ∈ {1, …, k},
where Trunc(·) removes the zeros added by Padding(·).
Finally, the intensities of the corresponding frequencies are normalized with Softmax and used to weight and sum the one-dimensional representations, giving the final output:
Â_{f_1}, …, Â_{f_k} = Softmax(A_{f_1}, …, A_{f_k}),
X_1D^out = Σ_{i=1}^{k} Â_{f_i} × X̂_1D^i.
Through the above process, TimesBlock extracts two-dimensional temporal variations at multiple periods and then adaptively fuses them, completing the temporal-variation modeling; fig. 3 is a block diagram of the module.
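Under the equations above, one TimesBlock forward pass can be sketched in PyTorch as follows. This is a hypothetical simplification: a single shape-preserving `Conv2d` stands in for the multi-kernel Inception block, and the period list is supplied rather than recomputed:

```python
import torch
import torch.nn.functional as F

def times_block(x_1d, freqs, periods, conv2d):
    """One TimesBlock pass over a (B, T, C) tensor, following the equations above.

    `conv2d` stands in for the Inception block (any 2-D, shape-preserving module).
    """
    B, T, C = x_1d.shape
    # A = Avg(Amp(FFT(X_1D))): mean amplitude over batch and channels.
    amp = torch.abs(torch.fft.rfft(x_1d, dim=1)).mean(dim=(0, 2))
    outs, weights = [], []
    for f, p in zip(freqs, periods):
        pad = (-T) % p                                  # zeros so length % p == 0
        x = F.pad(x_1d, (0, 0, 0, pad))                 # Padding() on the time dim
        rows = (T + pad) // p
        x2d = x.reshape(B, rows, p, C).permute(0, 3, 1, 2)   # (B, C, f_i, p_i)
        y2d = conv2d(x2d)                               # Inception stand-in
        y1d = y2d.permute(0, 2, 3, 1).reshape(B, rows * p, C)[:, :T]  # Trunc()
        outs.append(y1d)
        weights.append(amp[f])
    w = torch.softmax(torch.stack(weights), dim=0)      # adaptive fusion weights
    return sum(w[i] * outs[i] for i in range(len(outs)))

conv = torch.nn.Conv2d(8, 8, kernel_size=3, padding=1)
out = times_block(torch.randn(2, 96, 8), freqs=[4, 8], periods=[24, 12], conv2d=conv)
print(out.shape)  # torch.Size([2, 96, 8])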
The spatio-temporal convolution module comprises a temporal convolution layer, a spatial convolution layer and activation functions connected in sequence. The temporal convolution layer uses kernels of size 1 x 125, with 12 kernels. In the spatial convolution layer, the kernel size is set to C x 1, where C equals the number of EEG channels, and the number of spatial kernels is set to twice the number of temporal kernels. A square activation function is added after the temporal and spatial convolution layers, followed by an average pooling layer with a kernel of size 1 x 37; a logarithmic activation function then introduces nonlinearity to enhance the expressive power of the neural network. The convolution module ends with batch normalization, which accelerates network training and improves model accuracy;
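The spatio-temporal convolution module can be sketched in PyTorch as below. This is a minimal sketch: the pooling stride and the small clamp before the logarithm are assumptions not fixed by the text (the text only gives the pooling kernel size), so the output width here is illustrative:

```python
import torch
import torch.nn as nn

class SpatioTemporalConv(nn.Module):
    """Temporal conv (12 kernels, 1x125) -> spatial conv (24 kernels, Cx1)
    -> square activation -> average pooling (1x37) -> log activation -> batch norm,
    as described above. Pooling stride (1, 15) is an assumption."""

    def __init__(self, n_channels=22):
        super().__init__()
        self.temporal = nn.Conv2d(1, 12, kernel_size=(1, 125))
        self.spatial = nn.Conv2d(12, 24, kernel_size=(n_channels, 1))
        self.pool = nn.AvgPool2d(kernel_size=(1, 37), stride=(1, 15))
        self.bn = nn.BatchNorm2d(24)

    def forward(self, x):                         # x: (batch, 1, 22, 1000)
        x = self.spatial(self.temporal(x))
        x = x ** 2                                # square activation
        x = self.pool(x)
        x = torch.log(torch.clamp(x, min=1e-6))   # logarithmic activation
        return self.bn(x)

m = SpatioTemporalConv()
y = m(torch.randn(4, 1, 22, 1000))
print(y.shape)  # torch.Size([4, 24, 1, 56]) with the assumed stride
```

The square/log pair after pooling approximates log band power, the same design choice used by ShallowConvNet.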
the channel attention module assigns weights to the channels of the output features of the spatio-temporal convolution module; fig. 4 shows the structure of this module;
First, the output features of the spatio-temporal convolution module are processed by a global average pooling layer to produce a one-dimensional sequence of length C2. Then, a one-dimensional convolution with kernel size 15 (padding 7) learns the weight of each channel from this sequence. Finally, the learned weights are passed through a sigmoid activation function to produce the final weight sequence. Each component of the final weight sequence is multiplied by the corresponding raw data of each channel to generate the feature map processed by the channel attention mechanism, which serves as the output feature of the feature extraction module. This channel attention technique lets the network automatically score the contribution of each channel in the feature map and weight the features accordingly, improving the network's ability to capture effective features;
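The ECA-style attention described above can be sketched in PyTorch; a minimal sketch following the stated steps (global average pool, 1-D convolution with kernel 15 and padding 7, sigmoid, channel-wise rescaling):

```python
import torch
import torch.nn as nn

class ECAAttention(nn.Module):
    """Efficient channel attention: squeeze -> 1-D conv across channels
    -> sigmoid -> channel-wise rescaling, as described above."""

    def __init__(self, kernel_size=15):
        super().__init__()
        # One shared 1-D filter slides over the channel dimension, so the
        # parameter count is independent of the number of channels.
        self.conv = nn.Conv1d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):                               # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                          # squeeze: (B, C)
        w = self.conv(w.unsqueeze(1)).squeeze(1)        # local cross-channel mix
        w = torch.sigmoid(w)                            # final weight sequence
        return x * w.unsqueeze(-1).unsqueeze(-1)        # reweight each channel

eca = ECAAttention()
feat = torch.randn(4, 24, 1, 43)
out = eca(feat)
print(out.shape)  # torch.Size([4, 24, 1, 43])
```

Because the convolution runs over neighboring channels only, the mechanism captures local cross-channel interaction without the dimensionality-reduction bottleneck of SE-style attention.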
The fully connected classification module reduces the dimensionality of the channel attention module's output features: the attention output (of size 24 x 1 x 43) is first flattened into a one-dimensional feature of length 1032 and fed into the fully connected network. There, the features pass sequentially through a fully connected layer (128 neurons), a nonlinear activation layer and a dropout layer (rate 0.5); a second fully connected layer (4 neurons) outputs a feature vector of length 4, which is fed into a Softmax layer. The Softmax function yields a vector of length 4 whose entries are the scores of the four imagined movements, corresponding to the 4 classes of the BCI-Competition-IV-2a dataset.
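The classification head above maps directly onto a small `nn.Sequential`; a minimal sketch:

```python
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Flatten(),               # 24 x 1 x 43 -> 1032
    nn.Linear(1032, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(128, 4),          # one score per motor-imagery class
    nn.Softmax(dim=1),
)

feat = torch.randn(8, 24, 1, 43)
probs = classifier(feat)
print(probs.shape)              # torch.Size([8, 4])
print(float(probs[0].sum()))    # ≈ 1.0 (softmax over the 4 classes)
```

Note that when training with `nn.CrossEntropyLoss`, the final Softmax layer is usually omitted, since that loss applies log-softmax internally; it is kept here to mirror the description.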
Normalization within the TimesNet module uses LayerNormalization, while the spatio-temporal convolution module and the fully connected module use BatchNormalization to speed up network convergence; the nonlinear activation layers all use the ReLU activation function to enhance the nonlinear expressive capability of the network.
The model training and testing module is used for inputting the data processed by the training data preparation module into the convolutional neural network model for training and parameter tuning, and completing performance testing by using the testing set.
The model training and testing module trains and tests the electroencephalogram data of each of the 9 subjects separately, with 1000 iteration epochs per subject. In each epoch, the training set is divided into batches of size 64; each batch is fed into the lightweight convolutional neural network to obtain a classification result, which is compared with the labels and scored with a cross-entropy loss. An Adam optimizer then back-propagates the loss to update the trainable parameters, with an initial learning rate of 0.0002 and an automatic cosine-annealing learning-rate schedule. In each epoch, after the whole training set has been processed, the validation set is fed into the convolutional neural network to compute the classification accuracy; the network hyperparameters are tuned to minimize the model loss, and the model parameters at that point are saved. Finally, the parameters with the minimum loss are evaluated on the test set, and the resulting test accuracy is the classification accuracy of the motor imagery electroencephalogram classification system based on the convolutional neural network.
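The per-subject training loop described above can be sketched as follows. This is a minimal sketch of the stated recipe (batches of 64, cross-entropy loss, Adam at 0.0002, cosine annealing); the toy stand-in model, tensor shapes and epoch count in the demo are illustrative only:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_subject(model, train_x, train_y, epochs=1000, batch_size=64,
                  lr=2e-4, device="cpu"):
    """Per-subject training: cross-entropy loss, Adam, cosine-annealing LR."""
    loader = DataLoader(TensorDataset(train_x, train_y),
                        batch_size=batch_size, shuffle=True)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    model.to(device).train()
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = criterion(model(xb.to(device)), yb.to(device))
            loss.backward()
            optimizer.step()
        scheduler.step()          # one cosine-annealing step per epoch
    return model

# Tiny smoke test with a linear stand-in model and 2 epochs.
toy = nn.Sequential(nn.Flatten(), nn.Linear(22 * 1000, 4))
x = torch.randn(16, 1, 22, 1000)
y = torch.randint(0, 4, (16,))
train_subject(toy, x, y, epochs=2)
```

The validation-based checkpointing (saving the lowest-loss parameters) would wrap this loop; it is omitted here for brevity.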
Experimental results
1. Comparative test
To verify the superiority of the EEGTimes-ECA-Net algorithm presented here for MI-EEG signal feature recognition, we conducted extensive comparative experiments, selecting several strong algorithms as baselines for comparison on two public datasets. The baseline algorithms are described as follows:
FBCSP [1]: FBCSP extends CSP-based spatial filtering by slicing the frequency band and adding a feature selection algorithm. Specifically, the signal is first divided into sub-bands, each sub-band is CSP-filtered, and features are then selected from the filtered outputs and classified.
EEGNet [2]: a compact convolutional neural network that uses temporal convolution kernels to learn temporal filters capturing movement-related frequency information, and a separable convolution, comprising a depthwise convolution (Depthwise Convolution) and a pointwise convolution (Pointwise Convolution), to learn spatial filters while reducing the model's trainable parameters.
ShallowConvNet [3]: the data transformation performed by ShallowConvNet is similar to the transformation of FBCSP, with the time-rolling and spatial filters followed by a squared non-linear, mean-pooling layer and logarithmic activation function to extract deep features.
EEG-TCNet [4]: EEG-TCNet was proposed as an extension of EEGNet, applying the TCN structure after EEGNet feature extraction to further extract time information.
EEG-ITNet [5]: the model uses Inception modules and causal convolution with dilation to extract rich spectral, spatial and temporal information from the multi-channel brain electrical signal.
MBSTCNN-ECA-LightGBM [6]: this is a multi-branch convolutional network model with channel attention and LightGBM, which is built to learn time-frequency domain features, then add channel attention mechanisms to get more features, and finally decode classification tasks using lightweight structure LightGBM.
C2CM [7]: the model is classified with the CNN architecture by modifying the filter bank co-spatial pattern to generate a new data temporal representation, and the CNN is optimized for the new representation.
EEGConformer [8]: a compact Transformer structure that encapsulates both the local and global features extracted from the data within a single EEG classification framework. A convolution module learns local features over one-dimensional time and space, followed by a self-attention module that extracts global correlations from the local temporal features.
LMDANet [9]: the model combines two attention modules specially designed for electroencephalogram signals, and can integrate characteristics of multiple dimensions.
EEG-CDILNet [10]: the model provides a cyclic expansion convolution network (CDIL), which is a symmetrical structure, different from TCN, and can be used for classifying ultra-long sequence data by utilizing the expansion convolution multiplied expansion receptive field.
1.1 Results on BCICIV2a
We compared the proposed model with the state-of-the-art MI-EEG classification methods using multiple indices, including classification accuracy, Cohen's kappa score, precision, recall, and F1 score.
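These indices can be computed with scikit-learn; a minimal sketch on toy 4-class labels (the arrays below are hypothetical, not results from the paper):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             precision_recall_fscore_support)

y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])   # hypothetical ground truth
y_pred = np.array([0, 0, 1, 2, 2, 2, 3, 1])   # hypothetical predictions

acc = accuracy_score(y_true, y_pred)
kappa = cohen_kappa_score(y_true, y_pred)     # chance-corrected agreement
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=[0, 1, 2, 3], zero_division=0)

print(f"accuracy={acc:.2f} kappa={kappa:.2f}")  # accuracy=0.75 kappa=0.67
print("per-class precision:", prec.round(2))
print("macro F1:", round(float(f1.mean()), 2))
```

Cohen's kappa subtracts the agreement expected by chance, which is why it is reported alongside raw accuracy for the 4-class task.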
Table 1 shows, for each subject, the classification accuracy and kappa score of the state-of-the-art deep-learning MI-EEG classification methods on this dataset, all using subject-specific training. EEGNet, ShallowConvNet, EEG-TCNet, EEG-ITNet, MBSTCNN-ECA-LightGBM, C2CM, EEGConformer, LMDANet, EEG-CDILNet and EEGTimes-ECA-Net serve as comparison methods. The proposed model EEGTimes-ECA-Net reaches an accuracy of 80.45%, with fixed parameters and hyperparameters used for all subjects. First, our architecture improves accuracy by 12.7% over the traditional FBCSP; the results also show that other deep-learning methods such as ShallowConvNet and EEGNet outperform FBCSP, indicating that CNN-based methods have strong feature-characterization ability. However, their receptive fields are limited: these methods focus only on local features and neglect global correlation, which may limit decoding accuracy on EEG signal sequences. The recently proposed EEGConformer introduces a Transformer structure to capture global dependencies in EEG signals, but its attention mechanism has difficulty finding reliable dependencies among discrete time points. C2CM effectively combines traditional hand-crafted features with deep learning; yet, even though its parameters are fine-tuned for each subject, it does not beat our model except on subject 7. Our network is also 6% more accurate than MBSTCNN-ECA-LightGBM, which likewise uses an ECANet module, showing that our module's feature-extraction capability is stronger than that of a multi-branch network, achieving multi-branch-level feature extraction without introducing a multi-branch structure.
Furthermore, the accuracy of the proposed model is at least 1.8% higher than that of the attention-based network EEGConformer. This suggests that the TimesNet structure in our system is better suited than the Transformer structure to extracting long-term correlation features of the time-series signal, while keeping the model structure simpler. Table 2 summarizes the per-class precision and recall of the proposed model, together with the averages of precision, recall and F1 score. See fig. 5.
Table 1 Comparison between methods on the BCICIV2a dataset
Methods                    S01    S02    S03    S04    S05    S06    S07    S08    S09    Avg. (kappa)
FBCSP [1]                  76.00  56.50  81.25  61.00  55.00  45.25  82.75  81.25  70.75  67.75 (0.57)
EEGNet [2]                 84.34  54.06  87.54  63.59  67.39  54.88  88.80  76.75  74.24  72.40 (0.63)
ShallowConvNet [3]         79.51  56.25  88.89  80.90  57.29  53.82  91.67  81.25  79.17  74.31 (0.66)
EEG-TCNet [4]              85.77  65.02  94.51  64.91  75.00  61.40  87.36  83.76  78.03  77.35 (0.71)
EEG-ITNet [5]              84.38  62.85  89.93  69.10  74.31  57.64  88.54  83.68  80.21  76.74
MBSTCNN-ECA-LightGBM [6]   82.00  61.00  89.00  63.00  71.00  64.00  72.00  79.00  84.00  74.00 (0.65)
C2CM [7]                   87.50  65.28  90.28  66.67  62.50  45.49  89.58  83.33  79.51  74.46 (0.66)
EEGConformer [8]           88.19  61.46  93.40  78.13  52.08  65.28  92.36  88.19  88.89  78.66 (0.71)
LMDANet [9]                86.50  67.40  91.70  77.40  65.60  61.10  91.30  83.30  85.40  78.80 (0.71)
EEG-CDILNet [10]           85.40  61.80  94.40  73.60  66.00  61.80  93.10  91.70  88.90  79.63 (0.72)
EEGTimes-ECA-Net           85.31  71.88  91.25  78.75  75.00  63.13  88.44  87.81  82.50  80.45 (0.74)
Table 2 Precision, recall and F1 score on the BCICIV2a dataset using EEGTimes-ECA-Net
1.2 Comparison on BCICIV2b
BCICIV2b is a lower-complexity three-channel, two-class dataset, on which our proposed model achieves the highest classification accuracy among the compared methods. Unlike on BCICIV2a, FBCSP achieves higher accuracy on the BCICIV2b dataset, and deep-learning methods such as EEGNet and ShallowConvNet also show strong classification performance, again indicating that deep learning has stronger feature-extraction capability than conventional methods. EEGTimes-ECA-Net performs best, exceeding LMDANet and EEGConformer, with an average accuracy above 86% (0.74 kappa). With only a few channels, conventional methods such as FBCSP lose their advantage, possibly because the spatial filters cannot learn enough spatial characteristics; deep-learning models with excessive parameter counts, such as EEGConformer, also show limitations in classification performance. Note that we did not specifically optimize the hyperparameters of EEGTimes-ECA-Net for BCICIV2b; its classification performance on BCICIV2b therefore reflects the good robustness of the model. See fig. 6.
Table 3 Method comparisons on the BCICIV2b dataset
Table 4 Precision, recall and F1 score on the BCICIV2b dataset using EEGTimes-ECA-Net
2. Ablation experiments
Referring to fig. 7, "No ECANet" denotes EEGTimes-ECA-Net without the ECA channel attention mechanism;
"No TimesNet" denotes EEGTimes-ECA-Net without the TimesNet structure.
The key to the improvement of EEGTimes-ECA-Net over CNN-based approaches is the added TimesNet-based module that learns a global representation; the channel attention mechanism also contributes to the final result. We therefore performed ablation experiments on the datasets above to further verify the effectiveness of the TimesNet and ECANet modules in EEGTimes-ECA-Net, removing the TimesNet and ECANet modules in turn and comparing against the reference network. After removing the TimesNet module, network performance degraded on both datasets: on the BCICIV2a dataset, subject 7 dropped by 3.12%, subject 8 likewise dropped by 3.12%, and the average accuracy dropped by 4.45% (p < 0.01). Compared with the variant without the ECANet module, the full model improves average accuracy by 2.99% (p < 0.01). On the BCICIV2b dataset, the drop after removing the TimesNet module was smaller: subject 2 lost 1.56% accuracy, subject 8 lost 2.19%, and the average accuracy dropped by 1.37% (p < 0.01). Notably, on the BCICIV2a dataset, adding the TimesNet module slightly reduced the classification accuracy of subject 1, who was already well discriminated, while the otherwise poorly performing subjects 2 and 5 improved markedly, by 17.81% and 32.82% respectively. This demonstrates that the TimesNet module adopted here can capture the temporal characteristics of EEG sequences and enhances the robustness and feature-extraction capability of the model.
3. Visualization
Fig. 8 visualizes the output features of subject S2 of BCICIV2a during the training and testing phases, with and without the TimesNet module. The first row shows the feature visualizations without and with the module during the training phase; the second row shows the same two conditions during the testing phase.
Fig. 9 visualizes the spatial information of the EEG data after the convolution module of the network. The first row shows that without TimesBlock, most of the brain's motor-sensory area is activated and captured by the network model throughout the experiment; the second row shows that with the TimesBlock module, our model focuses more sharply on the motor-sensory area beyond its original focus, reflected in darker colors in the motor area and lighter colors elsewhere.
Fig. 10 is a time-domain activation map of the EEG data after the spatial convolution layer of the network. It shows that the network attends to different ranges of the time domain over the course of the experiment.
The class activation map of fig. 11 shows the ERD/ERS phenomenon that occurs when left- and right-hand movements are imagined.
A visualization method should preserve both the global and local features of the data, so we further illustrate the interpretability of TimesNet from two different angles: the deep feature distribution (Distribution) via UMAP, and the global dependency representation (Representation) implied by the class activation map.
(1) Feature distribution: to give a first account of how the features extracted by the introduced TimesBlock module affect classification performance, we use a recent statistical dimensionality-reduction and visualization technique, UMAP [12]; compared with the t-SNE algorithm, UMAP preserves more of the global structure and richer features in terms of visual quality. After sufficient training in both settings (with and without TimesNet), we chose subject 2 of the BCICIV2a dataset, whose feature distribution is shown in the figure. For the training data, the distances between different classes are small without the TimesNet module, whereas the features extracted with the TimesNet module have smaller intra-class spacing and larger inter-class spacing. For the test data, without TimesNet the intra-class distances are large and the aliasing between classes is very obvious, while with the help of TimesNet the aliasing between classes improves greatly, although the class boundaries are still not entirely clear, which is understandable given the poor separability of subject 2's data.
(2) Feature representation: to verify that the proposed model achieves a better representation of EEG features by introducing the TimesNet module, we next validate the module through more intuitive visualizations. We use gradient-weighted class activation mapping (Grad-CAM) to show that our model learns a global feature representation on the BCICIV2a dataset, as shown in fig. 9, which presents the average over all training trials for each subject separately. We use class activation maps (CAMs) to monitor the network's attention to different time periods, as shown in fig. 10; due to space constraints, only the average sample data of the first second of each trial is shown. Different attention is exhibited in different time regions, indicating that subjects' attention varied with fatigue during the experiment and that motor awareness shows a certain delay. Next, examining the left- and right-hand trial class activation maps of fig. 11, event-related synchronization and desynchronization are evident, with clear contralateral activation and ipsilateral inhibition observed for subjects 1, 3, 7 and 8, among others. However, the event-related characteristics of subjects 2 and 6 are not obvious, and their classification results are correspondingly poor.
References:
[1] KAI KENG A, ZHENG YANG C, HAIHONG Z, et al. Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface[C]//2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), 1-8 June 2008: 2390-2397.
[2] LAWHERN V J, SOLON A J, WAYTOWICH N R, et al. 2018. EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces[J]. J Neural Eng, 15(5): 056013.
[3] SCHIRRMEISTER R T, SPRINGENBERG J T, FIEDERER L D J, et al. 2017. Deep learning with convolutional neural networks for EEG decoding and visualization[J]. Hum Brain Mapp, 38(11): 5391-5420.
[4] INGOLFSSON T, HERSCHE M, WANG X, et al. 2020. EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces[M].
[5] SALAMI A, ANDREU-PEREZ J, GILLMEISTER H. 2022. EEG-ITNet: An Explainable Inception Temporal Convolutional Network for Motor Imagery Classification[J]. IEEE Access, 10: 36672-36685.
[6] JIA H, YU S, YIN S, et al. 2023. A Model Combining Multi Branch Spectral-Temporal CNN, Efficient Channel Attention, and LightGBM For MI-BCI Classification[J]. IEEE Trans Neural Syst Rehabil Eng: 1311-1320.
[7] WANG G, CERF M. 2022. Brain-Computer Interface using neural network and temporal-spectral features[J]. Front Neuroinform, 16: 952474.
[8] SONG Y, ZHENG Q, LIU B, et al. 2022. EEG Conformer: Convolutional Transformer for EEG Decoding and Visualization[J]. IEEE Trans Neural Syst Rehabil Eng: 710-719.
[9] MIAO Z, ZHAO M, ZHANG X, et al. 2023. LMDA-Net: A lightweight multi-dimensional attention network for general EEG-based brain-computer interfaces and interpretability[J]. Neuroimage, 276: 120209.
[10] LIANG T, YU X, LIU X, et al. 2023. EEG-CDILNet: a lightweight and accurate CNN network using circular dilated convolution for motor imagery classification[J]. J Neural Eng, 20(4).
[11] ZHAO H, ZHENG Q, MA K, et al. 2021. Deep Representation-Based Domain Adaptation for Nonstationary EEG Classification[J]. IEEE Trans Neural Netw Learn Syst, 32(2): 535-545.
[12] MCINNES L, HEALY J. 2018. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction[J]. ArXiv, abs/1802.03426.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (8)

1.一种基于TimesNet和卷积神经网络的运动想象脑电信号分类系统,其特征在于:包括脑电信号预处理模块,训练数据准备模块、神经网络搭建模块和模型训练与测试模块;1. A motor imagery EEG signal classification system based on TimesNet and convolutional neural network, characterized by: comprising an EEG signal preprocessing module, a training data preparation module, a neural network building module and a model training and testing module; 所述脑电信号预处理模块用于对数据集中的原始脑电信号进行滤波处理,使用带通滤波频率为1Hz—40Hz的带通滤波器消除原始脑电信号当中的高频和低频噪声;The EEG signal preprocessing module is used to filter the original EEG signals in the data set, using a bandpass filter with a bandpass filter frequency of 1 Hz-40 Hz to eliminate high-frequency and low-frequency noise in the original EEG signals; 所述训练数据准备模块用于对预处理后的数据划分为训练集、验证集与测试集,其中40%划分为训练集,10%划分为验证集,50%划分为测试集并,进行数据截取、归一化以及输入矩阵的构建;The training data preparation module is used to divide the preprocessed data into a training set, a validation set and a test set, wherein 40% is divided into a training set, 10% is divided into a validation set, and 50% is divided into a test set, and perform data interception, normalization and input matrix construction; 所述网络模型构建模块用于构建卷积神经网络模型;The network model building module is used to build a convolutional neural network model; 所述模型训练、验证与测试模块用于将训练数据准备模块处理的数据输入到卷积神经网络中进行训练与参数更新,并使用测试集完成性能测试。The model training, verification and testing module is used to input the data processed by the training data preparation module into the convolutional neural network for training and parameter updating, and use the test set to complete performance testing. 2.根据权利要求1所述的一种基于TimesNet和卷积神经网络的运动想象脑电信号分类系统,其特征在于:所述卷积神经网络模型用于对输入的三维脑电信号矩阵进行特征提取和分类;2. 
The motor imagery EEG signal classification system based on TimesNet and convolutional neural network according to claim 1, characterized in that: the convolutional neural network model is used to extract features and classify the input three-dimensional EEG signal matrix; 所述卷积神经网络模型包括特征提取模块、全连接分类模块;The convolutional neural network model includes a feature extraction module and a fully connected classification module; 所述特征提取模块用于对输入的三维脑电信号就诊通过卷积神经网络进行特征的自动提取。The feature extraction module is used to automatically extract features of the input three-dimensional EEG signal through a convolutional neural network. 所述全连接分类模块用于对所述卷积神经网络模型的输入三维脑电矩阵所对应运动想象类别进行分类判别。The fully connected classification module is used to classify and distinguish the motor imagery category corresponding to the input three-dimensional EEG matrix of the convolutional neural network model. 3.根据权利要求2所述的一种基于TimesNet和卷积神经网络的运动想象脑电信号分类系统,其特征在于:所述特征提取模块包括TimesNet模块、时空卷积模块和通道注意力模块;3. The motor imagery EEG signal classification system based on TimesNet and convolutional neural network according to claim 2, characterized in that: the feature extraction module includes a TimesNet module, a spatiotemporal convolution module and a channel attention module; 所述TimesNet模块用于对输入矩阵实现更好的特征表征;The TimesNet module is used to achieve better feature representation of the input matrix; 所述时空卷积模块用于对所述TimesNet模块的输出特征进行时间特征和空间特征提取;The spatiotemporal convolution module is used to extract temporal features and spatial features from the output features of the TimesNet module; 所述通道注意力模块用于对所述时空卷积模块的输出特征各通道进行权重的分配。The channel attention module is used to assign weights to each channel of the output features of the spatiotemporal convolution module. 4.根据权利要求3所述的一种基于TimesNet和卷积神经网络的运动想象脑电信号分类系统,其特征在于:所述TimesNet模块将一维时序特征转换为二维张量进行分析,得到更好的时序表征,所述TimesNet模块由一个TimesBlock组成,TimesBlock对输入的一维时序数据提取周期,对于一个时间长度为T、通道维度为C的一维时间序列其周期性可以由时间维度的快速傅里叶变化(Fast Fourier Transform,FFT)计算得到,即:4. 
According to claim 3, a motor imagery EEG signal classification system based on TimesNet and convolutional neural network is characterized in that: the TimesNet module converts the one-dimensional time series features into a two-dimensional tensor for analysis to obtain a better time series representation, and the TimesNet module is composed of a TimesBlock, which extracts the cycle of the input one-dimensional time series data. For a one-dimensional time series with a time length of T and a channel dimension of C Its periodicity can be calculated by the Fast Fourier Transform (FFT) of the time dimension, namely: A=Avg(Amp(FFT(X1D))),A=Avg(Amp(FFT( X1D ))), 其中,代表了X1D中每个频率分布的强度,强度最大的k个频率{f1,…,fk}对应着最显著的k个周期长度{p1,…,pk}。我们上述过程简记为:in, represents the intensity of each frequency distribution in X1D . The k frequencies with the largest intensity { f1 , ..., fk } correspond to the k most significant period lengths { p1 , ..., pk }. We can abbreviate the above process as: A,{f1,…,fk},{p1,…,p2}=Period(X1D)A,{f 1 ,…,f k },{p 1 ,…,p 2 }=Period(X 1D ) 接下来将之转化为二维张量表示二维时序变化,我们可以基于选定的周期对原始序列进行堆叠,该过程可以公式化为:Next, convert it into a two-dimensional tensor to represent the two-dimensional time series changes. We can stack the original sequence based on the selected period. The process can be formulated as: 其中,Padding()在末尾补0,使得序列长度可以被pi整除。通过上述操作,我们得到了一组二维张量 对应着由周期pi主导的二维时序变化。Among them, Padding() adds 0 at the end so that the sequence length can be divided by pi . Through the above operations, we get a set of two-dimensional tensors It corresponds to the two-dimensional time series changes dominated by the period p i . 对于二维张量具有二维局部性,因此,我们可以使用经典的Inception模型提取二维时序变化表征,即:For a two-dimensional tensor, it has two-dimensional locality. 
Therefore, we can use the classic Inception model to extract a two-dimensional temporal change representation, namely: 对于提取的时序特征,我们将其转化为一维空间以便进行信息聚合:For the extracted time series features, we transform them into a one-dimensional space for information aggregation: 最后我们通过Softmax计算得到一维表征及其对应频率的强度进行加权求和,得到最终输出:Finally, we use Softmax to calculate the one-dimensional representation and the intensity of its corresponding frequency for weighted summation to get the final output: 通过上述过程,TimesNet完成了多个周期分别提取二维时序变化,再进行自适应融合的时序变化建模过程。Through the above process, TimesNet completes the process of extracting two-dimensional time series changes in multiple cycles and then performing adaptive fusion time series change modeling. 5.根据权利要求4所述的一种基于TimesNet和卷积神经网络的运动想象脑电信号分类系统,其特征在于:所述时空卷积模块将所述的TimesNet模块的输出特征通过时间卷积层和空间卷积层得到脑电信号的特征,所述时空卷积模块的应用至少包括以下步骤:5. The motor imagery EEG signal classification system based on TimesNet and convolutional neural network according to claim 4 is characterized in that: the spatiotemporal convolution module obtains the features of the EEG signal by using the output features of the TimesNet module through a temporal convolution layer and a spatial convolution layer, and the application of the spatiotemporal convolution module at least comprises the following steps: 时空卷积模块依次连接的时间卷积层和空间卷积层,以及激活函数,该时间卷积层中设置卷积核选取大小为1*125,卷积核的数量为12个;The spatiotemporal convolution module sequentially connects the temporal convolution layer and the spatial convolution layer, as well as the activation function. 
The convolution kernel size in the temporal convolution layer is set to 1*125, and the number of convolution kernels is 12; 在空间卷积层中,空间卷积核大小设置为C*1,C与脑电数据的通道数相同,空间卷积核数量设置为时间卷积层卷积核的数量的两倍;In the spatial convolution layer, the size of the spatial convolution kernel is set to C*1, where C is the same as the number of channels of the EEG data, and the number of spatial convolution kernels is set to twice the number of convolution kernels in the temporal convolution layer; 在时间卷积层和空间卷积层之后加入平方激活函数,其次串联平均池化层,该层卷积核的大小为1*37,然后利用对数激活函数加入非线性因素,增强神经网络对模型的表达能力,卷积模块最后是批归一化,加速网络的训练过程,提高模型的精度。A square activation function is added after the temporal convolution layer and the spatial convolution layer, followed by a series of average pooling layers with a convolution kernel size of 1*37. The logarithmic activation function is then used to add nonlinear factors to enhance the neural network’s ability to express the model. The convolution module is then batch normalized to accelerate the network training process and improve the accuracy of the model. 6.根据权利要求5所述的一种基于TimesNet和卷积神经网络的运动想象脑电信号分类系统,其特征在于:所述通道注意力模块将所述时空卷积模块的输出特征通过全局平均池化层将每个特征通道的数值取平均,生成一个通道数相同的向量,然后使用一维卷积操作对每个通道以及它们的邻居来捕获局部跨通道交互信息,将该权重序列每个分量乘以对应的各通道原始数据以得到经过通道注意力机制作用后的特征图,作为所述特征提取模块的输出特征。6. According to claim 5, a motor imagery EEG signal classification system based on TimesNet and convolutional neural network is characterized in that: the channel attention module averages the output features of the spatiotemporal convolution module through a global average pooling layer to generate a vector with the same number of channels, and then uses a one-dimensional convolution operation on each channel and their neighbors to capture local cross-channel interaction information, and multiplies each component of the weight sequence by the corresponding original data of each channel to obtain a feature map after the channel attention mechanism, which is used as the output feature of the feature extraction module. 
7. The motor imagery EEG signal classification system based on TimesNet and convolutional neural network according to claim 2, characterized in that: the feature vector output by the fully connected classification module is processed by a softmax function to obtain the score of each class.
8. The motor imagery EEG signal classification system based on TimesNet and convolutional neural network according to claim 1, characterized in that: in the model training and testing module, the EEG data of each subject are trained and tested separately.
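The fully connected classification step of claim 7 reduces to a linear map followed by softmax. A minimal sketch, in which the weight matrix, bias, and feature dimension are hypothetical placeholders for the learned parameters:

```python
import numpy as np

def classify(features, weight, bias):
    """Fully connected layer followed by softmax, giving per-class scores.

    features: (D,) feature vector from the feature extraction module
    weight:   (n_classes, D) learned weights (placeholder here)
    bias:     (n_classes,)   learned biases  (placeholder here)
    """
    logits = weight @ features + bias
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

# toy 4-class motor-imagery example with an 8-dimensional feature vector
rng = np.random.default_rng(0)
feats = rng.standard_normal(8)
scores = classify(feats, rng.standard_normal((4, 8)), np.zeros(4))
```

The scores are positive and sum to one, so the predicted class is simply the arg-max.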
CN202410184880.6A 2024-02-19 2024-02-19 A motor imagery EEG signal classification system based on TimesNet and convolutional neural network Pending CN118094317A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410184880.6A CN118094317A (en) 2024-02-19 2024-02-19 A motor imagery EEG signal classification system based on TimesNet and convolutional neural network

Publications (1)

Publication Number Publication Date
CN118094317A true CN118094317A (en) 2024-05-28

Family

ID=91151047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410184880.6A Pending CN118094317A (en) 2024-02-19 2024-02-19 A motor imagery EEG signal classification system based on TimesNet and convolutional neural network

Country Status (1)

Country Link
CN (1) CN118094317A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114266276A (en) * 2021-12-25 2022-04-01 北京工业大学 Motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution
CN116755547A (en) * 2023-05-16 2023-09-15 重庆理工大学 Surface EMG signal gesture recognition system based on lightweight convolutional neural network
CN117272196A (en) * 2023-08-23 2023-12-22 浙江工业大学 An anomaly detection method for industrial time series data based on spatiotemporal graph attention network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Handong et al.: "Theory and Application of Optimal Selection of Rock Mass Mechanical Parameters", 31 August 2006, Yellow River Water Conservancy Press, pages 49-53 *
Niu Chao: "CCF Outstanding Doctoral Dissertation Series: Research on Secure and Trusted Sharing Technology for Internet of Things Data", 31 January 2023, China Machine Press, pages 111-113 *
Wang Yutao et al.: "Python Big Data Analysis and Machine Learning: Practical Business Cases", 30 June 2020, China Machine Press, pages 268-271 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119046820A * 2024-07-31 2024-11-29 南京林业大学 Concentration degree classification method and system based on single-channel EEG signals
CN118568420A (en) * 2024-07-31 2024-08-30 北京工业大学 A method for issuing continuous motion trajectory control instructions for brain-computer interface based on magnetic brain signals
CN118735082A (en) * 2024-09-03 2024-10-01 南京信息工程大学 A correction method for sub-seasonal temperature forecast based on 3D-TimesNet
CN119235314A (en) * 2024-10-31 2025-01-03 天津大学 Depression auxiliary diagnosis system based on EEG and speech and its data analysis method
CN119439236A (en) * 2025-01-13 2025-02-14 吉林大学 A method for earthquake event detection based on CDIL-CNN combined with convolutional attention mechanism
CN119439236B (en) * 2025-01-13 2025-04-08 吉林大学 A method for earthquake event detection based on CDIL-CNN combined with convolutional attention mechanism
CN119475109B (en) * 2025-01-16 2025-04-08 杭州电子科技大学 A motor imagery EEG signal classification and recognition method and system based on data enhancement and multimodal feature fusion
CN119475109A (en) * 2025-01-16 2025-02-18 杭州电子科技大学 A motor imagery EEG signal classification and recognition method and system based on data enhancement and multimodal feature fusion
CN119622609A (en) * 2025-02-13 2025-03-14 苏州声学产业技术研究院有限公司 A method for detecting motors based on the Transformer model of UMAP
CN119622609B (en) * 2025-02-13 2025-06-03 苏州声学产业技术研究院有限公司 Method for detecting motor based on UMAP transducer model
CN120337411A (en) * 2025-04-24 2025-07-18 西南林业大学 A hybrid electric vehicle NH₃ emission prediction method based on TCN+Fecam+Transformer
CN120236705A (en) * 2025-06-03 2025-07-01 浙江大学 EEG report generation method combining hybrid neural network and analytical experience algorithm
CN120724268A (en) * 2025-08-29 2025-09-30 长春大学 A training method and training device for a brain cognitive state classification model
CN121167432A (en) * 2025-11-19 2025-12-19 常熟市第一人民医院 An interpretable EEG decoding method based on adaptive fuzzy convolution and TSK-guided attention

Similar Documents

Publication Publication Date Title
CN118094317A (en) A motor imagery EEG signal classification system based on TimesNet and convolutional neural network
Aslan et al. Automatic Detection of Schizophrenia by Applying Deep Learning over Spectrogram Images of EEG Signals.
Jemal et al. An interpretable deep learning classifier for epileptic seizure prediction using EEG data
CN114533086B (en) A motor imagery EEG decoding method based on spatial feature time-frequency transformation
CN110876626B (en) Depression detection system based on multi-lead EEG optimal lead selection
CN111568446B (en) Portable brain depression detection system combined with demographic attention mechanism
Pan et al. ST-SCGNN: a spatio-temporal self-constructing graph neural network for cross-subject EEG-based emotion recognition and consciousness detection
CN110969108B (en) Limb action recognition method based on autonomic motor imagery electroencephalogram
CN106886792B (en) An EEG Emotion Recognition Method Based on Hierarchical Mechanism to Build a Multi-Classifier Fusion Model
CN114139573B (en) Identification method based on electroencephalogram signal multispectral image sequence
CN114145745B (en) Graph-based multitasking self-supervision emotion recognition method
Kokate et al. Classification of upper arm movements from eeg signals using machine learning with ica analysis
Han et al. EEG emotion recognition based on the TimesNet fusion model
CN109247917A (en) A kind of spatial hearing induces P300 EEG signal identification method and device
CN113128384B (en) Brain-computer interface software key technical method of cerebral apoplexy rehabilitation system based on deep learning
CN108280414A (en) A kind of recognition methods of the Mental imagery EEG signals based on energy feature
Bugeja et al. A novel method of EEG data acquisition, feature extraction and feature space creation for early detection of epileptic seizures
Abibullaev et al. A brute-force CNN model selection for accurate classification of sensorimotor rhythms in BCIs
Saini et al. Discriminatory features based on wavelet energy for effective analysis of electroencephalogram during mental tasks
Sun et al. Meeg-transformer: transformer network based on multi-domain eeg for emotion recognition
Al-Hamadani et al. Normalized deep learning algorithms based information aggregation functions to classify motor imagery EEG signal
Liu et al. Enhanced electroencephalogram signal classification: A hybrid convolutional neural network with attention-based feature selection
Ming-Ai et al. Feature extraction and classification of mental EEG for motor imagery
Wu et al. Learning multiband-temporal-spatial eeg representations of emotions using lightweight temporal convolution and 3d convolutional neural network
Kavitha et al. Optimizing EEG-based emotion recognition with a multi-modal ensemble approach

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20240528

RJ01 Rejection of invention patent application after publication