CN119970034A - Sentiment monitoring method based on online independent component analysis - Google Patents
Sentiment monitoring method based on online independent component analysis
- Publication number
- CN119970034A (Application CN202411984853.3A)
- Authority
- CN
- China
- Prior art keywords
- signals
- signal
- artifact
- component analysis
- emotion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
- A61B5/372—Analysis of electroencephalograms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203—Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2134—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on separation criteria, e.g. independent component analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24317—Piecewise classification, i.e. whereby each classification requires several discriminant rules
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0475—Generative networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2123/00—Data types
- G06F2123/02—Data types in the time domain, e.g. time-series data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Abstract
The invention discloses an emotion monitoring method based on online independent component analysis, which comprises the following steps: first, filtering, bad-channel removal, and baseline removal are applied to the electroencephalogram (EEG) signal obtained from the signal acquisition device; second, a 1 s sliding window with a 0.5 s step is set to segment the EEG signal, the segmented data are fed into a generative adversarial network model to generate artifact signals, and the artifact signals together with the segmented signals are input into online independent component analysis to fully remove artifacts; third, the spatial and temporal features of the EEG signal are extracted from the processed data using a spatial mapping technique and a temporal-sequence modeling approach, respectively; fourth, the temporal and spatial features are deeply fused and input into a trained softmax classifier for emotion monitoring. The invention performs artifact removal on real-time EEG signals as they arrive, keeping the removal latency low while maintaining a high removal quality.
Description
Technical Field
The invention relates to the technical field of real-time electroencephalogram (EEG) signal processing, and in particular to an emotion monitoring method that processes signals with an online ICA algorithm.
Background
In recent years, people have paid increasing attention to health and emotion, and emotion monitoring, as an emerging biological-signal processing technique, has been widely studied and applied in psychology, medicine, human-computer interaction, and other fields. Identifying emotional states not only helps people understand their own emotional changes but also gives a better picture of their health.
Electroencephalography (EEG) signals are biological signals of brain activity and reflect a person's emotional state well. By collecting real-time EEG signals from the human body and analyzing their waveform characteristics, the emotional state can be judged accurately. At the same time, EEG acquisition is subject to many exogenous and endogenous interferences, such as electrooculogram (EOG) and electromyogram (EMG) signals, and more effective processing is required to remove these interfering signals.
Independent component analysis (ICA) is a powerful blind source separation algorithm that can effectively remove interference signals from EEG so that emotion can be judged accurately.
However, current ICA algorithms rely on an offline mode: processing usually begins only after the EEG signal has been fully collected, so a delay exists and real-time emotion monitoring cannot be achieved. In practice, real-time monitoring reflects the human state more promptly, which is needed in settings with high emotion-monitoring demands such as medical care and human-computer interaction.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides an emotion monitoring method based on online independent component analysis. The method performs signal processing and emotion monitoring in real time while the EEG signal is being acquired, eliminating the delay of the traditional ICA processing mode. Combined with a GAN-optimized ICA algorithm, it removes artifacts well and extracts the desired EEG signal, effectively improving the accuracy and robustness of emotion monitoring.
In order to achieve the above purpose, the technical scheme of the invention comprises the following steps:
S1, acquiring the user's real-time EEG signal and performing preliminary preprocessing, including filtering, bad-channel removal, and baseline removal;
S2, segmenting the preprocessed EEG signal with a sliding window, feeding the segmented data into a generative adversarial network model to generate similar artifact signals, and performing independent component analysis and artifact removal on these together with the segmented data;
S3, extracting the spatial and temporal features of the EEG signal from the resulting data using a convolutional neural network and a self-attention mechanism, respectively;
S4, deeply fusing the extracted spatial and temporal features and inputting them into a softmax classifier for emotion classification, thereby achieving online real-time emotion monitoring.
Preferably, in step S1, the preliminary preprocessing comprises the following:
(1) Filtering with a 1-45 Hz band-pass filter;
(2) Judging bad channels from the amplitude and waveform stability of the EEG signal and marking them;
(3) Selecting a segment of resting-state signal and computing the baseline signal b = (1/T)·Σ x(t) over the resting segment, where T is the duration of the resting state; the baseline value is subtracted from the subsequent EEG signal.
Preferably, in step S2, the sliding-window size is set to 1 s and the step size to 0.5 s, giving the segmented EEG signal:
X={x1,x2,...,xt}
where t denotes the t-th sliding window;
A trained generator produces artifact signals that are statistically similar to the real ones; the generated artifact signals are compared with the real EEG signal, and ICA is used to remove ocular and myoelectric artifacts from the signals, improving the quality and efficiency of artifact removal.
Preferably, in step S3, the EEG signal X={x1,x2,...,xt} processed in step S2, where xt denotes the t-th sliding window, is first arranged according to the electrode positions of the EEG acquisition device;
The segmented EEG signals are input into a generative adversarial network (GAN) model: the generator produces artifact signals as realistically as possible, the discriminator distinguishes the artifact signals produced by the generator from real EEG signals, and on this basis the generator is continuously optimized to produce artifact signals close to the real ones. The generated artifact signals and the segmented original EEG signals then undergo independent component analysis together, so that the ICA-processed signals effectively remove endogenous signals (such as electrooculogram and electromyogram signals) and other interference. The GAN-optimized independent component analysis algorithm further extracts the true EEG signal and improves robustness.
Preferably, in step S3, the one-dimensional EEG signal processed in step S2 is mapped into a two-dimensional matrix according to the electrode positions of the EEG acquisition device, with element xm(t) placed at the position of electrode m, where m denotes the m-th electrode;
Spatial features of the two-dimensional signal frame Xt are extracted with a convolutional neural network, using a 3×3 convolution kernel and a 2×2 convolution kernel and no pooling operation, so as to preserve more spatial feature information:
F1=Conv(X,W1,b1)
F2=Conv(X,W2,b2)
where W denotes the convolution kernel weights, b the bias term, and F the output feature map;
Temporal features are extracted from the EEG signal processed in step S2 using a self-attention mechanism, and a weighted sum yields the final temporal feature vector Ft. The EEG signal is mapped to three different matrices through different weights:
Query matrix (Q): Q=xtwq;
Key matrix (K): K=xtwk;
Value matrix (V): V=xtwv;
where wq, wk, wv are weight matrices obtained by learning;
The similarity between sliding windows, i.e., the attention score, is then computed:
Attention Score = QK^T/√dk
where dk is the dimension of the key vectors, followed by a normalization operation:
Attention Weights = softmax(Attention Score)
Finally, a weighted sum over the value matrix V gives the final temporal feature vector:
Output=Attention Weights×V.
Preferably, in step S3, the first convolution kernel gives:
F1=ReLU(W1*X(t)+b1)
where * denotes the convolution operation, ReLU is the activation function, F1 is the post-convolution feature matrix, and b1 is a bias term;
The second convolution kernel gives:
F2=ReLU(W2*F1+b2)
No pooling operation is used, so the spatial features are preserved to the greatest extent; finally, the extracted spatial feature matrix F2 is flattened into a one-dimensional vector Fs.
Preferably, in step S4, the spatial and temporal features extracted by the CNN and the self-attention mechanism are Fs and Ft, respectively; the temporal feature Ft of the EEG signal is extracted by the self-attention mechanism, and the spatial and temporal features are concatenated into a new fused feature vector Ff=[Fs||Ft], where || denotes the feature concatenation operation;
the fused feature is expressed as:
Ff=concat(Fs,Ft)
Finally, Ff is input into a softmax classifier for emotion classification:
yk = exp(θk·Ff) / Σj exp(θj·Ff)
where θk is the weight of the k-th emotion, k is the class index, and yk is the predicted probability of the k-th emotion; finally, the emotion with the highest predicted probability is selected as the output.
The technical principle and the beneficial effects of the invention are as follows:
The online ICA algorithm provided by the invention performs signal processing and emotion monitoring in real time while the EEG signal is being acquired, eliminating the delay of the traditional ICA processing mode. Combined with the GAN-optimized ICA algorithm, it removes artifacts well and extracts the desired EEG signal, effectively improving the accuracy and robustness of emotion monitoring.
Drawings
FIG. 1 is a flow chart of an algorithm of the present invention;
FIG. 2 is a schematic diagram of the framework of the GAN-optimized ICA algorithm of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only preferred embodiments of the invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
Examples
As shown in fig. 1, the present invention comprises the following steps:
S1, acquiring the user's real-time EEG signal and performing preliminary preprocessing, including filtering, bad-channel removal, and baseline removal;
S2, segmenting the preprocessed EEG signal with a sliding window, feeding the segmented data into a generative adversarial network model to generate similar artifact signals, and performing independent component analysis and artifact removal on these together with the segmented data;
S3, extracting the spatial and temporal features of the EEG signal from the resulting data using a convolutional neural network and a self-attention mechanism, respectively;
S4, deeply fusing the extracted spatial and temporal features and inputting them into a softmax classifier for emotion classification, thereby achieving online real-time emotion monitoring.
Preferably, in step S1, the preliminary preprocessing comprises the following:
(1) Filtering with a 1-45 Hz band-pass filter;
(2) Judging bad channels from the amplitude and waveform stability of the EEG signal and marking them;
(3) Selecting a segment of resting-state signal and computing the baseline signal b = (1/T)·Σ x(t) over the resting segment, where T is the duration of the resting state; the baseline value is subtracted from the subsequent EEG signal.
Preferably, in step S2, the sliding-window size is set to 1 s and the step size to 0.5 s, giving the segmented EEG signal:
X={x1,x2,...,xt}
where t denotes the t-th sliding window;
A trained generator produces artifact signals that are statistically similar to the real ones; the generated artifact signals are compared with the real EEG signal, and ICA is used to remove ocular and myoelectric artifacts from the signals, improving the quality and efficiency of artifact removal.
Preferably, in step S3, the EEG signal X={x1,x2,...,xt} processed in step S2, where xt denotes the t-th sliding window, is first arranged according to the electrode positions of the EEG acquisition device;
The segmented EEG signals are input into a generative adversarial network (GAN) model: the generator produces artifact signals as realistically as possible, the discriminator distinguishes the artifact signals produced by the generator from real EEG signals, and on this basis the generator is continuously optimized to produce artifact signals close to the real ones. The generated artifact signals and the segmented original EEG signals then undergo independent component analysis together, so that the ICA-processed signals effectively remove endogenous signals (such as electrooculogram and electromyogram signals) and other interference. The GAN-optimized independent component analysis algorithm further extracts the true EEG signal and improves robustness.
Preferably, in step S3, the one-dimensional EEG signal processed in step S2 is mapped into a two-dimensional matrix according to the electrode positions of the EEG acquisition device, with element xm(t) placed at the position of electrode m, where m denotes the m-th electrode;
Spatial features of the two-dimensional signal frame Xt are extracted with a convolutional neural network, using a 3×3 convolution kernel and a 2×2 convolution kernel and no pooling operation, so as to preserve more spatial feature information:
F1=Conv(X,W1,b1)
F2=Conv(X,W2,b2)
where W denotes the convolution kernel weights, b the bias term, and F the output feature map;
Temporal features are extracted from the EEG signal processed in step S2 using a self-attention mechanism, and a weighted sum yields the final temporal feature vector Ft. The EEG signal is mapped to three different matrices through different weights:
Query matrix (Q): Q=xtwq;
Key matrix (K): K=xtwk;
Value matrix (V): V=xtwv;
where wq, wk, wv are weight matrices obtained by learning;
The similarity between sliding windows, i.e., the attention score, is then computed:
Attention Score = QK^T/√dk
where dk is the dimension of the key vectors, followed by a normalization operation:
Attention Weights = softmax(Attention Score)
Finally, a weighted sum over the value matrix V gives the final temporal feature vector:
Output=Attention Weights×V.
Preferably, in step S3, the first convolution kernel gives:
F1=ReLU(W1*X(t)+b1)
where * denotes the convolution operation, ReLU is the activation function, F1 is the post-convolution feature matrix, and b1 is a bias term;
The second convolution kernel gives:
F2=ReLU(W2*F1+b2)
No pooling operation is used, so the spatial features are preserved to the greatest extent; finally, the extracted spatial feature matrix F2 is flattened into a one-dimensional vector Fs.
Preferably, in step S4, the spatial and temporal features extracted by the CNN and the self-attention mechanism are Fs and Ft, respectively; the temporal feature Ft of the EEG signal is extracted by the self-attention mechanism, and the spatial and temporal features are concatenated into a new fused feature vector Ff=[Fs||Ft], where || denotes the feature concatenation operation;
the fused feature is expressed as:
Ff=concat(Fs,Ft)
Finally, Ff is input into a softmax classifier for emotion classification:
yk = exp(θk·Ff) / Σj exp(θj·Ff)
where θk is the weight of the k-th emotion, k is the class index, and yk is the predicted probability of the k-th emotion; finally, the emotion with the highest predicted probability is selected as the output.
Implementation flow
The emotion monitoring method based on the online ICA algorithm comprises the following specific steps:
Step S1: acquire real-time EEG signals and perform preliminary preprocessing. Specifically:
Let the acquired real-time EEG signal EEGt have signal amplitude xt at each time point t; the following preliminary preprocessing is then required:
S11, a band-pass filter H(ω) with a 1-45 Hz passband is applied; in the frequency domain the filtering is defined as:
Y(ω)=X(ω)·H(ω)
where X (ω) is the spectrum of the original EEG signal and Y (ω) is the filtered signal spectrum.
S12, bad channels are judged from the amplitude and waveform stability of the EEG signal and marked. If a channel's amplitude exceeds the threshold ε over a period of time, it can be marked as a bad channel:
‖x(t)‖∞ ≥ ε.
S13, baseline correction: a baseline signal b is collected while the user is in a resting state, and the corrected signal is:
x'(t)=x(t)-b
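For illustration, a minimal Python sketch of this preprocessing stage is given below. It is not part of the claimed method: the sampling rate, amplitude threshold ε, and resting-segment length are assumed example values, and SciPy's Butterworth band-pass stands in for the unspecified 1-45 Hz filter.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs=250.0, amp_thresh=100e-6, rest_samples=500):
    """Preliminary preprocessing: 1-45 Hz band-pass, bad-channel
    marking, and baseline removal. `eeg` has shape (channels, samples).
    fs, amp_thresh, and rest_samples are assumed example values."""
    # S11: 1-45 Hz band-pass filter (4th-order Butterworth).
    b, a = butter(4, [1.0, 45.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, eeg, axis=1)

    # S12: mark a channel as bad when its peak amplitude reaches
    # the threshold epsilon (infinity norm over time).
    bad = np.max(np.abs(filtered), axis=1) >= amp_thresh

    # S13: baseline correction; b is the mean of a resting-state
    # segment and is subtracted from the subsequent signal.
    baseline = filtered[:, :rest_samples].mean(axis=1, keepdims=True)
    corrected = filtered - baseline
    return corrected, bad
```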
Step S2: the generative adversarial network (GAN)-optimized ICA algorithm; see the schematic framework of the GAN-optimized ICA algorithm in FIG. 2. Specifically:
S21, to ensure efficient segmentation of the real-time signal stream and the quality of the segmented signals, a sliding-window scheme is adopted. The choice of window size and step length is important: too small a window may fail to capture enough signal characteristics, degrading the separation; too large a window increases the computational load and introduces signal delay. Likewise, too small a step increases the algorithm's computational complexity, while too large a step makes the segmentation discontinuous and harms the final emotion classification.
The window size is finally set to:
Twin=1s
The step length is as follows:
Δt=0.5s。
The final segmented signal is then:
Xt={x(t),x(t+1),...,x(t+Twin-1)}, t∈Z
where Xt denotes the signal of the t-th window.
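A minimal sketch of this windowing step follows, assuming a 250 Hz sampling rate (not specified in the disclosure) so that Twin = 1 s corresponds to 250 samples and Δt = 0.5 s to 125:

```python
import numpy as np

def sliding_windows(x, fs=250, win_s=1.0, step_s=0.5):
    """Segment a (channels, samples) signal into overlapping windows
    X_t = {x(t), ..., x(t + T_win - 1)}; fs is an assumed example rate."""
    win, step = int(win_s * fs), int(step_s * fs)
    return [x[:, i:i + win]
            for i in range(0, x.shape[1] - win + 1, step)]
```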
S22, the generative adversarial network comprises a generator G and a discriminator D; the generator produces an artifact signal S(t) (such as electrooculogram or electromyogram signals) that is close to reality:
S(t)=G(z)
where G(z) represents the output of the generator and z represents random noise.
The purpose of the discriminator is to distinguish the generated artifact signal from the real signal: for each signal x(t), the discriminator makes a binary decision and outputs a result D(x) representing the probability that the signal is real:
D(x)=σ(WD·x(t)+bD)
where WD denotes the discriminator weights, bD is a bias term, and σ is the sigmoid function.
At the same time, there is a practical issue: training a GAN is an adversarial process involving continuous iteration between the generator and the discriminator, which typically requires substantial computational resources and time. To ensure the real-time performance of the algorithm, the GAN model is pre-trained in advance on a large-scale dataset, so training from scratch is unnecessary; only the loss function needs fine-tuning on the actual signals, allowing GAN training to converge in a short time. The GAN loss functions are:
LG=-E[logD(G(z))]
LD=-E[logD(x(t))]-E[log(1-D(G(z)))]
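As a hedged illustration, the two loss terms can be estimated from the discriminator's outputs as follows; d_real and d_fake stand for D(x(t)) and D(G(z)) and are assumed to be probabilities in (0, 1):

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """Monte-Carlo estimates of the GAN losses above:
    L_G = -E[log D(G(z))],
    L_D = -E[log D(x(t))] - E[log(1 - D(G(z)))]."""
    l_g = -np.mean(np.log(d_fake + eps))
    l_d = -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
    return l_g, l_d
```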
S23, during GAN training, the generated artifact signal S(t) and the original EEG signal are input into the ICA algorithm, and the artifact signal is removed by blind source separation, yielding the optimized independent components Soptimized(t):
X(t)=Woptimized·Soptimized(t)
where Woptimized denotes the optimized unmixing matrix. Through adversarial training of the generator and discriminator, the blind source separation process of the ICA algorithm is effectively optimized, artifact interference in the EEG signal is removed, and the accuracy and robustness of emotion monitoring are improved.
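The online, GAN-optimized ICA itself is the invention's contribution and is not reproduced here; as a stand-in, the sketch below applies conventional FastICA from scikit-learn to one window and zeroes the components most correlated with the generated artifact reference S(t). The correlation threshold is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifacts(window, artifact, corr_thresh=0.6):
    """window: (channels, samples) EEG segment; artifact: (samples,)
    GAN-generated artifact reference S(t). Returns the cleaned window."""
    ica = FastICA(n_components=window.shape[0], random_state=0)
    sources = ica.fit_transform(window.T)          # (samples, components)
    # Zero out components strongly correlated with the artifact reference.
    for k in range(sources.shape[1]):
        r = np.corrcoef(sources[:, k], artifact)[0, 1]
        if abs(r) >= corr_thresh:
            sources[:, k] = 0.0
    return ica.inverse_transform(sources).T        # back to (channels, samples)
```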
Step S3: extract the spatial features of the EEG signal with a CNN and the temporal features with a self-attention mechanism.
S31, spatial feature extraction: assuming the device worn by the user acquires EEG signals from m electrodes, the EEG signal x(t) is mapped to the corresponding electrode positions to form a two-dimensional matrix.
Two convolution kernels, K1=3×3 and K2=2×2, are used to extract spatial features at different scales, and no pooling layer is used in order to preserve more spatial information. With the ReLU activation function, the final feature matrix Fspatial is:
Fspatial=ReLU(Xconv)
Finally, a flattening operation converts the two-dimensional feature into a one-dimensional vector Fspatial:
Fspatial=Flatten(Fspatial)
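A minimal NumPy sketch of the two-kernel spatial extraction (3×3 then 2×2, ReLU, no pooling, flatten) follows; the random kernels are placeholders for the trained weights W1 and W2:

```python
import numpy as np
from scipy.signal import convolve2d

def spatial_features(x2d, rng=np.random.default_rng(0)):
    """x2d: 2-D electrode-layout matrix X_t. Kernels are random
    placeholders for trained weights; no pooling is applied."""
    k1 = rng.standard_normal((3, 3))               # first kernel, 3x3
    k2 = rng.standard_normal((2, 2))               # second kernel, 2x2
    f1 = np.maximum(convolve2d(x2d, k1, mode="valid"), 0)  # F1 = ReLU(W1*X)
    f2 = np.maximum(convolve2d(f1, k2, mode="valid"), 0)   # F2 = ReLU(W2*F1)
    return f2.ravel()                              # flatten to F_spatial
```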
S32, temporal feature extraction: temporal features are extracted with a self-attention mechanism. The signal x(t) is mapped into a query matrix Q, a key matrix K, and a value matrix V:
Q=xtwq
K=xtwk
V=xtwv
where wq, wk, wv are weight matrices learned in advance. The attention score is then computed, normalized, and used in a weighted sum to obtain the final temporal feature Ftime:
Ftime=Attention Weights×V
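The temporal branch can be sketched as standard scaled dot-product self-attention; the weight matrices wq, wk, wv below are random placeholders for the learned ones, and the √dk scaling is the usual convention assumed here:

```python
import numpy as np

def temporal_features(x, d_k=16, rng=np.random.default_rng(0)):
    """x: (T, C) sequence of windowed features. Returns F_time."""
    c = x.shape[1]
    wq, wk, wv = (rng.standard_normal((c, d_k)) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv               # Q, K, V projections
    scores = q @ k.T / np.sqrt(d_k)                # attention scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax normalization
    return weights @ v                             # F_time = AttentionWeights x V
```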
Step S4: deeply fuse the temporal and spatial features to obtain the final feature vector F:
F=[Fspatial||Ftime]
where || denotes feature concatenation. Finally, the feature vector F is input into a Softmax classifier to obtain the predicted probability P(k) of each emotion class:
P(k) = exp(θk·F) / Σj exp(θj·F)
where θk denotes the weight of the k-th emotion, K is the number of classes, and P(k) is the predicted probability of the k-th emotion. Finally, the emotion class yk with the highest probability is selected as the emotion monitoring result:
yk=arg maxk P(k)
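Fusion and classification reduce to concatenation followed by a softmax over class scores; a minimal sketch, with θ a placeholder for the trained classifier weights:

```python
import numpy as np

def classify(f_spatial, f_time, theta):
    """theta: (n_classes, dim) trained weights (placeholder here).
    Returns (probabilities P(k), predicted class index y)."""
    f = np.concatenate([f_spatial, f_time.ravel()])  # F = [F_spatial || F_time]
    logits = theta @ f
    p = np.exp(logits - logits.max())
    p /= p.sum()                                     # softmax: P(k)
    return p, int(np.argmax(p))                      # y = argmax_k P(k)
```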
The foregoing is only a preferred embodiment of the invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall fall within the scope of protection of the invention.
Claims (7)
1. An emotion monitoring method based on online independent component analysis, characterized by comprising the following steps:
S1, acquiring the user's real-time EEG signal and performing preliminary preprocessing, including filtering, bad-channel removal, and baseline removal;
S2, segmenting the preprocessed EEG signal with a sliding window, feeding the segmented data into a generative adversarial network model to generate similar artifact signals, and performing independent component analysis and artifact removal on these together with the segmented data;
S3, extracting the spatial and temporal features of the EEG signal from the resulting data using a convolutional neural network and a self-attention mechanism, respectively;
S4, deeply fusing the extracted spatial and temporal features and inputting them into a softmax classifier for emotion classification, thereby achieving online real-time emotion monitoring.
2. The emotion monitoring method based on online independent component analysis according to claim 1, characterized by:
in step S1, the preliminary preprocessing comprises the following:
(1) Filtering with a 1-45 Hz band-pass filter;
(2) Judging bad channels from the amplitude and waveform stability of the EEG signal and marking them;
(3) Selecting a segment of resting-state signal and computing the baseline signal b = (1/T)·Σ x(t) over the resting segment, where T is the duration of the resting state; the baseline value is subtracted from the subsequent EEG signal.
3. The emotion monitoring method based on online independent component analysis according to claim 1, characterized by:
in step S2, the sliding-window size is set to 1 s and the step size to 0.5 s, giving the segmented EEG signal:
X={x1,x2,...,xt}
where t denotes the t-th sliding window;
a trained generator produces artifact signals that are statistically similar to the real ones; the generated artifact signals are compared with the real EEG signal, and ICA is used to remove ocular and myoelectric artifacts from the signals, improving the quality and efficiency of artifact removal.
4. The emotion monitoring method based on online independent component analysis according to claim 1, characterized by:
in step S3, the EEG signal X={x1,x2,...,xt} processed in step S2, where xt denotes the t-th sliding window, is first arranged according to the electrode positions of the EEG acquisition device;
the segmented EEG signals are input into a generative adversarial network (GAN) model: the generator produces artifact signals as realistically as possible, the discriminator distinguishes the artifact signals produced by the generator from real EEG signals, and on this basis the generator is continuously optimized to produce artifact signals close to the real ones; the generated artifact signals and the segmented original EEG signals then undergo independent component analysis together, so that the ICA-processed signals effectively remove endogenous signals (such as electrooculogram and electromyogram signals) and other interference; the GAN-optimized independent component analysis algorithm further extracts the true EEG signal and improves robustness.
5. The emotion monitoring method based on online independent component analysis according to claim 1, characterized by:
in step S3, the one-dimensional EEG signal processed in step S2 is mapped into a two-dimensional matrix according to the electrode positions of the EEG acquisition device, with element xm(t) placed at the position of electrode m, where m denotes the m-th electrode;
spatial features of the two-dimensional signal frame Xt are extracted with a convolutional neural network, using a 3×3 convolution kernel and a 2×2 convolution kernel and no pooling operation, so as to preserve more spatial feature information:
F1=Conv(X,W1,b1)
F2=Conv(X,W2,b2)
where W denotes the convolution kernel weights, b the bias term, and F the output feature map;
temporal features are extracted from the EEG signal processed in step S2 using a self-attention mechanism, and a weighted sum yields the final temporal feature vector Ft; the EEG signal is mapped to three different matrices through different weights:
Query matrix (Q): Q=xtwq;
Key matrix (K): K=xtwk;
Value matrix (V): V=xtwv;
where wq, wk, wv are weight matrices obtained by learning;
the similarity between sliding windows, i.e., the attention score, is then computed:
Attention Score = QK^T/√dk
where dk is the dimension of the key vectors, followed by a normalization operation:
Attention Weights = softmax(Attention Score)
finally, a weighted sum over the value matrix V gives the final temporal feature vector:
Output=Attention Weights×V.
6. the emotion monitoring method based on online independent component analysis according to claim 1, characterized by:
in step S3, the first convolution kernel gives:
F1=ReLU(W1*X(t)+b1)
where * denotes the convolution operation, ReLU is the activation function, F1 is the post-convolution feature matrix, and b1 is a bias term;
the second convolution kernel gives:
F2=ReLU(W2*F1+b2)
no pooling operation is used, so the spatial features are preserved to the greatest extent; finally, the extracted spatial feature matrix F2 is flattened into a one-dimensional vector Fs.
7. The emotion monitoring method based on online independent component analysis according to claim 1, characterized by:
in step S4, the spatial and temporal features extracted by the CNN and the self-attention mechanism are Fs and Ft, respectively; the temporal feature Ft of the EEG signal is extracted by the self-attention mechanism, and the spatial and temporal features are concatenated into a new fused feature vector Ff=[Fs||Ft], where || denotes the feature concatenation operation;
the fused feature is expressed as:
Ff=concat(Fs,Ft)
finally, Ff is input into a softmax classifier for emotion classification:
yk = exp(θk·Ff) / Σj exp(θj·Ff)
where θk is the weight of the k-th emotion, k is the class index, and yk is the predicted probability of the k-th emotion; finally, the emotion with the highest predicted probability is selected as the output.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411984853.3A CN119970034B (en) | 2024-12-31 | 2024-12-31 | Emotion monitoring method based on online independent component analysis |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411984853.3A CN119970034B (en) | 2024-12-31 | 2024-12-31 | Emotion monitoring method based on online independent component analysis |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119970034A true CN119970034A (en) | 2025-05-13 |
| CN119970034B CN119970034B (en) | 2025-09-26 |
Family
ID=95643618
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411984853.3A Active CN119970034B (en) | 2024-12-31 | 2024-12-31 | Emotion monitoring method based on online independent component analysis |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119970034B (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120184869A1 (en) * | 2011-01-16 | 2012-07-19 | Ching-Tai Chiang | Electroencephalogram signal processing method |
| CN114224342A (en) * | 2021-12-06 | 2022-03-25 | 南京航空航天大学 | Multi-channel electroencephalogram emotion recognition method based on space-time fusion feature network |
| CN117520826A (en) * | 2024-01-03 | 2024-02-06 | 武汉纺织大学 | Multi-mode emotion recognition method and system based on wearable equipment |
| US20240172984A1 (en) * | 2022-11-28 | 2024-05-30 | Hangzhou Dianzi University | Electroencephalogram (eeg) emotion recognition method based on spiking convolutional neural network |
| CN118797496A (en) * | 2024-06-17 | 2024-10-18 | 重庆邮电大学 | A classification method for EEG emotion recognition based on multi-scale spatiotemporal feature extraction based on CNN and Transformer |
- 2024-12-31: application CN202411984853.3A granted as patent CN119970034B (status: Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN119970034B (en) | 2025-09-26 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |