CN112836593B - An emotion recognition method and system integrating prior and automatic EEG features
- Publication number: CN112836593B (application CN202110052255.2A)
- Authority: CN (China)
- Prior art keywords: feature, EEG, emotional, prior
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/253 — Pattern recognition; analysing; fusion techniques of extracted features
- G06N3/045 — Computing arrangements based on biological models; neural networks; combinations of networks
- G06F2218/08 — Aspects of pattern recognition specially adapted for signal processing; feature extraction
- G06F2218/12 — Aspects of pattern recognition specially adapted for signal processing; classification; matching
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses an emotion recognition method and system that fuse prior and automatic EEG features. The method first extracts differential entropy features in emotion-related frequency bands from the raw EEG signal and maps them into differential entropy matrices, which are concatenated into a spatial-frequency feature matrix that serves as the mathematically driven prior emotion feature. At the same time, a scaling convolution layer automatically extracts time-frequency information from the raw EEG signal as the data-driven automatic EEG emotion feature. The prior emotion feature and the automatic EEG emotion feature are then transformed, concatenated, and fused to extract high-order semantic features, which are finally fed into a softmax activation function to recognize and classify the emotion. The invention provides an emotion recognition method that combines prior knowledge with data-driven learning, models temporal, spatial, and frequency information simultaneously, and improves the intelligence and generality of emotion recognition.
Description
Technical Field
The invention belongs to the technical field of biological information recognition, and in particular relates to an emotion recognition method and system that fuse prior and automatic EEG features.
Background Art
How to discover and exploit biologically meaningful knowledge and regularities in brain activity data has become a difficult and active topic of theoretical and practical research in biological information recognition. Emotion recognition based on EEG signals, together with the study of its underlying brain mechanisms, has become a popular subject in neural engineering and artificial intelligence. In recent years, advances in machine learning have provided usable methods for EEG-based emotion recognition research. Hand-designed features still dominate many machine-learning applications; they rely chiefly on the designer's prior knowledge. Traditional hand-crafted features exploit prior knowledge well, but they discard other information in the raw EEG signal and cannot fully use the information the data provides, and because their design rests on strong assumptions, they transfer poorly. Deep learning can learn feature representations from data automatically, but the features it extracts are of a single type and lack prior knowledge.
Summary of the Invention
To address the defects and deficiencies of the prior art, the present invention provides an emotion recognition method and system that fuse prior and automatic EEG features. The method mitigates the lack of prior EEG emotion knowledge, the reliance on strong assumptions, and the non-robustness of the retained task-related features in EEG emotion recognition.
To achieve the above object, the present invention adopts the following technical solution:
An emotion recognition method that fuses prior and automatic EEG features: the method first extracts differential entropy features in emotion-related frequency bands from the raw EEG signal and maps them into differential entropy matrices, then concatenates the matrices of the different bands into a spatial-frequency feature matrix that serves as the mathematically driven prior emotion feature; at the same time, a scaling convolution layer automatically extracts time-frequency information from the raw EEG signal as the data-driven automatic EEG emotion feature; the mathematically driven prior emotion feature and the data-driven automatic EEG emotion feature are transformed, concatenated, and fused to extract high-order semantic features, which are finally fed into a softmax activation function to recognize and classify the emotion.
The present invention also includes the following technical features:
Specifically, the method includes the following steps:
Step 1, constructing the prior emotion feature:
Extract differential entropy features in emotion-related frequency bands from the raw EEG signal, map them into differential entropy matrices according to the spatial positions of the electrodes, and then concatenate the matrices of the different bands into a spatial-frequency feature matrix that serves as the mathematically driven prior emotion feature.
Step 2, constructing the automatic EEG emotion feature:
Feed the raw EEG signal channel by channel into independent scaling convolution layers; the outputs of all channels are concatenated into a three-dimensional tensor that serves as the data-driven automatic EEG emotion feature.
Step 3, feature fusion:
Apply feature transformations to the prior emotion feature and the automatic EEG emotion feature obtained in steps 1 and 2 to obtain, respectively, the mathematically driven prior emotion feature vector and the data-driven automatic EEG emotion feature vector; deeply fuse the two vectors and extract high-order semantic features.
Step 4, classification:
Feed the high-order semantic features extracted in step 3 into a softmax activation function to recognize and classify the emotion.
Specifically, step 1 includes the following sub-steps:
Step 1.1, use the Fourier transform to divide the raw EEG signal, channel by channel, into four frequency bands: 4-7 Hz, 8-15 Hz, 16-32 Hz, and 33-45 Hz.
Step 1.2, compute, channel by channel, the power spectral density in each of the four bands of step 1.1.
Step 1.3, apply a logarithmic nonlinear transformation to the power spectral densities of step 1.2 to obtain their differential entropy features.
Step 1.4, according to the electrode-position definition of the EEG acquisition device, assign the differential entropy feature of each EEG channel in a given band of step 1.3 to the element of a two-dimensional feature matrix closest to that electrode in Euclidean distance; all remaining matrix elements with no corresponding EEG channel are uniformly set to zero. Mapping the differential entropy features of every channel in each of the four bands yields four differential entropy feature matrices.
Step 1.5, concatenate the four band matrices mapped in step 1.4 from left to right and top to bottom to obtain the prior emotion feature.
Specifically, step 2 includes the following sub-steps:
Step 2.1, feed the raw EEG signal channel by channel into independent scaling convolution layers; each channel yields a two-dimensional time-frequency-like map carrying time-frequency information.
Step 2.2, stack all the time-frequency-like maps along the channel dimension to obtain a three-dimensional tensor that serves as the data-driven automatic EEG emotion feature.
Specifically, step 3 includes the following sub-steps:
Step 3.1, apply independent feature transformations, each consisting of three convolutional neural network layers and one max-pooling layer, to the prior emotion feature and to the data-driven automatic EEG emotion feature, obtaining the prior emotion feature vector and the EEG emotion feature vector.
Step 3.2, concatenate the prior emotion feature vector and the EEG emotion feature vector along the column direction to obtain a combined emotion feature vector carrying the temporal, spatial, and frequency information of the EEG.
Step 3.3, input the emotion feature vector obtained in step 3.2 into a fully connected neural network layer to extract high-order semantic features that fuse the prior and automatic EEG emotion features.
Specifically, step 4 classifies the extracted high-order semantic features through a linear neural network layer and a softmax transfer function, thereby recognizing the person's emotional state.
An emotion recognition system that fuses prior and automatic EEG features, comprising:
a prior emotion feature construction module, configured to extract differential entropy features in emotion-related frequency bands from the raw EEG signal, map them into differential entropy matrices according to the spatial positions of the electrodes, and concatenate the matrices of the different bands into a spatial-frequency feature matrix that serves as the mathematically driven prior emotion feature;
an automatic EEG emotion feature construction module, configured to feed the raw EEG signal channel by channel into independent scaling convolution layers and concatenate the outputs of all channels into a three-dimensional tensor that serves as the data-driven automatic EEG emotion feature;
a feature fusion module, configured to apply feature transformations to the prior emotion feature and the automatic EEG emotion feature to obtain, respectively, the mathematically driven prior emotion feature vector and the data-driven automatic EEG emotion feature vector, to deeply fuse the two vectors, and to extract high-order semantic features; and
a classification module, configured to feed the extracted high-order semantic features into a softmax activation function to recognize and classify the emotion.
Specifically, the prior emotion feature construction module first uses the Fourier transform to divide the raw EEG signal, channel by channel, into four frequency bands (4-7 Hz, 8-15 Hz, 16-32 Hz, and 33-45 Hz); then computes, channel by channel, the power spectral density in each band; applies a logarithmic nonlinear transformation to the power spectral densities to obtain their differential entropy features; according to the electrode-position definition of the EEG acquisition device, assigns the differential entropy feature of each channel in a given band to the element of a two-dimensional feature matrix closest to that electrode in Euclidean distance, uniformly setting all elements with no corresponding channel to zero, so that mapping every channel in each of the four bands yields four differential entropy feature matrices; and finally concatenates the four band matrices from left to right and top to bottom to obtain the prior emotion feature.
Specifically, the automatic EEG emotion feature construction module feeds the raw EEG signal channel by channel into independent scaling convolution layers, each channel yielding a two-dimensional time-frequency-like map; all the maps are stacked along the channel dimension to obtain a three-dimensional tensor that serves as the data-driven automatic EEG emotion feature.
Specifically, the feature fusion module applies independent feature transformations, each consisting of three convolutional neural network layers and one max-pooling layer, to the prior emotion feature and to the data-driven automatic EEG emotion feature, obtaining the prior emotion feature vector and the EEG emotion feature vector; concatenates the two vectors along the column direction to obtain a combined emotion feature vector carrying the temporal, spatial, and frequency information of the EEG; and inputs the combined vector into a fully connected neural network layer to extract high-order semantic features that fuse the prior and automatic EEG emotion features.
The classification module classifies the extracted high-order semantic features through a linear neural network layer and a softmax transfer function, thereby recognizing the person's emotional state.
Compared with the prior art, the present invention has the following beneficial technical effects:
(I) Compared with existing hand-crafted feature extraction methods, the EEG emotion spatial-frequency feature matrix constructed by fusing the spatial positions and the frequency information of the EEG channels not only contains the spatial- and frequency-domain information of the raw EEG signal but also embodies prior knowledge about EEG and emotion.
(II) By effectively fusing prior and data-driven features relevant to EEG emotion recognition, the invention provides an EEG emotion recognition method that combines mathematically driven and data-driven features, alleviating the problems of a single feature type, lack of prior EEG emotion knowledge, strong assumptions, and non-robust retained task-related features.
(III) The EEG emotion recognition method incorporates the temporal, spatial, and frequency information of the raw EEG signal, so the different sources of information complement one another.
(IV) Through a multi-input neural network, the invention remedies the loss of feature information and the lack of prior EEG emotion knowledge in EEG emotion recognition, and proposes an emotion recognition method that does not rely on strong assumptions and is highly portable.
Description of the Drawings
Figure 1 is a schematic diagram of the construction of the prior spatial-frequency feature matrix.
Figure 2 is a schematic diagram of the EEG emotion recognition network of the present invention that fuses prior and automatic EEG features.
Figure 3 is a flow chart of the detailed steps of the EEG emotion recognition method of the present invention.
Detailed Description of the Embodiments
Specific embodiments of the present invention are described in detail below. It should be understood that the embodiments described here only illustrate and explain the present invention and are not intended to limit it.
As shown in Figures 1, 2, and 3, the present invention provides an emotion recognition method and system that fuse prior and automatic EEG features. The method first extracts differential entropy features in emotion-related frequency bands from the raw EEG signal and maps them into differential entropy matrices, then concatenates the matrices of the different bands into a spatial-frequency feature matrix that serves as the mathematically driven prior emotion feature; at the same time, a scaling convolution layer automatically extracts time-frequency information from the raw EEG signal as the data-driven automatic EEG emotion feature; next, the mathematically driven prior emotion feature and the data-driven automatic EEG emotion feature are transformed, concatenated, and fused to extract high-order semantic features, which are finally fed into a softmax activation function to recognize and classify the emotion.
In the present invention, a channel refers to one of the electrodes used to acquire the EEG signal; EEG acquisition generally uses multiple electrodes (i.e., channels) to collect the signal.
The channel dimension refers to one of the three dimensions of the automatic EEG emotion feature tensor: the number of channels constitutes one dimension of the three-dimensional tensor.
The softmax activation function, also called the normalized exponential function, converts predictions ranging over negative to positive infinity into non-negative values that sum to 1.
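This property can be stated precisely. For a vector of raw scores $z = (z_1, \dots, z_K)$, the softmax (normalized exponential) function is

$$\operatorname{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}},$$

so every output lies in $(0, 1)$ and the outputs sum to 1, which allows them to be read as class probabilities.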
A linear neural network layer is a neural network layer composed of linear neurons whose outputs can take any value.
The method comprises the following steps:
Step 1, constructing the prior emotion feature:
First, extract differential entropy features in emotion-related frequency bands from the raw EEG signal, map them into differential entropy matrices according to the spatial positions of the electrodes, and then concatenate the matrices of the different bands into a spatial-frequency feature matrix that serves as the mathematically driven prior emotion feature.
Step 2, constructing the automatic EEG emotion feature:
Feed the raw EEG signal channel by channel into independent scaling convolution layers; the outputs of all channels are concatenated into a three-dimensional tensor that serves as the data-driven automatic EEG emotion feature.
Step 3, feature fusion:
Apply feature transformations to the prior emotion feature and the automatic EEG emotion feature obtained in steps 1 and 2 to obtain, respectively, the mathematically driven prior emotion feature vector and the data-driven EEG emotion feature vector; deeply fuse the two vectors and extract high-order semantic features.
Step 4, classification:
Feed the high-order semantic features extracted in step 3 into a softmax activation function to recognize and classify the emotion.
In this embodiment, step 1 constructs the prior emotion feature as shown in Figure 1; the specific sub-steps are as follows:
Step 1.1, use the Fourier transform to divide the raw EEG signal, channel by channel, into four frequency bands: 4-7 Hz, 8-15 Hz, 16-32 Hz, and 33-45 Hz.
Step 1.2, compute, channel by channel, the power spectral density in each of the four bands of step 1.1.
Step 1.3, apply a logarithmic nonlinear transformation to the power spectral densities of step 1.2 to obtain their differential entropy features.
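The logarithmic transform of step 1.3 can be motivated by the standard Gaussian model used in the differential entropy literature (the Gaussian assumption is conventional and is not stated in the patent itself): if a band-limited EEG segment in band $i$ is modeled as $X \sim \mathcal{N}(\mu, \sigma_i^2)$, its differential entropy is

$$h_i = \frac{1}{2}\log\!\left(2\pi e\,\sigma_i^2\right),$$

and since the band power obtained by integrating the power spectral density over band $i$ estimates $\sigma_i^2$, taking the logarithm of the PSD yields the differential entropy up to an additive constant.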
Step 1.4, according to the electrode-position definition of the EEG acquisition device, assign the differential entropy feature of each EEG channel in a given band of step 1.3 to the element of a two-dimensional feature matrix closest to that electrode in Euclidean distance; all remaining matrix elements with no corresponding EEG channel are uniformly set to zero. Mapping the differential entropy features of every channel in each of the four bands yields four differential entropy feature matrices.
Step 1.5, concatenate the four band matrices mapped in step 1.4 from left to right and top to bottom to obtain the prior emotion feature (i.e., the spatial-frequency feature matrix in Figure 1).
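Steps 1.1-1.5 can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the 9x9 grid size, the electrode-to-cell assignment (passed in as precomputed `positions` rather than derived from Euclidean distances), and the 2x2 tiling of the four band matrices are assumptions made for the example.

```python
import numpy as np

def differential_entropy(band_signal):
    # Gaussian-model differential entropy: 0.5 * log(2*pi*e*variance)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(band_signal))

def build_prior_feature(eeg, fs, positions, grid_shape=(9, 9)):
    """eeg: (channels, samples); positions: (row, col) grid cell per channel.
    Returns the concatenated spatial-frequency feature matrix of step 1.5."""
    bands = [(4, 7), (8, 15), (16, 32), (33, 45)]   # step 1.1 frequency bands
    n_ch, n = eeg.shape
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(eeg, axis=1)
    band_matrices = []
    for lo, hi in bands:
        m = np.zeros(grid_shape)                    # unmapped cells stay zero (step 1.4)
        mask = (freqs >= lo) & (freqs <= hi)
        for ch in range(n_ch):
            # band-limit the channel via FFT masking (steps 1.1-1.2 combined)
            band = np.fft.irfft(spectrum[ch] * mask, n=n)
            r, c = positions[ch]                    # nearest grid cell for this electrode
            m[r, c] = differential_entropy(band)    # step 1.3 log transform
        band_matrices.append(m)
    # step 1.5: tile the four band matrices left-to-right, top-to-bottom
    return np.vstack([np.hstack(band_matrices[:2]), np.hstack(band_matrices[2:])])
```

With a 9x9 grid, the prior feature is an 18x18 matrix regardless of how many electrodes the device has; cells with no electrode remain zero, as step 1.4 requires.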
Step 2 constructs the automatic EEG emotion feature; the specific sub-steps are as follows:
Step 2.1, feed the raw EEG signal channel by channel into independent scaling convolution layers; each channel yields a two-dimensional time-frequency-like map carrying time-frequency information.
Step 2.2, stack all the time-frequency-like maps along the channel dimension to obtain a three-dimensional tensor that serves as the data-driven automatic EEG emotion feature (the automatic EEG time-frequency feature tensor).
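Steps 2.1-2.2 can be illustrated with the following NumPy sketch. Note that the patent's scaling convolution layer is a learned network layer; here it is stood in for by a bank of fixed 1-D kernels at several scales (the kernel choice, the number of scales, and all function names are assumptions for illustration), so the sketch only shows the data flow and the resulting tensor shape.

```python
import numpy as np

def scaled_conv_maps(signal, kernels):
    """One single-channel signal in, one 2-D 'time-frequency-like' map out:
    each row is the signal convolved with a kernel of a different scale (step 2.1)."""
    return np.stack([np.convolve(signal, k, mode="same") for k in kernels])

def automatic_feature(eeg, kernel_banks):
    """eeg: (channels, samples); kernel_banks: one independent kernel list per channel.
    Stacking the per-channel maps along the channel dimension gives the
    three-dimensional automatic EEG emotion feature tensor (step 2.2)."""
    maps = [scaled_conv_maps(eeg[ch], kernel_banks[ch]) for ch in range(eeg.shape[0])]
    return np.stack(maps)          # shape: (channels, scales, samples)
```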
Step 3 performs feature fusion as shown in Figure 2; the specific sub-steps are:
Step 3.1, apply independent feature transformations, each consisting of three convolutional neural network layers and one max-pooling layer, to the prior emotion feature and to the data-driven automatic EEG emotion feature, obtaining the prior emotion feature vector and the EEG emotion feature vector.
Step 3.2, concatenate the prior emotion feature vector and the EEG emotion feature vector along the column direction to obtain a combined emotion feature vector carrying the temporal, spatial, and frequency information of the EEG.
Step 3.3, input the emotion feature vector obtained in step 3.2 into a fully connected neural network layer to extract high-order semantic features that fuse the prior and automatic EEG emotion features.
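Steps 3.2-3.3 reduce to a concatenation followed by one fully connected layer. The sketch below assumes the two feature vectors have already been produced by the convolution-and-pooling transforms of step 3.1; the vector dimensions, the ReLU nonlinearity, and the weight values are illustrative assumptions not specified by the patent.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def fuse_features(prior_vec, auto_vec, w_fc, b_fc):
    """Step 3.2: concatenate the prior and EEG emotion feature vectors;
    step 3.3: one fully connected layer extracts the high-order semantic features."""
    combined = np.concatenate([prior_vec, auto_vec])   # column-wise concatenation
    return relu(w_fc @ combined + b_fc)
```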
In step 4, the emotion is classified: the extracted high-order semantic features are classified through a linear neural network layer and a softmax transfer function, thereby recognizing the person's emotional state. More specifically, the extracted high-order semantic features are fed into a linear neural network layer (a layer with many neurons, each using the softmax activation function), and the emotion class with the larger predicted probability is output as the final recognition result (i.e., positive is output if the positive probability exceeds the negative probability, and negative is output otherwise).
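The classification head described above can be sketched as a linear layer followed by softmax. The two-class positive/negative labels follow the example in the text; the weight and feature values below are placeholders, not trained parameters.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)              # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_emotion(features, w, b, labels=("positive", "negative")):
    """Linear layer + softmax (step 4): output the label with the larger probability."""
    probs = softmax(w @ features + b)
    return labels[int(np.argmax(probs))], probs
```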
一种融合先验与自动脑电特征的情绪识别系统,包括:An emotion recognition system that integrates prior and automatic EEG features, including:
先验情绪特征构建模块,用以从原始脑电信号中提取与情绪相关的不同频率的差分熵特征,依据电极空间位置将其映射为差分熵矩阵,然后将不同频带的差分熵矩阵拼接为空频特征矩阵,将其作为数学驱动的先验情绪特征;A priori emotional feature building block, which is used to extract the differential entropy features of different frequencies related to emotion from the original EEG signal, map it into a differential entropy matrix according to the spatial position of the electrode, and then stitch the differential entropy matrix of different frequency bands into empty frequency feature matrix as a mathematically driven prior emotion feature;
自动脑电情绪特征构建模块,用以将原始的脑电信号逐通道送入独立的缩放卷积层,所有通道的输出结果拼接为一个三维张量作为以数据驱动的自动脑电情绪特征;The automatic EEG emotional feature building module is used to send the original EEG signal into an independent scaling convolution layer channel by channel, and the output results of all channels are spliced into a three-dimensional tensor as a data-driven automatic EEG emotional feature;
a feature fusion module, which applies feature transformations to the prior emotional features and the automatic EEG emotional features to obtain a mathematically driven prior emotional feature vector and a data-driven automatic EEG emotional feature vector, respectively, and then deeply fuses the two vectors to extract high-order semantic features;
a classification module, which feeds the extracted high-order semantic features into a softmax activation function to recognize and classify emotions.
Specifically, in the prior emotional feature construction module, the raw EEG signal is first divided channel by channel, via the Fourier transform, into four frequency bands: 4-7 Hz, 8-15 Hz, 16-32 Hz, and 33-45 Hz. The power spectral density of each band is then computed channel by channel, and a logarithmic nonlinear transformation of the power spectral density yields the differential entropy features. According to the electrode-position definition of the EEG acquisition device, the differential entropy feature of each EEG channel in a given band is assigned to the element of a two-dimensional feature matrix closest in Euclidean distance to that electrode; all matrix elements with no corresponding EEG channel are uniformly set to zero. Mapping the differential entropy features of every channel in this way produces one differential entropy feature matrix per band, and the four per-band matrices are concatenated, from left to right and top to bottom, to obtain the prior emotional features.
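The band-wise differential entropy computation can be sketched as below. This is a hedged illustration: the patent specifies a Fourier-based band split and a log transform of the PSD, while the windowing parameters, the Welch estimator, and the sampling rate here are my own assumptions.

```python
import numpy as np
from scipy.signal import welch

# The four emotion-related bands named in the patent (Hz).
BANDS = {"4-7": (4, 7), "8-15": (8, 15), "16-32": (16, 32), "33-45": (33, 45)}

def differential_entropy_features(eeg, fs=128):
    """eeg: array of shape (n_channels, n_samples).
    Returns a dict mapping band name -> (n_channels,) log-PSD values,
    which serve as the differential entropy features (up to a constant)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(256, eeg.shape[-1]), axis=-1)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        band_power = psd[:, mask].mean(axis=-1)
        feats[name] = np.log(band_power + 1e-12)  # log transform of PSD
    return feats
```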
Specifically, in the automatic EEG emotional feature construction module, the raw EEG signal is fed channel by channel into independent scaling convolution layers, each channel yielding a two-dimensional time-frequency-like map carrying time-frequency information; all such maps are stacked along the channel dimension to obtain a three-dimensional tensor serving as the data-driven automatic EEG emotional feature.
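The per-channel scaling convolution can be illustrated as follows. In the patent the scaling convolution layers are learned; this sketch substitutes a fixed base kernel stretched across scales (a crude wavelet-like transform) purely to show the shapes involved, so the kernel and scale scheme are assumptions.

```python
import numpy as np

def scaling_conv(signal, base_kernel, n_scales=8):
    """Convolve one EEG channel with progressively stretched copies of a
    base kernel, yielding a 2-D (scale x time) map, analogous to the
    time-frequency-like map produced by a scaling convolution layer."""
    rows = []
    for s in range(1, n_scales + 1):
        kernel = np.repeat(base_kernel, s)        # stretch by element repetition
        kernel = kernel / np.linalg.norm(kernel)  # keep energy comparable across scales
        rows.append(np.convolve(signal, kernel, mode="same"))
    return np.stack(rows)                         # (n_scales, n_samples)

def auto_features(eeg, base_kernel, n_scales=8):
    """Stack per-channel maps into a 3-D tensor (channels x scales x time)."""
    return np.stack([scaling_conv(ch, base_kernel, n_scales) for ch in eeg])
```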
Specifically, in the feature fusion module, independent stacks of three convolutional neural network layers followed by one max-pooling layer transform the prior emotional features and the data-driven automatic EEG emotional features into a prior emotional feature vector and an EEG emotional feature vector, respectively; the two vectors are concatenated along the column direction to obtain a combined emotional feature vector carrying spatial, temporal, and frequency-domain EEG information; the resulting vector is fed into a fully connected neural network layer to extract high-order semantic features that fuse the prior and automatic EEG emotional features.
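The fusion pipeline can be sketched with the following toy stand-in. Note the hedging: the patent uses three convolutional layers per branch, whereas this sketch uses three dense transforms with ReLU plus a max pool only to demonstrate the transform-concatenate-fully-connected flow; all weight shapes are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def branch(x, weights):
    """Stand-in for one branch's three conv layers + max pooling:
    three dense transforms with ReLU, then a max pool to a vector."""
    for W in weights:
        x = relu(x @ W)
    return x.max(axis=0)  # max pooling over the spatial axis

def fuse(prior_feat, auto_feat, prior_ws, auto_ws, W_fc):
    v_prior = branch(prior_feat, prior_ws)         # prior emotional feature vector
    v_auto = branch(auto_feat, auto_ws)            # automatic EEG feature vector
    combined = np.concatenate([v_prior, v_auto])   # column-wise concatenation
    return relu(combined @ W_fc)                   # fully connected layer -> high-order semantics
```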
The classification module classifies the extracted high-order semantic features through a linear neural network layer and a softmax transfer function, thereby recognizing the person's emotional state.
Example 1
This example validates the method on the DEAP EEG emotion dataset. A segment of raw 32-channel EEG signal is input, and the method proceeds through the following steps:
Step 1: construct the prior emotional features.
First, emotion-related differential entropy features at different frequencies are extracted from the 32-channel raw EEG signal and mapped into 9×9 differential entropy matrices according to the electrodes' spatial positions; the per-band differential entropy matrices are then concatenated into an 18×18 space-frequency feature matrix.
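Tiling four 9×9 per-band matrices into the 18×18 space-frequency matrix, left to right and then top to bottom, is a simple 2×2 block arrangement, sketched below (the helper name is mine, not the patent's):

```python
import numpy as np

def tile_bands(band_mats):
    """Arrange four 9x9 per-band differential entropy matrices into one
    18x18 space-frequency matrix: left-to-right, then top-to-bottom."""
    assert len(band_mats) == 4
    top = np.hstack(band_mats[:2])      # bands 1 and 2 side by side
    bottom = np.hstack(band_mats[2:])   # bands 3 and 4 side by side
    return np.vstack([top, bottom])     # (18, 18)
```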
Step 2: construct the automatic EEG emotional features.
The 32-channel raw EEG signal is fed channel by channel into 32 independent scaling convolution layers, and the outputs of all channels are concatenated into a three-dimensional tensor serving as the data-driven automatic EEG emotional feature.
Step 3: feature fusion.
The prior emotional features and automatic EEG emotional features obtained in steps 1 and 2 are transformed into a mathematically driven prior emotional feature vector and a data-driven EEG emotional feature vector, respectively; the two vectors are deeply fused to extract high-order semantic features.
Step 4: classification.
The high-order semantic features extracted in step 3 are fed into a softmax activation function to recognize and classify the emotion.
The final experimental results on the DEAP EEG emotion dataset are shown in Table 1. The emotion recognition method fusing prior and automatic EEG features achieves accuracies of 71.33%, 71.25%, and 71.10% on the arousal, valence, and dominance dimensions, respectively, outperforming the use of either the prior emotional features or the automatic EEG emotional features alone.
Table 1: Emotion recognition results fusing prior and automatic EEG features
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110052255.2A CN112836593B (en) | 2021-01-15 | 2021-01-15 | An emotion recognition method and system integrating prior and automatic EEG features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112836593A CN112836593A (en) | 2021-05-25 |
CN112836593B true CN112836593B (en) | 2023-06-20 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114224342B (en) * | 2021-12-06 | 2023-12-15 | 南京航空航天大学 | A multi-channel EEG signal emotion recognition method based on spatiotemporal fusion feature network |
CN114298216A (en) * | 2021-12-27 | 2022-04-08 | 杭州电子科技大学 | Electroencephalogram vision classification method based on time-frequency domain fusion Transformer |
CN119989160B (en) * | 2025-04-16 | 2025-07-04 | 西北大学 | EEG emotion recognition method and system based on multi-branch feature fusion |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190128978A (en) * | 2018-05-09 | 2019-11-19 | 한국과학기술원 | Method for estimating human emotions using deep psychological affect network and system therefor |
CN112200016A (en) * | 2020-09-17 | 2021-01-08 | 东北林业大学 | Electroencephalogram signal emotion recognition based on ensemble learning method AdaBoost |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110897648A (en) * | 2019-12-16 | 2020-03-24 | 南京医科大学 | Emotion recognition classification method based on electroencephalogram signal and LSTM neural network model |
CN110946576A (en) * | 2019-12-31 | 2020-04-03 | 西安科技大学 | A Breadth Learning-Based Visual Evoked Potential Recognition Approach to Emotions |
Non-Patent Citations (2)
Title |
---|
Fusing highly dimensional energy and connectivity features to identify affective states from EEG signals; Pablo Arnau-González; Neurocomputing; full text *
Research progress in applications of deep learning to EEG emotion recognition; Yin Wang, Li Huiyuan; Computer Era, no. 8; full text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||