CN111616721A - Emotion recognition system and application based on deep learning and brain-computer interface - Google Patents
- Publication number: CN111616721A
- Application number: CN202010481477.1A
- Authority: CN (China)
- Prior art keywords: emotional, eeg, brain, layer, emotion
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A61B5/165 — Evaluating the state of mind, e.g. depression, anxiety
- A61B5/7235 — Details of waveform analysis
- A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267 — Classification of physiological signals or data involving training the classification device
- A61B5/7271 — Specific aspects of physiological measurement analysis
Description
Technical Field
The present invention relates to an emotion recognition system, and in particular to an emotion recognition system and its application based on deep learning and a brain-computer interface.
Background
Emotion is a person's reaction to external stimuli and is a broad and complex physiological and psychological state. Human emotional states are numerous; the psychologist Ekman roughly classified them into six kinds: happiness, anger, surprise, fear, disgust, and sadness. In fact there are many other emotional states, such as shame, arrogance, disappointment, and anxiety. A person's emotional state often reflects his or her attitude toward and view of external things. Emotion recognition can help people improve the safety of equipment use and analyze the emotional factors underlying daily behavior. For example, in rehabilitation medicine it can help doctors diagnose and prevent problems such as depression and post-traumatic stress disorder; likewise, a patient's emotional responses make it easier for medical staff to provide better care that assists recovery. In transportation, emotion detection can be used to analyze a driver's mental state, such as fatigue, alertness, and nervousness or anxiety, to ensure driving safety. In education, students' emotional responses to different subjects can be used to develop their strengths and provide more targeted tutoring; and when students' mental load is heavy and they show fatigue, the teacher can be prompted to liven up the class or take a short break to keep the class efficient. Accurate recognition of emotions can improve our quality of life in many ways.
Human emotions are reflected in many ways, such as facial expressions, voice intonation, gestures, and eye movements. At present, emotion recognition can draw on modalities including expression, speech, movement, and physiological signals. Among these, physiological signals, especially electroencephalogram (EEG) signals, are not easily affected by deliberate human factors. Because of this stability, EEG-based emotion classification has received extensive attention. In practical applications, however, many challenges remain in using EEG signals for emotion analysis. First, EEG acquisition is relatively difficult: obtaining stable EEG signals places requirements on the acquisition equipment, the subject, and the external environment. Although implantable and semi-implantable EEG electrodes yield more stable signals, their operation is complicated and their feasibility for daily use is low. Non-implantable EEG electrodes are currently standard in the industry; while portable and simple to operate, they are easily affected by external noise. Second, EEG signals are weak and embedded in strong background noise, so advanced preprocessing techniques are needed to obtain real, valid signals. Finally, classification requires processing techniques that extract features from large numbers of signal samples to establish the correspondence between signals and emotional states, a process that places high demands on the accuracy of the processing methods.
Recently, complex network theory, as a multidisciplinary approach, has provided a new avenue for analyzing complex systems. In particular, multilayer networks have advantages in analyzing spatiotemporal datasets because of their multi-scale characteristics. The human brain is a complex system, and complex networks are undoubtedly a powerful tool for analyzing the interaction patterns of brain signals across multiple spatial regions and scales. Methods that build brain networks from EEG signals have accordingly attracted increasing attention: the EEG electrodes are taken as nodes, and edges between nodes are determined by various correlation measures to establish the brain network. Studying the properties of such brain networks deepens the understanding of EEG signal characteristics.
With improvements in computer processing speed and computing power, it has become possible to apply deep learning to large data samples. As a cutting-edge technology in artificial intelligence, deep learning is an end-to-end method that automatically learns and extracts abstract features of its input samples without manual feature engineering, greatly improving processing efficiency. EEG data are large in volume, rich in features, and fast-changing, so deep learning methods have clear advantages for EEG analysis. Among them, the convolutional neural network (CNN) features weight sharing and local receptive fields, which effectively reduce the number and complexity of model parameters and make models easier to train. A CNN is also closer to an actual biological neural network and is invariant to transformations such as translation, rotation, and scaling, allowing it to learn the relevant features from large numbers of samples. DenseNet, a deep learning model based on convolutional neural networks, is parameter-efficient, computationally efficient, and somewhat resistant to overfitting, giving it great potential for analyzing large data samples.
Summary of the Invention
The technical problem to be solved by the present invention is to provide an emotion recognition system and application based on deep learning and a brain-computer interface that achieve accurate acquisition, effective identification, and correct classification of emotional EEG signals.
The technical scheme adopted by the present invention is an emotion recognition system based on deep learning and a brain-computer interface, comprising a portable EEG acquisition device, an emotional EEG classification module, and an emotion classification display module connected in sequence. The portable EEG acquisition device collects emotional EEG signals from the subject's brain; the emotional EEG classification module analyzes the emotional EEG signals and establishes a relationship model between the signals and emotions, through which input emotional EEG signals are classified; and the emotion classification display module displays the recognized emotion type, indicating the subject's emotional state.
An application of the emotion recognition system based on deep learning and a brain-computer interface comprises the following steps:
1) The user watches different emotional videos and, under this video stimulation, produces three different emotions: positive, neutral, and negative.
2) The portable EEG acquisition device collects the user's emotional EEG signals under the three different emotions, and an emotion classification deep learning model is trained from the emotional EEG signals and the corresponding emotion labels.
3) The portable EEG acquisition device collects new emotional EEG signals from the user to be identified, which are input into the dual-input deep learning model for emotion classification and detection, and the new emotional EEG signals are classified.
4) The current emotion type is shown on a display, indicating the user's emotional state.
The emotion recognition system and application based on deep learning and a brain-computer interface of the present invention achieve accurate acquisition, effective identification, and correct classification of emotional EEG signals, intuitively indicate the emotional state, and thereby implement emotional-state monitoring. The invention directly establishes the correspondence between EEG signals and emotion types, realizing feedback on emotion.
Brief Description of the Drawings
Fig. 1 is a block diagram of the emotion recognition system based on deep learning and a brain-computer interface of the present invention;
Fig. 2 is a block diagram of the portable EEG acquisition device of the present invention;
Fig. 3 is a flowchart of constructing the multilayer brain complex network in the present invention;
Fig. 4 is a schematic diagram of the structure of the dual-input deep learning model in the present invention;
Fig. 5 is a schematic diagram of an application of the emotion recognition system based on deep learning and a brain-computer interface of the present invention.
Detailed Description of Embodiments
The emotion recognition system and application based on deep learning and a brain-computer interface of the present invention are described in detail below with reference to the embodiments and the accompanying drawings.
As shown in Fig. 1, the emotion recognition system based on deep learning and a brain-computer interface of the present invention comprises a portable EEG acquisition device 1, an emotional EEG classification module 2, and an emotion classification display module 3 connected in sequence. The portable EEG acquisition device 1 collects emotional EEG signals from the subject's brain; the emotional EEG classification module 2 analyzes the emotional EEG signals and establishes a relationship model between the signals and emotions, through which input emotional EEG signals are classified; and the emotion classification display module 3 displays the recognized emotion type, indicating the subject's emotional state.
As shown in Fig. 2, the portable EEG acquisition device 1 of the present invention comprises, connected in sequence: an EEG electrode cap and its adapter cable 11 for collecting emotional EEG signals; a bioelectric signal acquisition module 12 for amplifying and converting the emotional EEG signals; an FPGA processor 13 for controlling the acquisition of the emotional EEG signals and transmitting them to the emotional EEG classification module 2 through a USB communication circuit 14; and a system power supply circuit 15 connected to both the bioelectric signal acquisition module 12 and the FPGA processor 13, wherein:
the electrode cap of the EEG electrode cap and its adapter cable 11 collects emotional EEG signals from different brain regions and is connected to the bioelectric signal acquisition module 12 through the adapter cable and a DSUB37 interface for the acquisition and transmission of bioelectric signals;
the bioelectric signal acquisition module 12 is composed of several bioelectric acquisition chips, each integrating a high common-mode-rejection-ratio analog input block for receiving the voltage signals picked up by the electrode cap, a low-noise programmable-gain amplifier (PGA) for amplifying the brain voltage signals, and a high-resolution simultaneous-sampling analog-to-digital converter (ADC) for converting the analog signals to digital;
the FPGA processor 13 configures the acquisition mode and parameters of the bioelectric signal acquisition module 12 and controls the USB communication circuit 14 to transmit the emotional EEG data to the emotional EEG classification module 2;
the output of the USB communication circuit 14 is connected to the input of the emotional EEG classification module 2; the USB communication circuit 14 works in asynchronous FIFO mode with a maximum transfer rate of 8 MB/s and, under the control of the FPGA processor 13, periodically sends the collected emotional EEG signals to the emotional EEG classification module 2 in the form of data packets;
the system power supply circuit 15 has an input voltage of 5 V, is powered through the USB interface, and provides the operating voltages of the system's different chips through a voltage conversion module.
The portable EEG acquisition device 1 collects the subject's emotional EEG signals from 62 electrodes of the electrode cap: FP1, FPZ, FP2, AF3, AF4, F7, F5, F3, F1, FZ, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, FT8, T7, C5, C3, C1, CZ, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, TP8, P7, P5, P3, P1, PZ, P2, P4, P6, P8, PO7, PO5, PO3, POZ, PO4, PO6, PO8, CB1, O1, OZ, O2, and CB2. The electrode layout conforms to the international 10/20 lead standard. Specifically, video clips are used as the emotional stimulus source, and the emotional EEG signals produced by the subject after watching the videos are collected in the corresponding emotional states, namely positive, negative, and neutral. The acquisition process is:
(1) The subject watches video clips corresponding to the three different emotions, and emotional EEG signals are produced during the video stimulation;
(2) Each time a video is watched, the portable EEG acquisition device records the emotional EEG signals and assigns each recording a label according to the stimulus video used; the labels and the emotional EEG signals together constitute the sample set.
The operation of the emotional EEG classification module 2 of the present invention specifically comprises the following steps:
1) Use the portable EEG acquisition device to obtain emotional EEG signals in the three emotional states, X_c = {X_{c,1}, X_{c,2}, …, X_{c,L}}, c = 1, 2, …, 62, forming the sample set, where c is the electrode index, L is the length of the acquired signal, and X_{c,g} is the value of the g-th data point sampled at the c-th electrode.
2) Divide the emotional EEG signals X_c with a sliding window to obtain emotional EEG signal segments; apply the continuous wavelet transform to each segment to divide it into five frequency bands, δ (1–3 Hz), θ (4–7 Hz), α (8–12 Hz), β (13–30 Hz), and γ (31–50 Hz); construct a brain complex network A_f, f = δ, θ, α, β, γ, within each band; and finally obtain a multilayer brain complex network. The construction of the multilayer brain complex network is shown in Fig. 3 and specifically comprises:
(2.1) Segment the collected emotional EEG signals X_c with a non-overlapping sliding window of length l and sliding step b, obtaining a series of window segments, where the j-th window of channel c is X_c^j = {X^j_{c,1}, …, X^j_{c,l}} and X^j_{c,g1} denotes the g1-th data point in the j-th window of channel c;
(2.2) For the data segment of each sliding window, use the continuous wavelet transform to divide each emotional EEG segment into the five frequency bands, obtaining the band-limited emotional EEG signals X_c^{j,f}, f = δ, θ, α, β, γ;
(2.3) For the emotional EEG signals within one frequency band, set each channel as a node and infer the edges between nodes by the Pearson correlation coefficient (PCC) to establish the brain complex network, including:
(2.3.1) Calculation of the inter-channel Pearson correlation coefficient. The Pearson correlation coefficient measures the similarity between two variable sequences and takes values from −1 to 1. The coefficient between two channels X and Y is

ρ_{X,Y} = cov(X, Y) / (σ_X σ_Y),

where cov(X, Y) is the covariance of the two channel signals and σ_X and σ_Y are the standard deviations of the emotional EEG signals of channels X and Y, respectively. Computing the Pearson correlation coefficient between every pair of channels yields the inter-channel adjacency matrix;
(2.3.2) Rank the inter-channel Pearson correlation values by weight and remove the 50% of edges with smaller weights, obtaining the brain complex network in that frequency band;
(2.4) Repeat step (2.3) in each of the five frequency bands to obtain the brain complex networks A_δ, A_θ, A_α, A_β, A_γ corresponding to the bands δ, θ, α, β, γ, which together constitute the multilayer brain complex network.
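The network construction of steps (2.3.1)–(2.3.2) can be sketched for a single band and a single sliding window as follows. This is a minimal NumPy illustration under the assumption that the CWT band filtering of step (2.2) has already been applied upstream; the function and parameter names (`band_network`, `keep_fraction`) are illustrative and not taken from the text.

```python
import numpy as np

def band_network(band_signals, keep_fraction=0.5):
    """Build one layer of the brain network from band-filtered EEG.

    band_signals: array of shape (n_channels, n_samples) for one
    frequency band and one sliding window.  The 50% edge-pruning
    threshold follows step (2.3.2); ranking by absolute correlation
    is an interpretive choice, since PCC values can be negative.
    """
    n = band_signals.shape[0]
    # Pairwise Pearson correlation between channels (step 2.3.1).
    adj = np.corrcoef(band_signals)
    np.fill_diagonal(adj, 0.0)
    # Keep only the 50% of edges with the largest weights (step 2.3.2).
    iu = np.triu_indices(n, k=1)
    threshold = np.quantile(np.abs(adj[iu]), 1.0 - keep_fraction)
    return np.where(np.abs(adj) >= threshold, adj, 0.0)
```

Running this once per band f = δ, θ, α, β, γ yields the five layers A_f of the multilayer network.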
3) Select the 30 nodes that differ most between the brain complex networks of the different emotion classes as key nodes, and reconstruct the multilayer brain complex network from the key nodes, as follows:
(3.1) Compute the clustering coefficient of each network node, C(i) = 2e(i) / (k_i(k_i − 1)), where k_i is the number of nodes adjacent to node i and e(i) is the number of edges actually present among the neighbors of node i;
(3.2) For each node i, compute the total difference of its clustering coefficient across the three emotions, M_i = Σ_{q1≠q2} |C^{q1}(i) − C^{q2}(i)|, where C^q(i) is the clustering coefficient of node i in the brain complex network of emotion class q, and q1 and q2 denote two different emotional states. Sort the M values in descending order and select the 30 nodes with the largest M values as key nodes. The multilayer brain complex network is reconstructed by retaining only the edges between key nodes and removing all other edges.
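Steps (3.1)–(3.2) can be sketched as follows. This hedged NumPy illustration reads e(i) as the number of edges among the neighbors of node i, which is the reading consistent with the formula C(i) = 2e(i)/(k_i(k_i − 1)); the function names are illustrative.

```python
import numpy as np

def clustering_coefficients(adj):
    """C(i) = 2*e(i) / (k_i*(k_i-1)) on a binarized undirected network."""
    a = (adj != 0).astype(int)
    np.fill_diagonal(a, 0)
    k = a.sum(axis=1)
    # diag(A^3)/2 counts the triangles through each node, i.e. the
    # edges e(i) present among node i's neighbors.
    tri = np.diagonal(a @ a @ a) / 2.0
    denom = k * (k - 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom > 0, 2.0 * tri / denom, 0.0)

def key_nodes(networks_by_emotion, n_keep=30):
    """M_i = sum over emotion pairs |C^q1(i) - C^q2(i)|; keep top n_keep."""
    cs = [clustering_coefficients(a) for a in networks_by_emotion]
    m = np.zeros_like(cs[0])
    for i in range(len(cs)):
        for j in range(i + 1, len(cs)):
            m += np.abs(cs[i] - cs[j])
    return np.argsort(m)[::-1][:n_keep]
```

With the three per-emotion networks as input, `key_nodes` returns the indices of the 30 key nodes used to reconstruct the network.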
4) Build the convolutional neural network branch of the dual-input deep learning model. The input of this branch is the multilayer brain complex network and the corresponding labels. The branch consists of five identical convolutional neural network models, one for each of the five frequency bands δ, θ, α, β, γ of the multilayer brain complex network. Each convolutional neural network model comprises, connected in sequence:
(4.1) a data input layer, whose input is the brain complex network A_f of the corresponding band;
(4.2) a first convolutional layer with 8 kernels of size 5×5 and the ReLU activation function;
(4.3) a first average pooling layer with a 2×2 pooling kernel;
(4.4) a second convolutional layer with 16 kernels of size 5×5 and ReLU activation;
(4.5) a second average pooling layer with a 2×2 pooling kernel;
(4.6) a dropout layer that randomly silences neurons of the previous layer with probability p to mitigate overfitting, with p = 0.03 for the δ, θ, α bands and p = 0.01 for the β, γ bands;
(4.7) a batch normalization layer;
(4.8) a Flatten layer that flattens the multidimensional input to one dimension;
(4.9) a fully connected layer with ReLU activation, using the L1 norm as the regularization term with coefficient 0.0005.
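Since step 6 names the Keras framework, one band's CNN branch of steps (4.1)–(4.9) might be sketched as below. The `padding="same"` choice and the width of the final dense layer are not specified in the text and are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def cnn_branch(n_nodes=62, p_dropout=0.03, dense_units=64):
    """One per-band CNN branch following steps (4.1)-(4.9).

    Input: one band's adjacency matrix (n_nodes x n_nodes x 1).
    `dense_units` is an assumption; the text fixes only the
    activation and the L1 coefficient of the dense layer.
    """
    inp = layers.Input(shape=(n_nodes, n_nodes, 1))           # (4.1)
    x = layers.Conv2D(8, (5, 5), activation="relu",
                      padding="same")(inp)                     # (4.2)
    x = layers.AveragePooling2D((2, 2))(x)                     # (4.3)
    x = layers.Conv2D(16, (5, 5), activation="relu",
                      padding="same")(x)                       # (4.4)
    x = layers.AveragePooling2D((2, 2))(x)                     # (4.5)
    x = layers.Dropout(p_dropout)(x)                           # (4.6)
    x = layers.BatchNormalization()(x)                         # (4.7)
    x = layers.Flatten()(x)                                    # (4.8)
    out = layers.Dense(dense_units, activation="relu",
                       kernel_regularizer=regularizers.l1(0.0005))(x)  # (4.9)
    return tf.keras.Model(inp, out)
```

Five such branches are instantiated, one per band, with p = 0.03 for δ, θ, α and p = 0.01 for β, γ.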
5) Build the dense convolutional network (DenseNet) branch of the dual-input deep learning model. The input of this branch is the emotional EEG signal after 0–50 Hz filtering by the wavelet transform, together with the corresponding label. The DenseNet branch comprises, connected in sequence:
(5.1) a data input layer, whose input is the 0–50 Hz emotional EEG signal;
(5.2) a convolutional layer with 24 kernels of size 1×62;
(5.3) a first dense block composed of three successive convolution blocks, each containing a batch normalization layer and a convolutional layer with 3×1 kernels, the three convolutional layers having 36, 48, and 60 kernels respectively;
(5.4) a first transition block composed of a convolutional layer with 72 kernels of size 1×1, a convolutional layer with 72 kernels of size 2×1, and a 2×2 pooling layer;
(5.5) a second dense block composed of three successive convolution blocks, each containing a batch normalization layer and a convolutional layer with 3×1 kernels, the three convolutional layers having 72, 84, and 96 kernels respectively;
(5.6) a second transition block composed of a convolutional layer with 108 kernels of size 1×1, a convolutional layer with 108 kernels of size 2×1, and a 2×2 pooling layer;
(5.7) a third dense block composed of three successive convolution blocks, each containing a batch normalization layer and a convolutional layer with 3×1 kernels, the three convolutional layers having 108, 120, and 132 kernels respectively;
(5.8) a global pooling layer with a 50×1 pooling kernel.
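A hedged Keras sketch of the branch in steps (5.1)–(5.8) follows. The kernel counts match the text, but several details are interpretive: the DenseNet-style concatenations inside each dense block, the (2, 1) pooling in the transitions (since the 1×62 convolution collapses the electrode axis to width 1, a literal 2×2 pool is not possible), the global average pooling standing in for the 50×1 pool, and the default window length `n_samples`.

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, filter_counts):
    """Three BN + Conv(3x1) blocks with dense (concatenating) links."""
    for f in filter_counts:
        y = layers.BatchNormalization()(x)
        y = layers.Conv2D(f, (3, 1), padding="same", activation="relu")(y)
        x = layers.Concatenate()([x, y])
    return x

def transition_block(x, f):
    """Conv(1x1) + Conv(2x1) + pooling, per the transition blocks."""
    x = layers.Conv2D(f, (1, 1), activation="relu")(x)
    x = layers.Conv2D(f, (2, 1), padding="same", activation="relu")(x)
    return layers.AveragePooling2D((2, 1))(x)

def densenet_branch(n_samples=400, n_channels=62):
    inp = layers.Input(shape=(n_samples, n_channels, 1))   # (5.1)
    x = layers.Conv2D(24, (1, n_channels))(inp)            # (5.2) 1x62 filter
    x = dense_block(x, [36, 48, 60])                       # (5.3)
    x = transition_block(x, 72)                            # (5.4)
    x = dense_block(x, [72, 84, 96])                       # (5.5)
    x = transition_block(x, 108)                           # (5.6)
    x = dense_block(x, [108, 120, 132])                    # (5.7)
    x = layers.GlobalAveragePooling2D()(x)                 # (5.8), stand-in
    return tf.keras.Model(inp, x)
```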
6) Connect the output of the convolutional neural network branch and the output of the dense convolutional network branch through a concatenation layer. The concatenation layer is connected to a further fully connected layer that outputs the classification result, represented by H neurons. The classification layer uses Softmax as the activation function; the Softmax function is essentially a normalized exponential function, defined as

Softmax(z_h) = e^{z_h} / Σ_{h'=1}^{H} e^{z_{h'}},

where h = 1…H, e is the base of the natural logarithm, and z_h is the output of the h-th neuron. The denominator acts as a normalizing term, so that the H outputs sum to 1. The deep learning model is built with the Keras framework, yielding the dual-input deep learning model.
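The Softmax definition above can be checked numerically with a small NumPy sketch; subtracting the maximum before exponentiating is a standard numerically stable rewrite and does not change the result.

```python
import numpy as np

def softmax(z):
    """Softmax(z)_h = exp(z_h) / sum_{h'} exp(z_{h'})."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())   # shift by max(z) to avoid overflow
    return e / e.sum()
```

For H = 3 emotion classes the three outputs are nonnegative and sum to 1, so they can be read as class probabilities.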
7) The collected emotional EEG signals for the three emotional states are divided into a training set and a validation set. The training set is used to train the dual-input deep learning model, and the validation set is used to verify the trained model, finally yielding the dual-input deep learning model for emotion classification and detection. Specifically:
The dual-input deep learning model is trained with the Adam optimization algorithm, which optimizes the weights and biases throughout the model by minimizing the cross-entropy loss function. The model achieving the minimum validation-set loss is recorded as the optimal dual-input deep learning model. The learning rate is set to 0.001, training runs for 100 epochs, and the batch size is 100, finally yielding the dual-input deep learning model for emotion classification and detection.
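The categorical cross-entropy loss that Adam minimizes here can be sketched in NumPy as follows; the one-hot labels and probability vectors are illustrative values, not data from the patent.

```python
# Sketch of the categorical cross-entropy loss minimized during training.
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy: -sum(y * log(p)), averaged over samples."""
    y_pred = np.clip(y_pred, eps, 1.0)   # avoid log(0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=1)))

# One-hot labels for the three emotion classes (positive, neutral, negative).
y_true = np.array([[1, 0, 0], [0, 1, 0]])
good   = np.array([[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]])   # confident, correct
bad    = np.array([[0.3, 0.4, 0.3], [0.4, 0.3, 0.3]])     # diffuse, wrong

print(cross_entropy(y_true, good) < cross_entropy(y_true, bad))  # True
```

In Keras, this setup would roughly correspond to compiling with the Adam optimizer at learning rate 0.001 and the `categorical_crossentropy` loss, then fitting for 100 epochs with a batch size of 100.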
In an example of the present invention, the EEG data samples used to train the dual-input deep learning model came from 15 subjects with an average age of 23, including 7 women. The EEG sampling rate was 200 Hz, and 50 minutes of data were collected from each subject, excluding rest breaks. To account for individual differences between subjects, a separate dual-input deep learning model was trained on each subject's EEG signals. During training, 80% of the sample set was used to train the model parameters, 10% served as the validation set for selecting the best model during training, and the remaining 10% served as the test set for determining final performance. In the performance test, the 15 subjects' models achieved an average classification accuracy of 97.27% on the three-class emotion task.
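The 80/10/10 split described above can be sketched with a shuffled index permutation; the sample count and seed below are illustrative, not taken from the patent.

```python
# Sketch of the 80% train / 10% validation / 10% test split of one
# subject's EEG sample set.
import numpy as np

def split_indices(n_samples, seed=0):
    """Return (train, val, test) index arrays in an 80/10/10 ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)     # shuffle before splitting
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(1000)
print(len(train_idx), len(val_idx), len(test_idx))  # 800 100 100
```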
For a dual-input deep learning model trained on a given user's EEG sample data, the emotion recognition system based on deep learning and a brain-computer interface of the present invention requires no further adjustment: in the emotion classification and detection task, the model determines the corresponding emotion category as soon as the user's new EEG sample data are input.
As shown in Figure 5, the application of the emotion recognition system based on deep learning and a brain-computer interface of the present invention comprises the following steps:
1) The user watches different emotional videos and, under this video stimulation, experiences three different emotions: positive, neutral, and negative;
2) A portable EEG acquisition device collects the user's emotional EEG signals under the three emotions, and the emotion classification deep learning model is trained on these emotional EEG signals and the corresponding emotional labels;
3) The portable EEG acquisition device collects the user's new emotional EEG signals to be identified, which are input into the dual-input deep learning model for emotion classification and detection, and the new emotional EEG signals are classified by emotion;
4) The current emotion category is shown on a display, indicating the user's emotional state.
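Steps 3) and 4) reduce to mapping the model's Softmax output to an emotion label for display. A minimal sketch, where the label names follow the three emotional states of step 1) and the probability vector is a made-up example:

```python
# Sketch of the final classification step: Softmax probabilities -> label.
import numpy as np

EMOTION_LABELS = ["positive", "neutral", "negative"]

def classify_emotion(class_probabilities):
    """Pick the emotion category with the highest predicted probability."""
    return EMOTION_LABELS[int(np.argmax(class_probabilities))]

# Hypothetical model output for one new EEG segment.
print(classify_emotion(np.array([0.1, 0.2, 0.7])))  # negative
```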
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010481477.1A CN111616721B (en) | 2020-05-31 | 2020-05-31 | Emotion recognition system and application based on deep learning and brain-computer interface |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111616721A true CN111616721A (en) | 2020-09-04 |
CN111616721B CN111616721B (en) | 2022-05-27 |
Family
ID=72267292
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010481477.1A Active CN111616721B (en) | 2020-05-31 | 2020-05-31 | Emotion recognition system and application based on deep learning and brain-computer interface |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111616721B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103810260A (en) * | 2014-01-27 | 2014-05-21 | 西安理工大学 | Complex network community discovery method based on topological characteristics |
CN106503799A (en) * | 2016-10-11 | 2017-03-15 | 天津大学 | Deep learning model and the application in brain status monitoring based on multiple dimensioned network |
CN108446020A (en) * | 2018-02-28 | 2018-08-24 | 天津大学 | Merge Mental imagery idea control method and the application of Visual Graph and deep learning |
CN108433722A (en) * | 2018-02-28 | 2018-08-24 | 天津大学 | Portable brain electric collecting device and its application in SSVEP and Mental imagery |
CN109299811A (en) * | 2018-08-20 | 2019-02-01 | 众安在线财产保险股份有限公司 | A method of the identification of fraud clique and Risk of Communication prediction based on complex network |
US20190347476A1 (en) * | 2018-05-09 | 2019-11-14 | Korea Advanced Institute Of Science And Technology | Method for estimating human emotions using deep psychological affect network and system therefor |
CN110826788A (en) * | 2019-10-30 | 2020-02-21 | 南京智慧航空研究院有限公司 | Airport scene variable slide-out time prediction method based on big data deep learning |
CN110859614A (en) * | 2019-11-22 | 2020-03-06 | 东南大学 | Brain network analysis method of mathematically gifted adolescents based on weighted phase lag index |
Non-Patent Citations (1)
Title |
---|
陈骥驰 (Chen Jichi) et al.: "Study of Fatigue Driving State Based on EEG Signals", Automotive Engineering (《汽车工程》) *
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111190484A (en) * | 2019-12-25 | 2020-05-22 | 中国人民解放军军事科学院国防科技创新研究院 | Multi-mode interaction system and method |
CN112200025A (en) * | 2020-09-25 | 2021-01-08 | 北京师范大学 | Control ergonomics analysis method, equipment and system |
CN112200025B (en) * | 2020-09-25 | 2022-08-19 | 北京师范大学 | Operation and control work efficiency analysis method, device and system |
CN112244873A (en) * | 2020-09-29 | 2021-01-22 | 陕西科技大学 | A hybrid neural network-based method for EEG spatiotemporal feature learning and emotion classification |
CN112488002A (en) * | 2020-12-03 | 2021-03-12 | 重庆邮电大学 | Emotion recognition method and recognition system based on N170 |
CN112603337A (en) * | 2020-12-21 | 2021-04-06 | 广东海洋大学 | Electroencephalogram signal identification method |
CN112686158A (en) * | 2020-12-30 | 2021-04-20 | 西安慧脑智能科技有限公司 | Emotion recognition system and method based on electroencephalogram signals and storage medium |
CN112883855A (en) * | 2021-02-04 | 2021-06-01 | 东北林业大学 | Electroencephalogram signal emotion recognition based on CNN + data enhancement algorithm Borderline-SMOTE |
CN113081004A (en) * | 2021-02-23 | 2021-07-09 | 厦门大学 | Small sample feature extraction method based on electroencephalogram motor imagery |
CN112990008B (en) * | 2021-03-13 | 2022-06-17 | 山东海量信息技术研究院 | Emotion recognition method and system based on three-dimensional feature map and convolutional neural network |
CN112990008A (en) * | 2021-03-13 | 2021-06-18 | 山东海量信息技术研究院 | Emotion recognition method and system based on three-dimensional characteristic diagram and convolutional neural network |
CN113509190A (en) * | 2021-03-18 | 2021-10-19 | 上海交通大学 | A product design evaluation method and system |
CN113128353A (en) * | 2021-03-26 | 2021-07-16 | 安徽大学 | Emotion sensing method and system for natural human-computer interaction |
CN113128353B (en) * | 2021-03-26 | 2023-10-24 | 安徽大学 | Emotion perception method and system oriented to natural man-machine interaction |
CN113133769A (en) * | 2021-04-23 | 2021-07-20 | 河北师范大学 | Equipment control method, device and terminal based on motor imagery electroencephalogram signals |
CN113191438A (en) * | 2021-05-08 | 2021-07-30 | 啊哎(上海)科技有限公司 | Learning style recognition model training and recognition method, device, equipment and medium |
CN113191438B (en) * | 2021-05-08 | 2023-08-15 | 啊哎(上海)科技有限公司 | Learning style recognition model training and recognition method, device, equipment and medium |
WO2023060719A1 (en) * | 2021-10-11 | 2023-04-20 | 北京工业大学 | Method, apparatus, and system for calculating emotional indicators based on pupil waves |
CN113974625B (en) * | 2021-10-18 | 2024-05-03 | 杭州电子科技大学 | Emotion recognition method based on brain-computer cross-modal migration |
CN113974625A (en) * | 2021-10-18 | 2022-01-28 | 杭州电子科技大学 | An emotion recognition method based on brain-computer cross-modal transfer |
CN114129163A (en) * | 2021-10-22 | 2022-03-04 | 中央财经大学 | Electroencephalogram signal-based emotion analysis method and system for multi-view deep learning |
CN114129163B (en) * | 2021-10-22 | 2023-08-29 | 中央财经大学 | Emotion analysis method and system for multi-view deep learning based on electroencephalogram signals |
CN113974628A (en) * | 2021-10-29 | 2022-01-28 | 杭州电子科技大学 | An emotion recognition method based on brain-computer modal co-space |
CN114492560A (en) * | 2021-12-06 | 2022-05-13 | 陕西师范大学 | Electroencephalogram emotion classification method based on transfer learning |
CN114209341B (en) * | 2021-12-23 | 2023-06-20 | 杭州电子科技大学 | Emotional activation pattern mining method based on feature contribution differential EEG data reconstruction |
CN114209341A (en) * | 2021-12-23 | 2022-03-22 | 杭州电子科技大学 | Emotional Activation Pattern Discovery Method for Reconstruction of EEG Data with Feature Contribution Differential |
CN114176607A (en) * | 2021-12-27 | 2022-03-15 | 杭州电子科技大学 | Electroencephalogram signal classification method based on visual Transformer |
CN114176607B (en) * | 2021-12-27 | 2024-04-19 | 杭州电子科技大学 | Electroencephalogram signal classification method based on vision transducer |
CN114795209A (en) * | 2022-04-26 | 2022-07-29 | 中国人民解放军军事科学院国防科技创新研究院 | A Distributed Multimodal Emotion Detection Method |
CN114947852A (en) * | 2022-06-14 | 2022-08-30 | 华南师范大学 | Multi-mode emotion recognition method, device, equipment and storage medium |
CN114947852B (en) * | 2022-06-14 | 2023-01-10 | 华南师范大学 | A multi-modal emotion recognition method, device, equipment and storage medium |
CN115105079A (en) * | 2022-07-26 | 2022-09-27 | 杭州罗莱迪思科技股份有限公司 | Electroencephalogram emotion recognition method based on self-attention mechanism and application thereof |
CN115474947A (en) * | 2022-08-25 | 2022-12-16 | 西安电子科技大学 | Deep learning-based two-class basic emotion intensity electroencephalogram signal identification method |
CN115770044A (en) * | 2022-11-17 | 2023-03-10 | 天津大学 | Emotion recognition method and device based on EEG phase-amplitude coupling network |
CN116269386A (en) * | 2023-03-13 | 2023-06-23 | 中国矿业大学 | Multi-channel Physiological Time Series Emotion Recognition Method Based on Ordinal Partitioning Network |
CN116269386B (en) * | 2023-03-13 | 2024-06-11 | 中国矿业大学 | Multichannel physiological time sequence emotion recognition method based on ordinal division network |
CN117033638A (en) * | 2023-08-23 | 2023-11-10 | 南京信息工程大学 | Text emotion classification method based on EEG cognition alignment knowledge graph |
CN117033638B (en) * | 2023-08-23 | 2024-04-02 | 南京信息工程大学 | Text emotion classification method based on EEG cognition alignment knowledge graph |
CN119015568A (en) * | 2024-10-28 | 2024-11-26 | 山东科技大学 | A BCI-driven VR environment generation system and device |
Also Published As
Publication number | Publication date |
---|---|
CN111616721B (en) | 2022-05-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111616721B (en) | Emotion recognition system and application based on deep learning and brain-computer interface | |
CN111513735B (en) | Major depressive disorder identification system based on brain-computer interface and deep learning and application | |
CN111616682B (en) | Epileptic seizure early warning system based on portable electroencephalogram acquisition equipment and application | |
CN110070105B (en) | Electroencephalogram emotion recognition method and system based on meta-learning example rapid screening | |
CN111584029A (en) | EEG adaptive model based on discriminative adversarial network and its application in rehabilitation | |
CN112932501B (en) | Method for automatically identifying insomnia based on one-dimensional convolutional neural network | |
CN114504730A (en) | Portable brain-controlled hand electrical stimulation rehabilitation system based on deep learning | |
CN113128353B (en) | Emotion perception method and system oriented to natural man-machine interaction | |
Sukumar et al. | Physical actions classification of surface EMG signals using VMD | |
Rajalakshmi et al. | Classification of yoga, meditation, combined yoga–meditation EEG signals using L-SVM, KNN, and MLP classifiers | |
Zhang et al. | Efficient and generalizable cross-patient epileptic seizure detection through a spiking neural network | |
Rahim et al. | Emotion charting using real-time monitoring of physiological signals | |
CN114676720B (en) | Mental state recognition method and system based on graph neural network | |
CN113974627B (en) | Emotion recognition method based on brain-computer generated confrontation | |
CN118656732B (en) | Method and system for identifying depression based on multi-mode data | |
Lee et al. | PPG and EMG Based Emotion Recognition using Convolutional Neural Network. | |
CN118839240A (en) | Electroencephalogram signal classification method based on three-dimensional attention depth separable convolution network | |
Zhang et al. | A pruned deep learning approach for classification of motor imagery electroencephalography signals | |
CN116999070A (en) | Epileptic seizure prediction system integrating intelligent wearing and modal transfer network | |
CN113974625B (en) | Emotion recognition method based on brain-computer cross-modal migration | |
CN117150397A (en) | Multi-branch sample selection-based cross-test electroencephalogram emotion recognition method and system | |
Li et al. | An optimized multi-label TSK fuzzy system for emotion recognition of multimodal physiological signals | |
Jana et al. | A hybrid method for classification of physical action using discrete wavelet transform and artificial neural network | |
Ghosh et al. | Classification of silent speech in english and bengali languages using stacked autoencoder | |
CN114186591A (en) | Method for improving generalization capability of emotion recognition system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||