CN112790750A - Fear and tension emotion recognition method based on video eye movement and heart rate analysis - Google Patents
- Publication number
- CN112790750A (application CN201911107613.4A)
- Authority
- CN
- China
- Prior art keywords
- heart rate
- video
- fear
- eye movement
- emotion recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- A61B5/165 — Evaluating the state of mind, e.g. depression, anxiety
- A61B5/0077 — Devices for viewing the surface of the body, e.g. camera, magnifying lens
- A61B5/024 — Measuring pulse rate or heart rate
- A61B5/163 — Evaluating the psychological state by tracking eye movement, gaze, or pupil change
- G06F18/2411 — Classification based on the proximity to a decision surface, e.g. support vector machines
- G06V40/193 — Eye characteristics, e.g. of the iris; Preprocessing; Feature extraction
- G06V40/197 — Eye characteristics, e.g. of the iris; Matching; Classification
- G06F2218/08 — Feature extraction (pattern recognition adapted for signal processing)
- G06F2218/12 — Classification; Matching (pattern recognition adapted for signal processing)
Abstract
The invention discloses a fear and tension emotion recognition method based on video eye movement and heart rate analysis. Human eye movement extracted from video is combined with remote, non-contact heart rate estimation, and a Relief feature selection method is applied to optimally select features from the two physiological signals. The fearful or tense state of a person is then recognized and judged with a k-nearest-neighbor (kNN) algorithm and a least-squares support vector machine (LS-SVM) algorithm, respectively. The invention relates to the technical field of video analysis. The method addresses the problems that prior abnormal-emotion recognition methods depend on the subject's cooperation, are not covert, and test inefficiently. An abnormal-data processing mechanism introduced in the eye movement data analysis improves algorithm efficiency, and the feature selection method reduces feature dimensionality, shortening training time while effectively improving the recognition of abnormal emotions in subjects such as criminal suspects.
Description
Technical Field
The invention relates to the technical field of video analysis, and in particular to a fear and tension emotion recognition method based on video eye movement data analysis and non-contact heart rate analysis.
Background
Fear and tension are negative emotions that a person exhibits toward particular scenes and targets in life. Fear is a strongly suppressed emotional experience that arises when a person faces a dangerous situation and tries, but is unable, to escape it; this psychology is the everyday notion of "being afraid". From Kelly's point of view, fear is similar to threat but lesser in extent: it occurs when the peripheral elements of a person's construct system, rather than its core constructs, are proven ineffective. The somatic symptoms are largely sympathetic nervous reactions, such as shortness of breath, breathlessness, and palpitations.
At present, recognizing a person's specific emotional state from physiological signal parameters yields results that are comparatively objective and reliable. However, because physiological signals are acquired with contact-based equipment, the subject must wear various physiological measurement devices, and concerns over personal freedom and privacy make this approach difficult to apply widely.
Picard's group at the MIT laboratory recorded five physiological parameters and extracted 40 features, exploring the feasibility of emotion recognition based on multiple physiological parameters. Kim used audio materials and video clips as inducing materials, collected four physiological parameters from 200 subjects, and classified four emotions with a support vector machine algorithm, finding that the recognition rate drops as the number of emotion categories increases under the same algorithm. Multi-feature optimization can reduce the complexity of the overall recognition model while preserving recognition accuracy, but such algorithms usually occupy considerable computing resources. For the recognition and judgment of fear and tension, current practice mostly relies on questionnaires and observation, which cannot capture fearful or tense emotion in human-computer interaction or public-security scenarios.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a fear and tension emotion recognition method based on video eye movement and heart rate analysis. It combines human eye movement in video with remote, non-contact heart rate estimation, applies a Relief feature selection method for the optimal selection and analysis of the two physiological signals, and finally recognizes and judges the fearful or tense mental state of a person with a k-nearest-neighbor (kNN) algorithm and a least-squares support vector machine (LS-SVM) algorithm.
(II) technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme: a fear stress emotion recognition method based on video eye movement and heart rate analysis specifically comprises the following steps:
S1, shooting and collecting a video sample of the subject facing the camera;
S2, firstly, detecting the positions of the face and the eyes, estimating the gaze direction with a trained deep convolutional neural network (DCNN), and outputting the gaze direction estimate of the face in the three-dimensional world to obtain a line-of-sight equation;
S3, detecting the forehead region of the face and performing remote non-contact heart rate estimation with remote photoplethysmography (rPPG). rPPG measures slight brightness changes of the skin from reflected ambient light; these changes are caused by blood flow driven by the heartbeat, so a signal similar to the blood volume pulse (BVP) can be obtained with rPPG and used to predict the heart rate. Specifically, when light of a certain wavelength strikes the surface of living skin, it leaves the skin surface by reflection or transmission; along the way it is absorbed by blood, tissue, muscle, and skin, weakening its intensity. Absorption of light at the specific wavelength by skin, tissue, and muscle remains constant over the blood circulation cycle, while absorption by blood changes with the blood volume in the skin as the heart beats. The change in blood volume is therefore reflected in the change of light intensity reflected or transmitted from the skin surface. A heart rate estimation model based on deep learning is then proposed;
s4, inputting the eye position information and the heart rate characteristic information vector obtained in the steps S1-S3 into a Relief-SVM algorithm framework for processing;
S5, classifying the emotional characteristics of fear and tension, where feature extraction for fear and tension designs a task paradigm according to the fear state, such as a parachuting simulation based on virtual reality or watching a horror film, and records the facial emotional characteristics and physiological behavior characteristics of the subject throughout the process.
Preferably, the rPPG technique in step S3 is specifically as follows: assume the light intensity of the experimental environment is a constant I0, the amount of that intensity absorbed by the blood volume is A(t), and the light intensity observed by the camera is I(t). It is then possible to obtain I(t) = I0 − A(t), where A(t) has the same period and frequency as the heartbeat. In principle, therefore, the periodic change of light intensity at the specific wavelength in the face region can be detected by the camera, realizing measurement of the human heart rate.
Preferably, the deep-learning heart rate estimation model proposed in step S3 processes a face video sequence as follows:
a1, first, the video is divided into several short video sequences, and the face in each short video is aligned;
a2, then, spatiotemporal features are extracted from the aligned face sequences of step a1, each segment of which represents a heart rhythm signal;
a3, with these heart rhythm signals as input, a trained convolutional neural network predicts the person's heart rate in each short video;
a4, finally, the average heart rate over all short video segments is used as the output heart rate of the video.
Preferably, the processing steps of the Relief-SVM algorithm framework in the step S4 are as follows:
b1, initializing a characteristic weight value;
b2, calculating the weight of each feature dimension with the Relief algorithm and eliminating features with small weights;
b3, performing classification calculation on the selected feature vectors by using a classifier;
b4, obtaining a classification result, comparing the classification result with an actual result, and calculating the recognition rate;
b5, adjusting parameters until the optimal recognition rate is obtained, finalizing the algorithm model.
Preferably, the Relief algorithm in step S4 takes as input the samples' attribute-value vectors with their class labels and the sampling count m, and outputs a weight estimate W[A] for each attribute A, as follows:
set the initial weight vector W[A] := 0;
for i = 1 to m do begin
randomly select a sample R;
find the nearest same-class point H and the nearest different-class point M;
for A = 1 to (number of feature dimensions) do
W[A] := W[A] − diff(A, R, H)/m + diff(A, R, M)/m
end.
Preferably, the SVM algorithm in step S4 is a least squares support vector machine algorithm with a kernel function being a gaussian kernel function.
(III) advantageous effects
The invention provides a fear tension emotion recognition method based on video eye movement and heart rate analysis. Compared with the prior art, the method has the following beneficial effects: according to the fear stress recognition method based on video eye movement and heart rate analysis, a Relief feature selection method is used for optimization selection analysis of the two physiological signals in a mode of combining human eye movement in a video and remote non-contact heart rate estimation. And finally, recognizing and judging the frightened and nervous states of the staff by respectively adopting a k nearest neighbor (kNN) algorithm and a least support vector machine (LS-SVM) algorithm, solving the problems that the abnormal emotion recognition method in the prior art is restricted by the matching degree of a detected person, the test method is not secret and the test efficiency is low, greatly improving the algorithm efficiency by analyzing an introduced abnormal data processing mechanism through eye movement data, reducing the characteristic dimension by adopting a characteristic selection method, effectively improving the abnormal emotion recognition of similar criminals and other acquaintances while improving the training time, and having ingenious and novel method and good application prospect.
Drawings
FIG. 1 is a schematic diagram of video data acquisition according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a technical solution: a fear stress emotion recognition method based on video eye movement and heart rate analysis specifically comprises the following steps:
S1, shooting and collecting a video sample of the subject facing the camera;
S2, firstly, detecting the positions of the face and the eyes, estimating the gaze direction with a trained deep convolutional neural network (DCNN), and outputting the gaze direction estimate of the face in the three-dimensional world to obtain a line-of-sight equation;
S3, detecting the forehead region of the face and performing remote non-contact heart rate estimation with remote photoplethysmography (rPPG). rPPG measures slight brightness changes of the skin from reflected ambient light; these changes are caused by blood flow driven by the heartbeat, so a signal similar to the blood volume pulse (BVP) can be obtained with rPPG and used to predict the heart rate. Specifically, when light of a certain wavelength strikes the surface of living skin, it leaves the skin surface by reflection or transmission; along the way it is absorbed by blood, tissue, muscle, and skin, weakening its intensity. Absorption of light at the specific wavelength by skin, tissue, and muscle remains constant over the blood circulation cycle, while absorption by blood changes with the blood volume in the skin as the heart beats. The change in blood volume is therefore reflected in the change of light intensity reflected or transmitted from the skin surface. A heart rate estimation model based on deep learning is then proposed;
S4, inputting the eye position information and heart rate feature vectors obtained in steps S1-S3 into a Relief-SVM algorithm framework for processing. The Relief algorithm is a feature selection algorithm with low computational cost: it reduces feature dimensionality by selecting features strongly correlated with the data as a whole, assigning each feature dimension a weight that characterizes its correlation with the class.
The Relief algorithm randomly selects m samples from the training set, computes a hypothesis margin for each, and accumulates these margins into the final weight of each feature dimension. For a feature p of sample x, the weight update uses the following distance function:
if p is discrete, then diff(p, x1, x2) = 0 when p(x1) = p(x2), and 1 otherwise;
if p is continuous, then diff(p, x1, x2) = |p(x1) − p(x2)| / (max(p) − min(p)),
where max(p) and min(p) are the upper and lower bounds of p, respectively;
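As a concrete illustration (not part of the patent text), the discrete and continuous diff distances used by Relief can be written as two small functions; the function names are ours:

```python
def diff_discrete(a, b):
    """diff for a discrete feature: 0 if the two values are equal, else 1."""
    return 0.0 if a == b else 1.0

def diff_continuous(a, b, p_max, p_min):
    """diff for a continuous feature: |a - b| normalized by the feature's range."""
    return abs(a - b) / (p_max - p_min)
```

The range normalization keeps continuous features on the same 0-to-1 scale as discrete ones, so their contributions to the weight vector are comparable.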
S5, classifying the emotional characteristics of fear and tension, where feature extraction for fear and tension designs a task paradigm according to the fear state, such as a parachuting simulation based on virtual reality or watching a horror film, and records the facial emotional characteristics and physiological behavior characteristics of the subject throughout the process.
In the invention, the rPPG technique in step S3 is specifically as follows: assume the light intensity of the experimental environment is a constant I0, the amount of that intensity absorbed by the blood volume is A(t), and the light intensity observed by the camera is I(t). It is then possible to obtain I(t) = I0 − A(t), where A(t) has the same period and frequency as the heartbeat. In principle, therefore, the periodic change of light intensity at the specific wavelength in the face region can be detected by the camera, realizing measurement of the human heart rate.
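The intensity model above says the camera observes a constant ambient term minus a heartbeat-periodic absorption term, so after removing the mean, the dominant spectral peak in the physiological band gives the pulse. A minimal sketch of that idea on synthetic data (our own illustrative function, not the patent's deep-learning estimator):

```python
import numpy as np

def estimate_heart_rate(intensity, fps):
    """Estimate heart rate (beats/min) from a skin-region brightness trace.

    Mean removal discards the constant ambient term, leaving the periodic
    absorption component, whose dominant frequency in the physiological
    band is taken as the pulse.
    """
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()                          # drop the constant component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)    # 42-240 beats/min
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

# synthetic 20 s trace at 30 fps: constant light minus a 1.2 Hz (72 bpm) pulse
fps = 30
t = np.arange(0, 20, 1.0 / fps)
intensity = 200.0 - 2.0 * np.sin(2 * np.pi * 1.2 * t)
```

A real system would first crop and track the forehead region per frame and average its pixels to obtain the trace; the deep-learning model of the next paragraphs replaces this simple spectral peak with a learned predictor.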
In the present invention, the deep-learning heart rate estimation model proposed in step S3 processes a face video sequence as follows:
a1, first, the video is divided into several short video sequences, and the face in each short video is aligned;
a2, then, spatiotemporal features are extracted from the aligned face sequences of step a1, each segment of which represents a heart rhythm signal;
a3, with these heart rhythm signals as input, a trained convolutional neural network predicts the person's heart rate in each short video;
a4, finally, the average heart rate over all short video segments is used as the output heart rate of the video.
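The segment-and-average structure of steps a1-a4 can be sketched as follows; the per-segment predictor is injected as a callable standing in for the trained CNN, and all names are illustrative, not the patent's implementation:

```python
import numpy as np

def predict_video_heart_rate(frames, seg_len, predict_segment_hr):
    """Sketch of steps a1-a4: split a face video into short sequences (a1),
    obtain a per-segment heart rate from the injected predictor standing in
    for the trained CNN (a2-a3), and average over segments (a4)."""
    segments = [frames[i:i + seg_len]
                for i in range(0, len(frames) - seg_len + 1, seg_len)]
    rates = [predict_segment_hr(seg) for seg in segments]
    return float(np.mean(rates))

# toy stand-in predictor: treats the mean pixel value of a segment as its bpm
frames = np.full((90, 4, 4), 75.0)            # 90 identical 4x4 frames
hr = predict_video_heart_rate(frames, 30, lambda seg: float(seg.mean()))
```

Averaging over segments smooths out per-segment prediction noise, which is why a4 reports the mean rather than any single segment's estimate.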
In the invention, the processing steps of the Relief-SVM algorithm framework in the step S4 are as follows:
b1, initializing a characteristic weight value;
b2, calculating the weight of each feature dimension with the Relief algorithm and eliminating features with small weights;
b3, performing classification calculation on the selected feature vectors by using a classifier;
b4, obtaining a classification result, comparing the classification result with an actual result, and calculating the recognition rate;
b5, adjusting parameters until the optimal recognition rate is obtained, finalizing the algorithm model.
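The b1-b5 loop can be sketched as a small driver; the weight vector and classifier are injected here (a leave-one-out 1-nearest-neighbour stand-in rather than the SVM), and every name is ours, not the patent's code:

```python
import numpy as np

def relief_svm_pipeline(X, y, weights, keep, classify):
    """Sketch of steps b1-b4: rank features by the given Relief-style
    weights, keep the top `keep` dimensions (b2), classify with the
    injected classifier (b3), and report the recognition rate against the
    true labels (b4). Parameter tuning (b5) would repeat this with
    different `keep`/classifier settings."""
    order = np.argsort(weights)[::-1][:keep]   # indices of the largest weights
    X_sel = X[:, order]
    preds = classify(X_sel, y)
    return float(np.mean(preds == y))          # recognition rate

def one_nn_loo(X, y):
    """Stand-in classifier: leave-one-out 1-nearest-neighbour predictions."""
    preds = []
    for i in range(len(X)):
        d = np.abs(X - X[i]).sum(axis=1)
        d[i] = np.inf                          # exclude the sample itself
        preds.append(y[np.argmin(d)])
    return np.array(preds)

# feature 0 carries the class signal, feature 1 is noise against it
X = np.array([[0.0, 9.0], [0.1, 0.0], [1.0, 9.5], [0.9, 0.2]])
y = np.array([0, 0, 1, 1])
rate = relief_svm_pipeline(X, y, weights=np.array([1.0, 0.0]), keep=1,
                           classify=one_nn_loo)
```

Dropping the noisy second feature before classification is exactly the dimensionality reduction the framework aims at: with only the informative feature kept, the toy classifier recovers every label.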
In the present invention, the Relief algorithm in step S4 takes as input the samples' attribute-value vectors with their class labels and the sampling count m, and outputs a weight estimate W[A] for each attribute A, as follows:
set the initial weight vector W[A] := 0;
for i = 1 to m do begin
randomly select a sample R;
find the nearest same-class point H and the nearest different-class point M;
for A = 1 to (number of feature dimensions) do
W[A] := W[A] − diff(A, R, H)/m + diff(A, R, M)/m
end.
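A runnable sketch of the loop above, written by us for the two-class case with the range-normalized continuous diff (not the patent's code; the vectorized form replaces the inner per-attribute loop):

```python
import numpy as np

def relief(X, y, m, seed=0):
    """Basic Relief for two classes: for m randomly drawn samples R, move
    each feature weight away from the nearest same-class hit H and toward
    the nearest other-class miss M:
        W[A] := W[A] - diff(A,R,H)/m + diff(A,R,M)/m
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0                     # avoid dividing by a zero range
    w = np.zeros(X.shape[1])
    for _ in range(m):
        i = rng.integers(len(X))
        d = np.abs(X - X[i]).sum(axis=1)      # L1 distance to sample R = X[i]
        d[i] = np.inf                         # never pick R itself as the hit
        hit = np.argmin(np.where(y == y[i], d, np.inf))
        miss = np.argmin(np.where(y != y[i], d, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span / m
    return w

# feature 0 separates the classes, feature 1 is identical everywhere
X = np.array([[0.0, 5.0], [0.1, 5.0], [1.0, 5.0], [0.9, 5.0]])
y = np.array([0, 0, 1, 1])
w = relief(X, y, m=8)
```

On this toy data the discriminative first feature accumulates a positive weight, while the constant second feature stays at zero and would be eliminated in step b2.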
In the invention, the SVM algorithm in step S4 is a least squares support vector machine with a Gaussian kernel function. The SVM has unique advantages for nonlinear, high-dimensional, and small-sample data: its basic idea is to separate sample points that are hard to divide in an N-dimensional space by finding, in a higher-dimensional space, the plane most favorable to classification. The least squares variant retains the characteristics of the classical support vector machine while offering higher computational efficiency and lower resource requirements, and the kernel function further improves computational efficiency; the invention therefore adopts a least squares support vector machine with a Gaussian kernel function.
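A minimal sketch of one common LS-SVM formulation with a Gaussian (RBF) kernel — instead of the classical SVM's quadratic program, training reduces to a single linear system. This is our illustrative version under stated assumptions, not the patent's implementation:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-wise sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, gamma=1.0, C=10.0):
    """Least-squares SVM training: solve the linear system
    [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], np.asarray(y, dtype=float)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]            # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma=1.0):
    """Sign of the kernel expansion f(x) = sum_i alpha_i k(x, x_i) + b."""
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha + b)

# toy check on XOR-labeled points, which no linear classifier can separate
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
b, alpha = lssvm_train(X, y)
preds = lssvm_predict(X, b, alpha, X)
```

Replacing the inequality constraints with equalities is what turns the optimization into a linear solve, which underlies the efficiency claim above; the RBF kernel handles the nonlinear (here XOR-shaped) class boundary.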
For the fear condition, the invention designs self-rating and observer-rating scales according to the panic attack criteria defined in the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5), published by the American Psychiatric Association. DSM-5 defines a panic attack as a sudden surge of intense fear or intense discomfort that peaks within minutes, during which at least 4 of the following symptoms occur: 1. palpitations, pounding heart, or accelerated heart rate; 2. sweating; 3. trembling or shaking; 4. shortness of breath or a feeling of smothering; 5. a choking sensation; 6. chest pain or discomfort; 7. nausea or abdominal distress; 8. feeling dizzy, unsteady, light-headed, or faint; 9. chills or heat sensations; 10. paresthesias, such as numbness or tingling; 11. derealization (feelings of unreality) or depersonalization (being detached from oneself); 12. fear of losing control or "going crazy"; 13. fear of dying.
The video data acquisition setup is as follows: the video is shot indoors, illuminated by sunlight through a window. The participant (3) sits about 40 to 50 cm in front of a notebook computer (2) with a built-in camera (1), remains still, breathes naturally, and faces the camera. Video is captured in 24-bit RGB true color at a frame rate of 15 frames/second and a pixel resolution of 1920 x 1080, with a reference measuring instrument (4) used for comparison.
In summary, the invention combines human eye movement in video with remote, non-contact heart rate estimation and applies a Relief feature selection method for the optimal selection and analysis of the two physiological signals. The fearful or tense state of a person is then recognized and judged with a k-nearest-neighbor (kNN) algorithm and a least-squares support vector machine (LS-SVM) algorithm. This solves the problems that prior abnormal-emotion recognition methods depend on the subject's cooperation, are not covert, and test inefficiently. The abnormal-data processing mechanism introduced in the eye movement data analysis greatly improves algorithm efficiency, and the feature selection method reduces feature dimensionality, shortening training time while effectively improving the recognition of abnormal emotions in subjects such as criminal suspects. The method is ingenious and novel and has good application prospects.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (6)
1. A fear and tension emotion recognition method based on video eye movement and heart rate analysis, characterized in that the method comprises the following steps:
S1, capturing a video sample of the subject facing the camera;
S2, detecting the positions of the face and eyes, estimating the gaze direction with a trained deep convolutional neural network, and outputting the three-dimensional gaze direction of the face to obtain a line-of-sight equation;
S3, detecting the forehead region of the face and performing remote non-contact heart rate estimation with remote photoplethysmography (rPPG); rPPG uses reflected ambient light to measure the slight changes in skin brightness caused by heartbeat-driven blood flow, so a signal similar to the blood volume pulse (BVP) can be obtained and used to predict the heart rate; a deep-learning-based heart rate estimation model is then applied;
S4, inputting the eye position information and heart rate feature vectors obtained in steps S1-S3 into a Relief-SVM algorithm framework for processing;
and S5, classifying the fear and tension emotional features, where feature extraction consists of designing a task paradigm according to the fear state and recording the subject's facial emotional features and physiological behavior features throughout the process.
2. The fear and tension emotion recognition method based on video eye movement and heart rate analysis as claimed in claim 1, wherein the rPPG technique in step S3 is specifically: assuming the light intensity of the experimental environment is constant and denoting it I0, the amount of the natural light intensity absorbed by the blood volume ΔI(t), and the light intensity observed by the camera I(t), it follows that I(t) = I0 − ΔI(t), wherein ΔI(t) and the blood volume pulse have the same period and frequency.
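A minimal numerical sketch of the intensity model in this claim, assuming ΔI(t) is a small noisy sinusoid at the pulse frequency: the heart rate can then be recovered from the observed brightness I(t) = I0 − ΔI(t) by a simple FFT peak search. The 15 fps frame rate follows the acquisition description; every other value (72 bpm pulse, amplitudes, noise level) is illustrative.

```python
import numpy as np

fs = 15.0                       # camera frame rate from the description (15 fps)
t = np.arange(0, 30, 1 / fs)    # 30 s of frames
heart_rate_hz = 1.2             # assumed ground-truth pulse: 72 bpm

I0 = 200.0                                              # constant ambient intensity
delta_I = 1.5 * np.sin(2 * np.pi * heart_rate_hz * t)   # blood-volume absorption
noise = 0.3 * np.random.default_rng(1).normal(size=t.size)
I = I0 - delta_I + noise                                # observed I(t) = I0 - ΔI(t)

# Recover the pulse frequency: remove the DC level I0, then take the
# dominant FFT peak inside a plausible heart-rate band (0.7-4 Hz).
spectrum = np.abs(np.fft.rfft(I - I.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs >= 0.7) & (freqs <= 4.0)
estimated_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
```

A real rPPG pipeline would additionally track the forehead region across frames and detrend for illumination changes; this sketch only demonstrates that the period of ΔI(t) is recoverable from I(t).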
3. The fear and tension emotion recognition method based on video eye movement and heart rate analysis as claimed in claim 1, wherein the deep-learning-based heart rate estimation model proposed in step S3 processes a face video sequence as follows:
a1, first dividing the video into several short video sequences and aligning the face in each short video;
a2, then extracting spatio-temporal features from the face sequences aligned in step a1, each segment of which represents a heart rhythm signal;
a3, using these heart rhythm signals as input to a trained convolutional neural network to predict the heart rate of the person in each short video;
a4, finally taking the average heart rate over all short video segments as the output heart rate of the whole video.
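The segment-and-average structure of steps a1-a4 can be sketched as follows. The CNN of step a3 is replaced here by a hypothetical stand-in predictor, and the segment length of 150 frames (10 s at 15 fps) is an assumption for the example:

```python
import numpy as np

def predict_segment_hr(segment):
    """Stand-in for the trained CNN of step a3.  Here it simply reads the
    mean of the segment's toy 'feature'; a real model would map the
    spatio-temporal features of an aligned face sequence to a heart rate."""
    return float(np.mean(segment))

def video_heart_rate(frames, segment_len=150):
    # a1: split the face video into short sequences
    segments = [frames[i:i + segment_len]
                for i in range(0, len(frames), segment_len)]
    # a2/a3: one heart-rate prediction per short sequence
    per_segment = [predict_segment_hr(s) for s in segments]
    # a4: the video-level output is the mean of the per-segment rates
    return sum(per_segment) / len(per_segment)

frames = np.full(600, 72.0)   # toy "features" whose mean encodes 72 bpm
result = video_heart_rate(frames)
```

Averaging over segments trades temporal resolution for robustness: a single noisy segment shifts the video-level estimate by only 1/k of its error, where k is the number of segments.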
4. The fear and tension emotion recognition method based on video eye movement and heart rate analysis as claimed in claim 1, wherein the Relief-SVM algorithm framework in step S4 comprises the following processing steps:
b1, initializing the feature weights;
b2, computing the weight of each feature dimension with Relief and eliminating features with small weights;
b3, classifying the selected feature vectors with a classifier;
b4, comparing the classification result with the ground truth and computing the recognition rate;
b5, adjusting the parameters until the optimal recognition rate is obtained, finalizing the algorithm model.
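Steps b2-b4 can be sketched as below. The weight vector is assumed to have already been produced by a Relief pass (step b2's computation itself is shown under claim 5), and the selection threshold of 0.1, the toy data, and the train/test split are all illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Toy data: features 0-1 are informative, features 2-4 are pure noise.
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 5))
X[:, 0] += 2.0 * y
X[:, 1] -= 2.0 * y

# b2: feature weights, assumed here to come from a Relief pass;
# features whose weight falls below the threshold are discarded.
weights = np.array([0.9, 0.8, 0.05, 0.02, 0.01])
selected = weights > 0.1
X_sel = X[:, selected]

# b3/b4: classify the selected features and measure the recognition rate
# on held-out samples (last 50).
clf = SVC(kernel="rbf").fit(X_sel[:150], y[:150])
recognition_rate = clf.score(X_sel[150:], y[150:])
```

Step b5 would repeat this loop over the threshold and the SVM hyperparameters, keeping the combination with the best recognition rate.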
5. The fear and tension emotion recognition method based on video eye movement and heart rate analysis as claimed in claim 1, wherein the input of the Relief algorithm in step S4 is a set of attribute-value vectors and the number of samples, and the output is a weight estimate for each attribute, as follows:
set the initial weight vector W[A] := 0;
for i = 1 to m do begin
randomly select a sample R;
find the nearest same-class point (nearest hit) H and the nearest different-class point (nearest miss) M;
for A = 1 to the feature dimension do
W[A] := W[A] − diff(A, R, H)/m + diff(A, R, M)/m;
end.
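The pseudocode above translates almost line by line into Python. As an illustrative sketch, diff is taken here as the absolute difference on features scaled to [0, 1], which is one common choice for numeric attributes:

```python
import numpy as np

def relief(X, y, m=50, rng=None):
    """Relief feature weighting following the claim's pseudocode:
    reward features that differ on the nearest miss M and penalize
    features that differ on the nearest hit H."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = np.asarray(X, dtype=float)
    X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)  # scale to [0, 1]
    n, d = X.shape
    W = np.zeros(d)                         # W[A] := 0
    for _ in range(m):                      # for i = 1..m
        r = int(rng.integers(n))            # randomly select a sample R
        dist = np.abs(X - X[r]).sum(axis=1)
        dist[r] = np.inf                    # exclude R itself
        same = (y == y[r])
        H = np.where(same)[0][np.argmin(dist[same])]     # nearest hit
        M = np.where(~same)[0][np.argmin(dist[~same])]   # nearest miss
        # W[A] := W[A] - diff(A,R,H)/m + diff(A,R,M)/m
        W += (-np.abs(X[r] - X[H]) + np.abs(X[r] - X[M])) / m
    return W
```

An informative feature (one whose value tracks the class label) accumulates positive weight, because its nearest-miss differences are consistently larger than its nearest-hit differences; a noise feature hovers near zero.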
6. The fear and tension emotion recognition method based on video eye movement and heart rate analysis as claimed in claim 1, wherein the SVM algorithm in step S4 is a least squares support vector machine (LS-SVM) algorithm whose kernel function is a Gaussian kernel function.
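A property of the LS-SVM named in this claim is that training reduces to solving a single linear system (in the standard Suykens formulation), rather than a quadratic program. The sketch below implements that formulation with a Gaussian kernel; the hyperparameters gamma and sigma are illustrative defaults, not values from the patent:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    """LS-SVM training: solve  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    K = gaussian_kernel(X, X, sigma)
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y.astype(float)]))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    """Decision rule: sign( sum_i alpha_i k(x, x_i) + b )."""
    return np.sign(gaussian_kernel(X_new, X_train, sigma) @ alpha + b)
```

Because every sample becomes a support vector with an equality constraint, LS-SVM trades the sparsity of a standard SVM for this closed-form training step.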
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911107613.4A CN112790750A (en) | 2019-11-13 | 2019-11-13 | Fear and tension emotion recognition method based on video eye movement and heart rate analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112790750A true CN112790750A (en) | 2021-05-14 |
Family
ID=75803144
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911107613.4A Withdrawn CN112790750A (en) | 2019-11-13 | 2019-11-13 | Fear and tension emotion recognition method based on video eye movement and heart rate analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112790750A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103561635A (en) * | 2011-05-11 | 2014-02-05 | 谷歌公司 | Gaze tracking system |
US20140112556A1 (en) * | 2012-10-19 | 2014-04-24 | Sony Computer Entertainment Inc. | Multi-modal sensor based emotion recognition and emotional interface |
US20150031965A1 (en) * | 2013-07-26 | 2015-01-29 | Tata Consultancy Services Limited | Monitoring physiological parameters |
CN109199412A (en) * | 2018-09-28 | 2019-01-15 | 南京工程学院 | Abnormal emotion recognition methods based on eye movement data analysis |
CN109512441A (en) * | 2018-12-29 | 2019-03-26 | 中山大学南方学院 | Emotion identification method and device based on multiple information |
Worldwide Applications
2019
- 2019-11-13 CN CN201911107613.4A patent/CN112790750A/en not_active Withdrawn
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113827240A (en) * | 2021-09-22 | 2021-12-24 | 北京百度网讯科技有限公司 | Emotion classification method and emotion classification model training method, device and equipment |
CN113827240B (en) * | 2021-09-22 | 2024-03-22 | 北京百度网讯科技有限公司 | Emotion classification method, training device and training equipment for emotion classification model |
CN113935424A (en) * | 2021-10-21 | 2022-01-14 | 中国银行股份有限公司 | Abnormal service prediction method and device |
CN114617554A (en) * | 2022-02-18 | 2022-06-14 | 国网浙江省电力有限公司湖州供电公司 | Auxiliary method and device based on business capability evaluation of emergency repair service seat personnel |
CN114677733A (en) * | 2022-03-25 | 2022-06-28 | 中国工商银行股份有限公司 | Information warning methods, systems, devices, terminal equipment, media and program products |
CN116595423A (en) * | 2023-07-11 | 2023-08-15 | 四川大学 | Air traffic controller cognitive load assessment method based on multi-feature fusion |
CN116595423B (en) * | 2023-07-11 | 2023-09-19 | 四川大学 | A cognitive load assessment method for air traffic controllers based on multi-feature fusion |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112790750A (en) | Fear and tension emotion recognition method based on video eye movement and heart rate analysis | |
US10004410B2 (en) | System and methods for measuring physiological parameters | |
Zhang | Automated biometrics: technologies and systems | |
KR101738278B1 (en) | Emotion recognition method based on image | |
CN106203497B (en) | Finger vena area-of-interest method for screening images based on image quality evaluation | |
Rigas et al. | Current research in eye movement biometrics: An analysis based on BioEye 2015 competition | |
Zhang et al. | Exercise fatigue detection algorithm based on video image information extraction | |
Nie et al. | SPIDERS: Low-cost wireless glasses for continuous in-situ bio-signal acquisition and emotion recognition | |
Liu et al. | Learning temporal similarity of remote photoplethysmography for fast 3D mask face presentation attack detection | |
Das et al. | Iris recognition performance in children: A longitudinal study | |
He et al. | Remote photoplethysmography heart rate variability detection using signal to noise ratio bandpass filtering | |
Tarmizi et al. | A review of facial thermography assessment for vital signs estimation | |
Muender et al. | Extracting heart rate from videos of online participants | |
Kuzu et al. | Gender-specific characteristics for hand-vein biometric recognition: analysis and exploitation | |
Mirabet-Herranz et al. | Lvt face database: A benchmark database for visible and hidden face biometrics | |
Hsieh et al. | The emotion recognition system with Heart Rate Variability and facial image features | |
CN115690528A (en) | EEG signal aesthetic evaluation processing method, device, medium and terminal across subject scenes | |
Zheng | Static and dynamic analysis of near infra-red dorsal hand vein images for biometric applications | |
KR102616230B1 (en) | Method for determining user's concentration based on user's image and operating server performing the same | |
Li | Biometric Person Identification Using Near-infrared Hand-dorsa Vein Images | |
Mohammadi et al. | Two-step deep learning for estimating human sleep pose occluded by bed covers | |
Ciftci et al. | Heart rate based face synthesis for pulse estimation | |
Rawat et al. | Real-Time Heartbeat Sensing with Face Video using a Webcam and OpenCV | |
CN114219772A (en) | Method, device, terminal equipment and storage medium for predicting health parameters | |
Al-Yoonus et al. | Video-Based Discrimination of Genuine and Counterfeit Facial Features Leveraging Cardiac Pulse Rhythm Signals in Access Control Systems. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 2021-05-14