
CN109614927B - Micro-expression recognition based on differences between consecutive frames and feature dimensionality reduction


Info

Publication number
CN109614927B
CN109614927B (application CN201811499959.9A)
Authority
CN
China
Prior art keywords
frame
value
difference
color
face
Prior art date
Legal status
Expired - Fee Related
Application number
CN201811499959.9A
Other languages
Chinese (zh)
Other versions
CN109614927A (en)
Inventor
张延良
郭辉
李赓
桂伟峰
王俊峰
蒋涵笑
卢冰
Current Assignee
Henan University of Technology
Original Assignee
Henan University of Technology
Priority date
Filing date
Publication date
Application filed by Henan University of Technology
Priority to CN201811499959.9A
Publication of CN109614927A
Application granted
Publication of CN109614927B
Status: Expired - Fee Related

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a micro-expression recognition method. Face recognition is performed on each frame of a video and the face region is extracted; the pixel count, background color, and face brightness of each frame of the video are extracted. Each non-first frame is selected in turn, and the face-region area difference, pixel-count difference, background-color difference, and face-brightness difference between the selected frame and its preceding frame are computed, followed by a difference value for each non-first frame. Frames whose difference value exceeds a preset threshold, together with the first frame of the video, are determined as candidate frames; among the candidate frames, frames with consecutive indices are determined as micro-expression frames. The expression features of the micro-expression frames are extracted, reduced in dimensionality by a pre-trained dimensionality-reduction model, and recognized to obtain the recognition result. Because micro-expression frames are selected for recognition according to the face-region area difference, pixel-count difference, background-color difference, and face-brightness difference, the frames related to micro-expressions can be accurately extracted from a face video, improving the efficiency and accuracy of micro-expression frame recognition.

Description

Micro-expression Recognition Based on Differences Between Consecutive Frames and Feature Dimensionality Reduction

Technical Field

This application relates to the field of artificial intelligence, and in particular to a micro-expression recognition method.

Background

A micro-expression is a form of non-verbal behavior that can reveal a person's genuine emotions.

Most current research focuses on ordinary facial expressions. Besides ordinary expressions, there are also micro-expressions, produced when facial muscles contract involuntarily while emotions are psychologically suppressed.

Micro-expressions are short in duration and involve very small movements, so observing and recognizing them correctly is quite difficult. The success rate of accurately capturing and recognizing micro-expressions with the naked eye is very low; even after professional training, the recognition rate reaches only 47%.

Micro-expression recognition methods have therefore attracted increasing attention from researchers.

Summary of the Invention

To address the above problems, an embodiment of the present application proposes a micro-expression recognition method comprising:

acquiring a face video;

performing face recognition on each frame of the video and extracting the face region;

extracting the pixel count, background color, and face brightness of each frame of the video;

selecting each non-first frame in turn, and computing the face-region area difference, pixel-count difference, background-color difference, and face-brightness difference between the selected frame and its preceding frame;

computing a difference value for each non-first frame, where difference value = (face-region area difference × face-brightness difference + background-color difference) ^ pixel-count difference;

determining frames whose difference value is greater than a preset threshold, together with the first frame of the video, as candidate frames;

among the candidate frames, determining frames with consecutive indices as micro-expression frames;

extracting expression features of the micro-expression frames, performing dimensionality reduction on the expression features with a pre-trained dimensionality-reduction model, and recognizing the reduced features to obtain a recognition result.

Optionally, extracting the background color of each frame of the video includes:

for any frame of the video,

determining the non-face region of the frame as the background region;

determining the RGB color value of each pixel in the background region of the frame, the RGB color value comprising a red value, a green value, and a blue value;

computing the RGB color means of the background region of the frame by the following formulas, the RGB color means comprising a red mean, a green mean, and a blue mean:

$$\bar{c}_1=\frac{1}{n_1}\sum_{j=1}^{n_1}c_{1j},\qquad \bar{c}_2=\frac{1}{n_1}\sum_{j=1}^{n_1}c_{2j},\qquad \bar{c}_3=\frac{1}{n_1}\sum_{j=1}^{n_1}c_{3j}$$

where $j$ indexes the pixels of the background region of the frame, $\bar{c}_1$, $\bar{c}_2$, and $\bar{c}_3$ are the red, green, and blue means of the background region, $c_{1j}$, $c_{2j}$, and $c_{3j}$ are the red, green, and blue values of the $j$-th pixel of the background region, and $n_1$ is the total number of pixels in the background region;

computing the RGB color mean square deviations of the background region of the frame, comprising a red, a green, and a blue mean square deviation:

$$\sigma_{11}=\sqrt{\frac{1}{n_1}\sum_{j=1}^{n_1}\left(c_{1j}-\bar{c}_1\right)^2},\qquad \sigma_{21}=\sqrt{\frac{1}{n_1}\sum_{j=1}^{n_1}\left(c_{2j}-\bar{c}_2\right)^2},\qquad \sigma_{31}=\sqrt{\frac{1}{n_1}\sum_{j=1}^{n_1}\left(c_{3j}-\bar{c}_3\right)^2}$$

where $\sigma_{11}$ is the red mean square deviation, $\sigma_{21}$ the green mean square deviation, and $\sigma_{31}$ the blue mean square deviation;

determining the RGB color intervals of the background region of the frame, comprising a red interval $[\bar{c}_1-\sigma_{11},\ \bar{c}_1+\sigma_{11}]$, a green interval $[\bar{c}_2-\sigma_{21},\ \bar{c}_2+\sigma_{21}]$, and a blue interval $[\bar{c}_3-\sigma_{31},\ \bar{c}_3+\sigma_{31}]$;

among all pixels of the background region of the frame, determining the number $n_2$ of pixels whose red value lies within the red interval, whose green value lies within the green interval, and whose blue value lies within the blue interval;

determining the background color of the frame according to $n_2$.

Optionally, the background color is represented by an RGB color value;

determining the background color of the frame according to $n_2$ includes:

computing the pixel ratio $n_3=n_2/n_1$ of the frame;

obtaining the red, green, and blue values of the background color of the frame by adjusting $\bar{c}_1$, $\bar{c}_2$, and $\bar{c}_3$, respectively, according to $n_3$.

Optionally, extracting the face brightness of each frame of the video includes:

for any frame of the video,

determining the brightness value of each pixel in the face region of the frame by the following formula:

$$h_k = 0.299\,R_k + 0.587\,G_k + 0.114\,B_k$$

where $k$ indexes the pixels of the face region of the frame, $h_k$ is the brightness value of the $k$-th pixel of the face region, and $R_k$, $G_k$, and $B_k$ are the red, green, and blue components of the RGB color value of the $k$-th pixel;

determining the maximum and minimum brightness values among the brightness values of all pixels of the face region of the frame;

computing the mean brightness of the face region of the frame, $\bar{h}=\frac{1}{n_4}\sum_{k=1}^{n_4}h_k$, where $n_4$ is the total number of pixels in the face region of the frame;

determining the face brightness of the frame according to the maximum brightness value, the minimum brightness value, and $\bar{h}$.

Optionally, determining the face brightness of the frame according to the maximum brightness value, the minimum brightness value, and $\bar{h}$ includes:

computing a first difference $d_1$ = maximum brightness value − minimum brightness value;

computing a second difference $d_2$ from the maximum brightness value and $\bar{h}$;

computing a third difference $d_3$ from the minimum brightness value and $\bar{h}$;

computing a brightness ratio $d_4=|d_1-d_2|/|d_1-d_3|$;

computing the brightness mean square deviation of the face region of the frame, $\sigma=\sqrt{\frac{1}{n_4}\sum_{k=1}^{n_4}\left(h_k-\bar{h}\right)^2}$;

obtaining the face brightness of the frame by adjusting $\bar{h}$ according to $d_4$ and $\sigma$.

Optionally, before computing the difference value of each non-first frame, the method further includes:

pre-screening the non-first frames according to the face-region area difference, pixel-count difference, background-color difference, and face-brightness difference of each non-first frame;

computing the difference value of each non-first frame then comprises:

computing the difference value of each frame that passes the pre-screening.

Optionally, pre-screening the non-first frames according to the face-region area difference, pixel-count difference, background-color difference, and face-brightness difference of each non-first frame includes:

for any non-first frame,

if the face-region area difference of the frame is not greater than a first value, the pixel-count difference is not greater than a second value, the background-color difference is not greater than a third value, and the face-brightness difference is not greater than a fourth value, the frame passes the pre-screening; or,

if the face-region area difference of the frame is not greater than the first value and the pixel-count difference, background-color difference, and face-brightness difference are all 0, the frame passes the pre-screening; or,

if the face-brightness difference of the frame is not greater than the fourth value and the face-region area difference, pixel-count difference, and background-color difference are all 0, the frame passes the pre-screening;

where the first value = (sum of the face-region area differences of all non-first frames + face-region area of the first frame − avg1) / total number of frames of the face video; the second value = (sum of the pixel-count differences of all non-first frames + pixel count of the first frame − avg2) / total number of frames; the third value = (sum of the background-color differences of all non-first frames + background color of the first frame − avg3) / total number of frames; the fourth value = (sum of the face-brightness differences of all non-first frames + face brightness of the first frame − avg4) / total number of frames; avg1 = sum of the face-region areas of all frames / total number of frames; avg2 = sum of the pixel counts of all frames / total number of frames; avg3 = sum of the background colors of all frames / total number of frames; avg4 = sum of the face brightnesses of all frames / total number of frames.

Optionally, before performing dimensionality reduction on the expression features with the pre-trained dimensionality-reduction model, the method further includes:

obtaining a sample set X, where the total number of samples in X is m, each sample includes multiple expression features, and each sample belongs to one class;

grouping all samples by class;

computing the mean vector of each class, $\mu_i=\frac{1}{b_i}\sum_{j=1}^{b_i}x_{ij}$, where $i$ is the class index, $\mu_i$ is the mean vector of class $i$, $b_i$ is the number of samples in class $i$, $j$ is the sample index, and $x_{ij}$ is the vector of expression features of the $j$-th sample of class $i$;

determining the total mean vector from the class mean vectors, $\mu_0=\frac{1}{E}\sum_{i=1}^{E}\mu_i$, where $\mu_0$ is the total mean vector and $E$ is the total number of distinct classes of the samples in X;

computing the between-class variance vector and the within-class variance vector according to the total mean vector;

determining the dimensionality-reduced expression features according to the between-class and within-class variance vectors to form the dimensionality-reduction model.

Optionally, computing the between-class variance vector and the within-class variance vector according to the total mean vector includes:

$$S_b=\sum_{i=1}^{E} b_i\left(\mu_i-\mu_0\right)\left(\mu_i-\mu_0\right)^{\mathsf T}$$

$$S_w=\sum_{i=1}^{E}\sum_{x\in X_i}\left(x-\mu_i\right)\left(x-\mu_i\right)^{\mathsf T}$$

where $S_b$ is the between-class variance vector, $S_w$ is the within-class variance vector, and $X_i$ is the set of samples of class $i$.

Optionally, determining the dimensionality-reduced expression features according to the between-class and within-class variance vectors includes:

computing the weight vector $W=\mathrm{diag}(S_b\,./\,S_w)$ composed of the weights of the expression features, where $\mathrm{diag}()$ is a function that takes the elements on the diagonal of a matrix, and $./$ is an operator that divides corresponding elements of $S_b$ by those of $S_w$;

sorting the expression features in descending order of their weights;

determining a preset number of the top-ranked expression features as the dimensionality-reduced expression features.

The beneficial effects are as follows:

Selecting micro-expression frames for recognition according to the face-region area difference, pixel-count difference, background-color difference, and face-brightness difference makes it possible to accurately extract the frames related to micro-expressions from a face video, improving the efficiency and accuracy of micro-expression frame recognition.

Brief Description of the Drawings

Specific embodiments of the present application are described below with reference to the accompanying drawings, in which:

Fig. 1 is a schematic diagram of the principle of a dimensionality-reduction model with two classes, provided by an embodiment of the present application;

Fig. 2 is a schematic flowchart of a micro-expression recognition method provided by an embodiment of the present application;

Fig. 3 is a schematic diagram of LBP descriptor computation provided by an embodiment of the present application;

Fig. 4 is a schematic diagram of feature extraction provided by an embodiment of the present application.

具体实施方式Detailed ways

为了使本申请的技术方案及优点更加清楚明白,以下结合附图对本申请的示例性实施例进行进一步详细的说明,显然,所描述的实施例仅是本申请的一部分实施例,而不是所有实施例的穷举。并且在不冲突的情况下,本说明中的实施例及实施例中的特征可以互相结合。In order to make the technical solutions and advantages of the present application clearer, the exemplary embodiments of the present application will be further described in detail below in conjunction with the accompanying drawings. Obviously, the described embodiments are only part of the embodiments of the present application, not all implementations. Exhaustive list of examples. And in the case of no conflict, the embodiments in this description and the features in the embodiments can be combined with each other.

由于微表情的持续时间短,且动作幅度非常小。因此正确观测并且识别有着相当的难度。基于此,本申请提供一种微表情识别方法,该方法比较每一帧与其后一帧的差别,以及与其前一帧的差别,得到该帧的差异值,根据各帧的差异值确定微表情帧,该方法可以准确的提取人脸视频中与微表情相关的帧,提升微表情帧的识别效率与准确性。Due to the short duration of micro-expression, and the movement range is very small. Therefore, it is quite difficult to observe and identify correctly. Based on this, the present application provides a micro-expression recognition method, which compares the difference between each frame and the next frame, and the difference with the previous frame to obtain the difference value of the frame, and determine the micro-expression according to the difference value of each frame frame, this method can accurately extract frames related to micro-expressions in face videos, and improve the recognition efficiency and accuracy of micro-expression frames.

本申请提供的表情识别方法包括2个大过程,第一个大过程是训练降维模型过程,另一个大过程是基于训练好的降维模型进行实际微表情识别过程。The facial expression recognition method provided in this application includes two major processes, the first major process is the process of training the dimensionality reduction model, and the other major process is the actual micro-expression recognition process based on the trained dimensionality reduction model.

训练降维模型过程并不是每次执行本申请提供的表情识别方法均要执行的过程,仅当第一次执行本申请提供的表情识别方法,或者,表情识别场景发生变化,或者,基于训练好的降维模型在进行实际微表情识别时,表情特征降维效果不理想,或者其他原因时,才会执行训练降维模型过程,以提升表情特征降维效果,进而提升实际微表情识别结果的准确性。The process of training the dimensionality reduction model is not a process that must be performed every time the expression recognition method provided by this application is executed, only when the expression recognition method provided by this application is executed for the first time, or the expression recognition scene changes, or, based on the training When the dimensionality reduction model is used for actual micro-expression recognition, the dimensionality reduction effect of expression features is not ideal, or other reasons, the process of training dimensionality reduction model will be executed to improve the dimensionality reduction effect of expression features, and then improve the accuracy of the actual micro-expression recognition results. accuracy.

本申请不对训练降维模型过程的执行触发条件进行限定。This application does not limit the execution trigger conditions of the process of training the dimensionality reduction model.

The training stage is implemented as follows.

Step 1: obtain the sample set X.

The total number of samples in X is m; each sample includes multiple expression features, and each sample belongs to one class.

For example, suppose the samples in X belong to E different classes: class 1, class 2, ..., class i, ..., class E. Class 1 contains b_1 samples, which form the set X_1; class 2 contains b_2 samples, which form the set X_2; and so on.

Step 2: group all samples by class.

Continuing the example from step 1, this step divides all samples into E groups: the samples belonging to class 1 form one group, the samples belonging to class 2 form another, and so on.

Step 3: compute the mean vector of each class.

Specifically, for any class (e.g., class i), its mean vector is computed by the following formula:

$$\mu_i=\frac{1}{b_i}\sum_{j=1}^{b_i}x_{ij}$$

where $i$ is the class index, $\mu_i$ is the mean vector of class $i$, $b_i$ is the number of samples in class $i$, $j$ is the sample index, and $x_{ij}$ is the vector of expression features of the $j$-th sample of class $i$.

Step 4: determine the total mean vector from the class mean vectors.

Specifically, the total mean vector is determined by the following formula:

$$\mu_0=\frac{1}{E}\sum_{i=1}^{E}\mu_i$$

where $\mu_0$ is the total mean vector and $E$ is the total number of distinct classes of the samples in X.

Step 5: compute the between-class variance vector and the within-class variance vector from the total mean vector.

The formulas are as follows:

$$S_b=\sum_{i=1}^{E} b_i\left(\mu_i-\mu_0\right)\left(\mu_i-\mu_0\right)^{\mathsf T}$$

$$S_w=\sum_{i=1}^{E}\sum_{x\in X_i}\left(x-\mu_i\right)\left(x-\mu_i\right)^{\mathsf T}$$

where $S_b$ is the between-class variance vector, $S_w$ is the within-class variance vector, and $X_i$ is the set of samples of class $i$.

Step 6: determine the dimensionality-reduced expression features from the between-class and within-class variance vectors to form the dimensionality-reduction model.

The computation is as follows:

1) Compute the weight vector $W=\mathrm{diag}(S_b\,./\,S_w)$ composed of the weights of the expression features.

Here $\mathrm{diag}()$ is a function that takes the elements on the diagonal of a matrix, and $./$ is an operator that divides corresponding elements of $S_b$ by those of $S_w$.

2) Sort the expression features in descending order of their weights.

3) Determine a preset number of the top-ranked expression features as the dimensionality-reduced expression features.

The dimensionality-reduced expression features form a feature subset F. The larger a feature's weight, the better that feature component discriminates between micro-expression classes.

The output feature subset constitutes the dimensionality-reduction model.
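
Steps 1 through 6 above amount to scoring each feature component by the ratio of its between-class variance to its within-class variance and keeping the highest-scoring components. Below is a minimal NumPy sketch of that selection procedure; the function name, the array layout, and the n_keep parameter are illustrative assumptions rather than part of the patent.

```python
import numpy as np

def train_reduction_model(X, y, n_keep):
    """Select the n_keep feature components with the largest
    between-class / within-class variance ratio (steps 1-6)."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])  # step 3: class means
    mu0 = mu.mean(axis=0)                                     # step 4: total mean

    # Step 5: per-component between-class and within-class variances
    # (the diagonals of the scatter matrices S_b and S_w).
    s_b = np.zeros(X.shape[1])
    s_w = np.zeros(X.shape[1])
    for i, c in enumerate(classes):
        Xi = X[y == c]
        s_b += len(Xi) * (mu[i] - mu0) ** 2
        s_w += ((Xi - mu[i]) ** 2).sum(axis=0)

    # Step 6: weight each component by W = diag(S_b ./ S_w) and keep
    # the n_keep components with the largest weights.
    w = s_b / (s_w + 1e-12)                # small epsilon avoids division by zero
    return np.argsort(w)[::-1][:n_keep]    # indices of the feature subset F
```

Reducing a sample's feature vector x then amounts to indexing it with the returned array, x[keep].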

Fig. 1 is a schematic diagram of the principle of a dimensionality-reduction model with two classes.

The actual micro-expression recognition process based on the trained dimensionality-reduction model is shown in Fig. 2.

S101: acquire a face video.

Because micro-expressions are short in duration and involve very small movements, the face video in this step only needs to contain a face in every frame; it does not have to correspond precisely to a micro-expression.

S102: perform face recognition on each frame of the video and extract the face region.

This embodiment does not limit the method used to extract the face region; any existing extraction method may be used.

S103: extract the pixel count, background color, and face brightness of each frame of the video.

Video files produced by capture devices with different configurations have different pixel counts, and a change in pixel count between consecutive frames affects micro-expression recognition; for this reason, the pixel count of each frame of the video is extracted.

The pixel count can be expressed as a single number, for example a "0.3 megapixel" digital camera rated at 300,000 pixels, or as a pair of numbers, for example a "640*480 display", meaning 640 pixels horizontally and 480 vertically (as in a VGA display). A pair of numbers can also be converted into a single number: a 640*480 display has 640*480 = 307,200 pixels.

The pixel count of each frame in this step is the total number of pixels in the frame and can be calculated from the image resolution. For example, if the resolution of a frame is 1280*960, its pixel count is 1280*960 = 1,228,800.

This embodiment does not limit the method used to extract the pixel count; any existing extraction method may be used.

Implementations for extracting the background color of each frame of the video include, but are not limited to, the following.

For any frame of the video:

Step 1.1: determine the non-face region of the frame as the background region.

Step 1.2: determine the RGB color value of each pixel in the background region of the frame.

The RGB color value comprises a red value, a green value, and a blue value.

Step 1.3: compute the RGB color means of the background region of the frame by the following formulas.

The RGB color means comprise a red mean, a green mean, and a blue mean:

$$\bar{c}_1=\frac{1}{n_1}\sum_{j=1}^{n_1}c_{1j},\qquad \bar{c}_2=\frac{1}{n_1}\sum_{j=1}^{n_1}c_{2j},\qquad \bar{c}_3=\frac{1}{n_1}\sum_{j=1}^{n_1}c_{3j}$$

Here $j$ indexes the pixels of the background region of the frame, $\bar{c}_1$, $\bar{c}_2$, and $\bar{c}_3$ are the red, green, and blue means of the background region, $c_{1j}$, $c_{2j}$, and $c_{3j}$ are the red, green, and blue values of the $j$-th pixel of the background region, and $n_1$ is the total number of pixels in the background region.

Step 1.4: compute the RGB color mean square deviations of the background region of the frame.

The RGB color mean square deviations comprise a red, a green, and a blue mean square deviation:

$$\sigma_{11}=\sqrt{\frac{1}{n_1}\sum_{j=1}^{n_1}\left(c_{1j}-\bar{c}_1\right)^2},\qquad \sigma_{21}=\sqrt{\frac{1}{n_1}\sum_{j=1}^{n_1}\left(c_{2j}-\bar{c}_2\right)^2},\qquad \sigma_{31}=\sqrt{\frac{1}{n_1}\sum_{j=1}^{n_1}\left(c_{3j}-\bar{c}_3\right)^2}$$

Here $\sigma_{11}$ is the red mean square deviation, $\sigma_{21}$ the green mean square deviation, and $\sigma_{31}$ the blue mean square deviation.

Step 1.5: determine the RGB color intervals of the background region of the frame.

The RGB color intervals comprise a red interval $[\bar{c}_1-\sigma_{11},\ \bar{c}_1+\sigma_{11}]$, a green interval $[\bar{c}_2-\sigma_{21},\ \bar{c}_2+\sigma_{21}]$, and a blue interval $[\bar{c}_3-\sigma_{31},\ \bar{c}_3+\sigma_{31}]$.

Step 1.6: among all pixels of the background region of the frame, determine the number $n_2$ of pixels whose red value lies within the red interval, whose green value lies within the green interval, and whose blue value lies within the blue interval.

Step 1.7: determine the background color of the frame according to $n_2$.

The background color is represented by an RGB color value comprising a red value, a green value, and a blue value.

Specifically, the pixel ratio $n_3=n_2/n_1$ of the frame is computed, and the red, green, and blue values of the background color of the frame are obtained by adjusting $\bar{c}_1$, $\bar{c}_2$, and $\bar{c}_3$, respectively, according to $n_3$.

The background color extraction method provided by this embodiment does not simply take the per-channel means of the background pixels' RGB values as the background color. Instead, it dynamically adjusts the means according to the distribution of the per-channel values and uses the adjusted values as the background color, so that the determined background color better matches the actual situation.
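
A compact NumPy sketch of steps 1.1 through 1.7 follows, assuming the face region is given as a boolean mask. Because the interval and adjustment formulas appear in the original only as figures, the mean ± deviation interval and the scaling of the channel means by n3 are stated assumptions.

```python
import numpy as np

def background_color(frame, face_mask):
    """Steps 1.1-1.7: estimate the background color of one frame.

    frame: (H, W, 3) RGB image; face_mask: (H, W) bool, True on the face.
    """
    bg = frame[~face_mask].astype(np.float64)  # steps 1.1-1.2: background pixels, (n1, 3)
    n1 = len(bg)
    mean = bg.mean(axis=0)                     # step 1.3: per-channel means
    std = bg.std(axis=0)                       # step 1.4: per-channel deviations

    # Steps 1.5-1.6: count pixels whose R, G and B values all fall inside
    # the mean +/- deviation interval of their channel (assumed interval).
    n2 = int(np.all(np.abs(bg - mean) <= std, axis=1).sum())

    n3 = n2 / n1                               # step 1.7: pixel ratio
    return n3 * mean                           # assumed adjustment of the means
```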

Implementations for extracting the face brightness of each frame of the video include, but are not limited to, the following.

For any frame of the video:

Step 2.1: determine the brightness value of each pixel in the face region of the frame by the following formula:

$$h_k = 0.299\,R_k + 0.587\,G_k + 0.114\,B_k$$

Here $k$ indexes the pixels of the face region of the frame, $h_k$ is the brightness value of the $k$-th pixel of the face region, and $R_k$, $G_k$, and $B_k$ are the red, green, and blue components of the RGB color value of the $k$-th pixel.

Step 2.2: determine the maximum and minimum brightness values among the brightness values of all pixels of the face region of the frame.

Step 2.3: compute the mean brightness of the face region of the frame, $\bar{h}=\frac{1}{n_4}\sum_{k=1}^{n_4}h_k$.

Here $n_4$ is the total number of pixels in the face region of the frame.

Step 2.4: determine the face brightness of the frame from the maximum brightness value, the minimum brightness value, and $\bar{h}$.

Specifically:

1) Compute the first difference $d_1$ = maximum brightness value − minimum brightness value.

2) Compute the second difference $d_2$ from the maximum brightness value and $\bar{h}$.

3) Compute the third difference $d_3$ from the minimum brightness value and $\bar{h}$.

4) Compute the brightness ratio $d_4=|d_1-d_2|/|d_1-d_3|$.

5) Compute the brightness mean square deviation of the face region of the frame, $\sigma=\sqrt{\frac{1}{n_4}\sum_{k=1}^{n_4}\left(h_k-\bar{h}\right)^2}$.

6) Obtain the face brightness of the frame by adjusting $\bar{h}$ according to $d_4$ and $\sigma$.

The face brightness extraction method provided by this embodiment does not simply take the mean brightness of the pixels in the face region as the face brightness. Instead, it dynamically adjusts the mean according to the gaps between the pixel brightnesses and the maximum and minimum brightnesses, and uses the adjusted value as the face brightness, so that the determined face brightness better matches the actual situation.
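
A sketch of steps 2.1 through 2.4 follows. The luma weights in step 2.1, the exact forms of d2 and d3, and the final combination of the mean, d4, and the deviation are all assumptions, since the patent gives these formulas only as figures.

```python
import numpy as np

def face_brightness(frame, face_mask):
    """Steps 2.1-2.4: estimate the face brightness of one frame."""
    face = frame[face_mask].astype(np.float64)             # (n4, 3) RGB values
    # Step 2.1: per-pixel brightness (standard luma weights, assumed)
    h = 0.299 * face[:, 0] + 0.587 * face[:, 1] + 0.114 * face[:, 2]

    h_max, h_min, h_mean = h.max(), h.min(), h.mean()      # steps 2.2-2.3
    d1 = h_max - h_min
    d2 = h_max - h_mean                                    # assumed second difference
    d3 = h_mean - h_min                                    # assumed third difference
    d4 = abs(d1 - d2) / max(abs(d1 - d3), 1e-12)           # brightness ratio
    sigma = h.std()                                        # brightness deviation
    return h_mean + d4 * sigma                             # assumed adjustment
```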

S104: select each non-first frame in turn, and compute the face-region area difference, pixel-count difference, background-color difference, and face-brightness difference between the selected frame and its preceding frame.

Starting from the second frame and ending with the last, each frame is selected in turn. The difference between the face-region area of the selected frame and that of its preceding frame is the face-region area difference; the difference in pixel counts is the pixel-count difference; the difference in background colors is the background-color difference; and the difference in face brightnesses is the face-brightness difference.

For example: face-region area difference = face-region area of the selected frame − face-region area of its preceding frame; pixel-count difference = pixel count of the selected frame − pixel count of its preceding frame; background-color difference = background color of the selected frame − background color of its preceding frame; face-brightness difference = face brightness of the selected frame − face brightness of its preceding frame.

S105: compute the difference value of each non-first frame.

Here, difference value = (face-region area difference × face-brightness difference + background-color difference) ^ pixel-count difference,

where ^ is the exponentiation operator.

In addition, to speed up execution of the solution provided by this embodiment, the non-first frames may be pre-screened before the difference value of each non-first frame is computed, removing frames that obviously show a different person or obviously contain no micro-expression.

That is, S105 is executed as follows: pre-screen the non-first frames according to the face-region area difference, pixel-count difference, background-color difference, and face-brightness difference of each non-first frame, then compute the difference value of each frame that passes the pre-screening.

Schemes for pre-screening the non-first frames according to these four differences include, but are not limited to, the following.

For any non-first frame: if its face-region area difference is not greater than a first value, its pixel-count difference is not greater than a second value, its background-color difference is not greater than a third value, and its face-brightness difference is not greater than a fourth value, the frame passes the pre-screening; or, if its face-region area difference is not greater than the first value and its pixel-count difference, background-color difference, and face-brightness difference are all 0, the frame passes the pre-screening; or, if its face-brightness difference is not greater than the fourth value and its face-region area difference, pixel-count difference, and background-color difference are all 0, the frame passes the pre-screening.

Here, the first value = (sum of the face-region area differences of all non-first frames + face-region area of the first frame − avg1) / total number of frames of the face video; the second value = (sum of the pixel-count differences of all non-first frames + pixel count of the first frame − avg2) / total number of frames; the third value = (sum of the background-color differences of all non-first frames + background color of the first frame − avg3) / total number of frames; the fourth value = (sum of the face-brightness differences of all non-first frames + face brightness of the first frame − avg4) / total number of frames; avg1 = sum of the face-region areas of all frames / total number of frames; avg2 = sum of the pixel counts of all frames / total number of frames; avg3 = sum of the background colors of all frames / total number of frames; avg4 = sum of the face brightnesses of all frames / total number of frames.
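
A minimal sketch of the S104/S105 computation follows, treating each per-frame descriptor as a scalar (the background color is collapsed to a single value here for simplicity); the function name and argument layout are illustrative assumptions.

```python
def difference_values(area, pixels, color, brightness):
    """S104: per-frame differences against the preceding frame.
    S105: difference value = (area diff * brightness diff + color diff)
    raised to the pixel-count diff. One scalar per frame per list."""
    values = []
    for t in range(1, len(area)):               # every non-first frame
        d_area = area[t] - area[t - 1]
        d_pix = pixels[t] - pixels[t - 1]
        d_col = color[t] - color[t - 1]
        d_bri = brightness[t] - brightness[t - 1]
        values.append((d_area * d_bri + d_col) ** d_pix)
    return values                               # values[t-1] belongs to frame t
```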

S106: determine the frames whose difference value is greater than a preset threshold, together with the first frame of the video, as candidate frames.

The preset threshold controls how large a difference must be, and different micro-expressions differ by different amounts. Choosing the preset threshold adaptively according to the application domain of the method ensures the generality of the expression recognition method provided by this application.

S107: among the candidate frames, determine the frames with consecutive indices as micro-expression frames.

For example, if the candidate frames are frame 3, frame 5, frame 6, frame 8, and frame 9, then the frames with consecutive indices (frames 5 and 6, and frames 8 and 9) are all determined as micro-expression frames.

In that case, frames 5 and 6 likely represent one micro-expression, and frames 8 and 9 another.

The above is only an example and does not represent an actual situation.

This application does not restrict "consecutive" beyond excluding isolated frames. For example, if 2 frames have consecutive indices, both are determined as micro-expression frames; likewise, if 3 frames have consecutive indices, all 3 are determined as micro-expression frames.
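
A sketch of the S106/S107 selection follows; it reproduces the example above (candidates 3, 5, 6, 8, 9 yield micro-expression frames 5, 6, 8, 9), with frame 0, the first frame, always included as a candidate. The helper name is an illustrative assumption.

```python
def micro_expression_frames(values, threshold):
    """S106: frames whose difference value exceeds the threshold, plus
    frame 0, are candidates. S107: runs of two or more consecutive
    candidate indices are micro-expression frames."""
    candidates = [0] + [t + 1 for t, v in enumerate(values) if v > threshold]

    frames, run = [], [candidates[0]]
    for idx in candidates[1:]:
        if idx == run[-1] + 1:
            run.append(idx)              # extend the current consecutive run
        else:
            if len(run) >= 2:
                frames.extend(run)       # keep completed runs of length >= 2
            run = [idx]
    if len(run) >= 2:
        frames.extend(run)
    return frames
```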

S108: extract the expression features of the micro-expression frames, perform dimensionality reduction on the expression features with the pre-trained dimensionality-reduction model, and recognize the reduced features to obtain the recognition result.

In this step, a micro-expression recognition model may be trained in advance. The expression features of the micro-expression frames are then extracted and reduced in dimensionality by the pre-trained dimensionality-reduction model, and the micro-expression recognition model is used to recognize the reduced features and obtain the recognition result.

The training process of the micro-expression recognition model includes, but is not limited to, the following.

Step 3.1: obtain multiple sample videos.

Sample videos can be obtained from existing micro-expression datasets.

Micro-expressions are tiny facial movements produced when a person tries to conceal their emotions. Strictly speaking, small expressions that people simulate deliberately cannot be called micro-expressions, so the way micro-expressions are induced determines the reliability of the data.

In this step, multiple sample videos can be obtained from one or both of the following existing micro-expression datasets.

The SMIC micro-expression dataset, built by the University of Oulu in Finland, asks subjects to watch videos that provoke strong emotions while trying to keep their emotions from showing; the recorder observes the subjects' expressions without watching the videos, and a subject is penalized if the recorder detects a facial expression. Under this induction mechanism, 164 video sequences from 16 subjects were collected, covering 3 micro-expression classes: positive, surprise, and negative, with 70, 51, and 43 sequences respectively.

The CASME2 micro-expression dataset, built by the Institute of Psychology of the Chinese Academy of Sciences, uses a similar induction mechanism to ensure data reliability, except that a subject is rewarded if they successfully suppress their facial expressions without being detected by the recorder. The dataset consists of 247 video sequences from 26 subjects, covering 5 micro-expression classes: happiness, disgust, surprise, repression, and other, with 32, 64, 25, 27, and 99 sequences respectively.

Step 3.2: for each sample video, extract the corresponding expression features using the local binary pattern.

The Local Binary Pattern (LBP) descriptor is defined over a central pixel and its surrounding rectangular neighborhood, as shown in Fig. 3. Using the gray value of the central pixel as a threshold, the neighboring pixels are binarized: a neighbor greater than or equal to the central value is coded as 1, and a smaller one as 0, forming a local binary pattern.

Concatenating this binary pattern clockwise, starting from the upper-left corner, yields a string of binary digits whose corresponding decimal number uniquely identifies the central pixel. In this way, a local binary pattern can be computed for every pixel in the image.

As shown in Fig. 3, the central pixel value in the left table is 178. The upper-left value is 65, and 65 < 178, so the corresponding bit is 0; 188 > 178, so its corresponding bit is 1. Continuing in this way yields the table on the right of Fig. 3 and the binary pattern value 01000100.
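
The sketch below computes the LBP code of a single 3×3 patch as just described. Apart from the values 65, 188, and 178 quoted from Fig. 3, the neighbor values are hypothetical, chosen so that the code comes out to the 01000100 of the example.

```python
import numpy as np

def lbp_code(patch):
    """LBP code of the central pixel of a 3x3 patch: neighbors >= the
    center are coded 1, read clockwise from the upper-left corner."""
    center = patch[1, 1]
    clockwise = [(0, 0), (0, 1), (0, 2), (1, 2),
                 (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = ''.join('1' if patch[r, c] >= center else '0' for r, c in clockwise)
    return bits, int(bits, 2)

patch = np.array([[ 65, 188,  42],    # 65 and 188 are taken from Fig. 3;
                  [101, 178,  70],    # the remaining neighbors are hypothetical
                  [ 31, 203,  90]])
print(lbp_code(patch))                # ('01000100', 68)
```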

In addition, the static LBP texture descriptor can be extended into the spatio-temporal domain, forming two-dimensional local binary patterns on three orthogonal planes. As shown in Fig. 4, LBP features are extracted from the video sequence on the three orthogonal planes XY, XT, and YT, and the feature vectors from the three planes are concatenated to form the LBP-TOP feature vector. This approach captures the local texture of each image while also describing how the video changes over time.

However, the vector dimension of LBP-TOP is 3×2^L, where L is the number of neighborhood points. Modeling directly on the expression features extracted in step 3.2 would make model training slow and yield poor results because of the large feature dimension. Therefore, after extracting the expression features in step 3.2, this application reduces the dimension of the expression features considered when the model is actually trained in step 3.3, improving model training efficiency.

Step 3.3: perform recognition training on the sample videos to form the micro-expression recognition model.

Various training methods can be used in this step; this embodiment adopts the following one.

3.3.1: use any clustering algorithm (for example, k-means) to cluster the sample videos based on their expression features, forming the micro-expression class of each sample video.

3.3.2: adjust the parameters of the clustering algorithm according to the second standard classification result of each sample video.

Since each sample video carries a label identifying its micro-expression class, that label is obtained in this step and used as the second standard classification result of the sample video.

3.3.3: repeat 3.3.1 and 3.3.2 to complete the training and form the micro-expression recognition model.

The micro-expression recognition model in this application is a classifier.

For example, the Support Vector Machine (SVM) method can be used. The key to an SVM is its kernel function; different kernel functions yield different SVM classification results.

For example, the following kernel functions can be used: the linear kernel, the chi-square kernel, and the histogram intersection kernel.

In addition, to improve the classification accuracy of the final trained model, cross-validation can be used to evaluate the performance of the micro-expression recognition model. Specifically, all sample videos are divided into two subsets: one, called the training set, is used to train the classifier; the other, called the test set, is used to verify and analyze the classifier's effectiveness. The trained classifier is tested on the test set, and the result serves as the classifier's performance index. Commonly used methods include simple cross-validation, K-fold cross-validation, and leave-one-out cross-validation.

Take, as an example, leave-one-subject-out cross-validation of SVM classifiers with different kernel functions for micro-expression classification. Each time, all video sequences of one subject are used as the test samples and all video sequences of the remaining I subjects as the training samples; the experiment is repeated I+1 times, and the average classification accuracy over the I+1 runs is computed.
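
A sketch of this leave-one-subject-out evaluation using scikit-learn follows; the choice of scikit-learn and of the linear kernel are illustrative assumptions (a chi-square or histogram intersection kernel can be passed as a callable instead).

```python
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def loso_accuracy(features, labels, subjects):
    """Hold out every sequence of one subject per fold, train on the
    rest, and average the per-fold classification accuracies."""
    clf = SVC(kernel='linear')
    scores = cross_val_score(clf, features, labels,
                             groups=subjects, cv=LeaveOneGroupOut())
    return scores.mean()
```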

基于此,完成一个微表情识别模型的训练。Based on this, the training of a micro-expression recognition model is completed.

在训练好微表情识别模型后,通过预先训练的降维模型对表情特征进行降维处理,采用该微表情识别模型对降维后的特征进行识别,得到识别结果。After the micro-expression recognition model is trained, the dimensionality reduction process is performed on the expression features through the pre-trained dimensionality reduction model, and the micro-expression recognition model is used to recognize the reduced-dimensionality features, and the recognition result is obtained.

由于在通过微表情识别模型进行微表情识别之前,对表情特征进行了降维处理,可以提升微表情识别模型的识别速率和识别准确率。Since the expression features are dimensionally reduced before the micro-expression recognition model is used for micro-expression recognition, the recognition rate and recognition accuracy of the micro-expression recognition model can be improved.

It should be noted that "first", "second", "third", etc. in this and subsequent embodiments are used only to distinguish the preset thresholds, classification results, standard classification results, etc. of different steps, and carry no other special meaning.

Beneficial effects:

Selecting micro-expression frames for recognition based on the differences in face-region area, pixel count, background color, and face brightness accurately extracts the frames related to micro-expressions from a face video, improving both the efficiency and the accuracy of micro-expression frame identification.
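A sketch of this frame-selection rule, reading the "^" in the claim-1 difference value as exponentiation (an assumption on our part) and taking the four per-frame difference sequences as NumPy arrays over the non-first frames:

```python
# Sketch of candidate-frame selection from the four frame-to-frame differences.
import numpy as np

def candidate_frames(area_diff, brightness_diff, color_diff, pixel_diff, threshold):
    # difference value = (face-area diff * face-brightness diff + background-color diff) ^ pixel-count diff
    diff = (area_diff * brightness_diff + color_diff) ** pixel_diff
    candidates = [0]  # the first frame of the video is always a candidate
    candidates += [i + 1 for i, d in enumerate(diff) if d > threshold]
    return candidates
```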

Claims (10)

1. A micro-expression recognition method, characterized in that the method comprises:
acquiring a face video;
performing face recognition on each frame of the video and extracting the face region;
extracting the pixel count, background color, and face brightness of each frame of the video;
selecting each non-first frame in turn and computing the face-region area difference, pixel-count difference, background-color difference, and face-brightness difference between the selected frame and its preceding frame;
computing a difference value for each non-first frame, where difference value = (face-region area difference × face-brightness difference + background-color difference) ^ pixel-count difference;
determining the frames whose difference value exceeds a preset threshold, together with the first frame of the video, as candidate frames;
determining, among the candidate frames, those whose frame identifiers are consecutive as micro-expression frames;
extracting the expression features of the micro-expression frames, reducing the dimensionality of the expression features with a pre-trained dimensionality-reduction model, and recognizing the reduced features to obtain the recognition result.

2. The method according to claim 1, characterized in that extracting the background color of each frame of the video comprises:
for any frame of the video,
determining the non-face region of the frame as the background region;
determining the RGB color value of each pixel in the background region of the frame, the RGB color value comprising a red color value, a green color value, and a blue color value;
computing the RGB color means of the background region of the frame by the following formulas, the RGB color means comprising a red color mean, a green color mean, and a blue color mean:

$$\bar{c}_1 = \frac{1}{n_1}\sum_{j=1}^{n_1} c_{1j}, \qquad \bar{c}_2 = \frac{1}{n_1}\sum_{j=1}^{n_1} c_{2j}, \qquad \bar{c}_3 = \frac{1}{n_1}\sum_{j=1}^{n_1} c_{3j}$$

where j indexes the pixels of the background region of the frame, $\bar{c}_1$, $\bar{c}_2$, and $\bar{c}_3$ are the red, green, and blue color means of the background region, $c_{1j}$, $c_{2j}$, and $c_{3j}$ are the red, green, and blue color values of the j-th pixel of the background region, and $n_1$ is the total number of pixels in the background region;
computing the RGB color standard deviations of the background region of the frame, comprising a red, a green, and a blue color standard deviation:

$$\sigma_{11} = \sqrt{\frac{1}{n_1}\sum_{j=1}^{n_1}\bigl(c_{1j}-\bar{c}_1\bigr)^2}, \qquad \sigma_{21} = \sqrt{\frac{1}{n_1}\sum_{j=1}^{n_1}\bigl(c_{2j}-\bar{c}_2\bigr)^2}, \qquad \sigma_{31} = \sqrt{\frac{1}{n_1}\sum_{j=1}^{n_1}\bigl(c_{3j}-\bar{c}_3\bigr)^2}$$

where $\sigma_{11}$ is the red color standard deviation, $\sigma_{21}$ the green color standard deviation, and $\sigma_{31}$ the blue color standard deviation;
determining the RGB color intervals of the background region of the frame, comprising the red color interval $[\bar{c}_1-\sigma_{11},\,\bar{c}_1+\sigma_{11}]$, the green color interval $[\bar{c}_2-\sigma_{21},\,\bar{c}_2+\sigma_{21}]$, and the blue color interval $[\bar{c}_3-\sigma_{31},\,\bar{c}_3+\sigma_{31}]$;
determining, among all pixels of the background region of the frame, the number $n_2$ of pixels whose red color value lies within the red color interval, whose green color value lies within the green color interval, and whose blue color value lies within the blue color interval;
determining the background color of the frame according to $n_2$.
3. The method according to claim 2, characterized in that the background color is represented by an RGB color value, and determining the background color of the frame according to $n_2$ comprises:
computing the pixel-count ratio $n_3 = n_2 / n_1$ of the frame;
taking as the red, green, and blue color values of the frame's background color three quantities given by formulas that are published only as images in the source and are not recoverable here.
4. The method according to claim 1, characterized in that extracting the face brightness of each frame of the video comprises:
for any frame of the video,
determining the brightness value of each pixel in the face region of the frame by a formula published only as an image in the source (not recoverable here), where k indexes the pixels of the face region, $h_k$ is the brightness value of the k-th pixel, and $R_k$, $G_k$, and $B_k$ are the red, green, and blue color values of the k-th pixel's RGB color value;
determining the maximum brightness value and the minimum brightness value among the brightness values of all pixels in the face region of the frame;
computing the mean brightness of the face region of the frame, $\bar{h} = \frac{1}{n_4}\sum_{k=1}^{n_4} h_k$, where $n_4$ is the total number of pixels in the face region of the frame;
determining the face brightness of the frame according to the maximum brightness value, the minimum brightness value, and $\bar{h}$.
5. The method according to claim 4, characterized in that determining the face brightness of the frame according to the maximum brightness value, the minimum brightness value, and $\bar{h}$ comprises:
computing the first difference d1 = maximum brightness value − minimum brightness value;
computing the second difference d2 and the third difference d3 (their formulas are published only as images in the source and are not recoverable here);
computing the brightness ratio d4 = |d1 − d2| / |d1 − d3|;
computing the brightness standard deviation of the face region of the frame, $\sigma_h = \sqrt{\frac{1}{n_4}\sum_{k=1}^{n_4}\bigl(h_k-\bar{h}\bigr)^2}$;
the face brightness of the frame is then given in terms of these quantities by a formula published only as an image in the source and not recoverable here.
6. The method according to claim 1, characterized in that, before computing the difference value of each non-first frame, the method further comprises:
pre-screening the non-first frames according to the face-region area difference, pixel-count difference, background-color difference, and face-brightness difference of each non-first frame;
and computing the difference value of each non-first frame comprises:
computing the difference value of each frame that passes the pre-screening.

7. The method according to claim 6, characterized in that pre-screening the non-first frames according to the face-region area difference, pixel-count difference, background-color difference, and face-brightness difference of each non-first frame comprises:
for any non-first frame,
if the frame's face-region area difference is not greater than a first value, its pixel-count difference is not greater than a second value, its background-color difference is not greater than a third value, and its face-brightness difference is not greater than a fourth value, the frame passes the pre-screening; or,
if the frame's face-region area difference is not greater than the first value but its pixel-count difference, background-color difference, and face-brightness difference are all 0, the frame passes the pre-screening; or,
if the frame's face-brightness difference is not greater than the fourth value but its face-region area difference, pixel-count difference, and background-color difference are all 0, the frame passes the pre-screening;
where the first value = (sum of the face-region area differences of all non-first frames + face-region area of the first frame − avg1) / total number of frames of the face video, the second value = (sum of the pixel-count differences of all non-first frames + pixel count of the first frame − avg2) / total number of frames, the third value = (sum of the background-color differences of all non-first frames + background color of the first frame − avg3) / total number of frames, the fourth value = (sum of the face-brightness differences of all non-first frames + face brightness of the first frame − avg4) / total number of frames, avg1 = sum of the face-region areas of all frames / total number of frames, avg2 = sum of the pixel counts of all frames / total number of frames, avg3 = sum of the background colors of all frames / total number of frames, and avg4 = sum of the face brightness values of all frames / total number of frames.

8. The method according to any one of claims 1 to 7, characterized in that, before reducing the dimensionality of the expression features with the pre-trained dimensionality-reduction model, the method further comprises:
acquiring a sample set X, where the total number of samples in X is m, each sample comprises multiple expression features, and each sample belongs to one class;
grouping all samples by class;
computing the mean vector of each class, $\mu_i = \frac{1}{b_i}\sum_{j=1}^{b_i} x_{ij}$, where i is the class index, $\mu_i$ is the mean vector of the i-th class, $b_i$ is the number of samples of the i-th class, j is the sample index, and $x_{ij}$ is the vector of expression features of the j-th sample of the i-th class;
determining the total mean vector from the class mean vectors, $\mu_0 = \frac{1}{E}\sum_{i=1}^{E}\mu_i$, where $\mu_0$ is the total mean vector and E is the total number of distinct classes to which the samples in X belong;
computing the between-class variance vector and the within-class variance vector according to the total mean vector;
determining the reduced expression features according to the between-class variance vector and the within-class variance vector, forming the dimensionality-reduction model.
9. The method according to claim 8, characterized in that computing the between-class variance vector and the within-class variance vector according to the total mean vector comprises (the published formula images are not recoverable; the formulas below are the element-wise construction implied by the surrounding definitions):

$$S_b = \sum_{i=1}^{E} b_i\,(\mu_i-\mu_0)\odot(\mu_i-\mu_0), \qquad S_w = \sum_{i=1}^{E}\sum_{x\in X_i}(x-\mu_i)\odot(x-\mu_i)$$

where $S_b$ is the between-class variance vector, $S_w$ is the within-class variance vector, $X_i$ is the set of samples of the i-th class, and $\odot$ denotes the element-wise product.
10. The method according to claim 9, characterized in that determining the reduced expression features according to the between-class variance vector and the within-class variance vector comprises:
computing the weight vector $W = \mathrm{diag}(S_b\,./\,S_w)$ composed of the weights of the expression features, where diag() is a function that takes the elements on the diagonal of a matrix and ./ is an operator that divides the corresponding elements of $S_b$ and $S_w$;
sorting the expression features in descending order of their weights;
determining a preset number of the top-ranked expression features as the reduced expression features.
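To make claims 8 to 10 concrete, a sketch of the feature-weighting reduction: per-feature between-class and within-class variance vectors, the element-wise ratio as the weight vector, and selection of the top-weighted features. The two variance formulas follow the usual Fisher-style construction and are our assumption where the published formula images are not recoverable:

```python
# Sketch of the dimensionality reduction described in claims 8-10.
import numpy as np

def fit_reduction(X, y, keep):
    """X: (m, d) sample-by-feature matrix; y: (m,) class labels; keep: number of features to retain."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])  # class mean vectors (claim 8)
    mu0 = mu.mean(axis=0)                                     # total mean vector (claim 8)

    # Between-class variance vector: weighted spread of class means around mu0 (assumed form).
    s_b = sum((y == c).sum() * (m - mu0) ** 2 for c, m in zip(classes, mu))
    # Within-class variance vector: spread of samples around their class mean (assumed form).
    s_w = sum(((X[y == c] - m) ** 2).sum(axis=0) for c, m in zip(classes, mu))

    weights = s_b / s_w                # W = diag(S_b ./ S_w), element-wise ratio (claim 10)
    order = np.argsort(weights)[::-1]  # sort features by weight, descending
    return order[:keep]                # indices of the retained expression features

def reduce(X, selected):
    return X[:, selected]              # apply the trained reduction to new samples
```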
CN201811499959.9A 2018-12-10 2018-12-10 Micro-expression recognition based on the difference between before and after frames and feature dimensionality reduction Expired - Fee Related CN109614927B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811499959.9A CN109614927B (en) 2018-12-10 2018-12-10 Micro-expression recognition based on the difference between before and after frames and feature dimensionality reduction


Publications (2)

Publication Number Publication Date
CN109614927A CN109614927A (en) 2019-04-12
CN109614927B true CN109614927B (en) 2022-11-08

Family

ID=66007965






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20221108