
CN104077579B - Facial expression recognition method based on expert system - Google Patents


Info

Publication number
CN104077579B
Authority
CN
China
Prior art keywords
image
expression
facial expression
expert system
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410333366.0A
Other languages
Chinese (zh)
Other versions
CN104077579A (en)
Inventor
程武山
梁冬梅
陈敏
邵新发
孙启峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Engineering Science filed Critical Shanghai University of Engineering Science
Priority to CN201410333366.0A
Publication of CN104077579A
Application granted
Publication of CN104077579B


Landscapes

  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a facial expression recognition method based on an expert system. The method builds an expert system on top of expression-image processing methods and the functions of traditional computer programs, and uses it to recognize facial expressions from preprocessed images by inference. The method comprises the following steps: 1) capture an image from a video, obtain the user information in the video, then perform image processing and image feature extraction for identity verification, obtain the user's expression-image feature parameters, determine the user's expression library, and establish an expert system for recognizing facial expressions; 2) perform image processing and image feature extraction on an image captured from the video to obtain the feature parameters at the moment the user's expression is most pronounced, compare these parameters with the expression training-sample parameters in the user expression library determined in step 1), and output the statistical result of expression recognition through the reasoning of the expert system's inference engine. Compared with the prior art, the present invention has the advantage of fast recognition speed.

Description

Facial Expression Image Recognition Method Based on an Expert System

Technical Field

The present invention relates to expert-system application technology, and in particular to an expert-system-based facial expression image recognition method.

Background Art

An expert system is a computer program system endowed with specialized knowledge and experience. By modeling the problem-solving ability of human experts, it applies the knowledge-representation and knowledge-reasoning techniques of artificial intelligence to complex problems that would normally require an expert, reaching a problem-solving level comparable to that of an expert. This knowledge-based design method is organized around a knowledge base and an inference engine, and it separates the knowledge from the other parts of the system. Expert systems emphasize knowledge rather than methods. Many problems have no algorithmic solution, or their algorithmic solutions are too complex; an expert system can instead draw on the rich knowledge of human experts, which is why expert systems are also called knowledge-based systems.

At present, with the development of science and technology, intelligent rehabilitation nursing beds are becoming increasingly common. However, a large proportion of patients cannot accurately express their intentions to others through movement or speech. For each person, different expressions represent different states. When speech and movement fail, a patient's wishes can be recognized from facial expressions and the corresponding operation carried out. Nursing beds on the market are operated by nursing staff, leaving the patient's own wishes out of the loop. Following a people-oriented design philosophy, an expert system built on expression recognition technology allows patients with limited mobility to operate the nursing bed themselves.

Summary of the Invention

The object of the present invention is to overcome the above-mentioned defects of the prior art by providing an expert-system-based facial expression image recognition method that has a learning function and can recognize facial expressions quickly and accurately.

The object of the present invention can be achieved through the following technical solution:

A facial expression image recognition method based on an expert system, in which an expert system for facial expression recognition, built on expression-image processing methods and the functions of traditional computer programs, recognizes facial expressions from preprocessed images by inference. The method comprises the following steps:

1) Capture an image from a video and obtain the user information in the video; then, through image processing and image feature extraction, perform identity verification, obtain the user's expression-image feature parameters, determine the user's expression library, and establish an expert system for recognizing facial expressions;

2) Perform image processing and image feature extraction on an image captured from the video to obtain the feature parameters at the moment the user's expression is most pronounced; compare these parameters with the expression training-sample parameters in the user expression library determined in step 1), and output the statistical result of expression recognition through the reasoning of the expert system's inference engine.

Establishing the expert system for recognizing facial expression images in step 1) specifically comprises the steps of:

11) acquiring facial expression images;

12) preprocessing the facial expression images;

13) extracting facial expression image features;

14) establishing the rule base of the facial expression expert system and storing the feature parameters extracted in step 13) in the rule base.

Obtaining the facial expression image recognition result in step 2) specifically comprises the steps of:

21) acquiring the user's expression image: after the video information is received, capturing an image from it to obtain the user's expression image;

22) performing image preprocessing on the facial expression image acquired in step 21);

23) performing feature extraction on the eye and mouth images obtained in step 22);

24) expression recognition: inputting the feature parameters obtained in step 23) into the expert system for recognizing facial expression images, comparing them with the facial expression feature parameters stored in the expert system's rule base, and outputting the result produced by the inference engine of the expert system.

The preprocessing of the facial expression images in steps 12) and 22) specifically comprises image denoising, scale normalization, grayscale normalization, image segmentation, and image binarization;

The denoised image g(i,j) obtained after image denoising is:

g(i,j) = ∑f(i,j)/N, (i,j) ∈ M

where f(i,j) is the given noisy image, M is the set of coordinates of the neighboring pixels in the chosen neighborhood, and N is the number of neighboring pixels contained in the neighborhood;

The target image g(x,y) obtained after scale normalization is:

g(x, y) = f(x/a + x0, y/a + y0)

where f(x,y) is the image before normalization, (x0, y0) is the centroid of the image's target region, and a is the scale factor:

a = sqrt(T/m)

where m is the area of the target region and T is the fixed target area after normalization;

The grayscale normalization uses a piecewise linear grayscale transformation;

The image segmentation uses a serial region segmentation technique, which segments the image by directly detecting the mouth and eye target regions.

The image binarization uses the Otsu algorithm, which applies a gray-level threshold to divide the grayscale image into a target part and a background part.

The facial expression image features in steps 13) and 23) comprise left-eye, right-eye, and mouth features, and the feature extraction algorithm specifically comprises the following steps:

201) For each pixel, compute the correlation matrix M:

M = [A, C; D, B], where A = w(x,y)⊗Ix², B = w(x,y)⊗Iy², C = D = w(x,y)⊗(Ix·Iy)

where Ix is the difference in the x direction, Iy is the difference in the y direction, and w(x,y) is a Gaussian window function, specifically w(x,y) = exp(−(x² + y²)/(2σ²));

202) Compute the Harris corner response for each pixel: R = (AB − CD) − k(A+B)², where k is an empirical constant;

203) Search for local maxima within a w*w window; if the Harris corner response exceeds the threshold, namely the local maximum within the w*w window, the pixel is taken as a corner, and the eye and mouth features are thereby extracted.

After step 24) is completed, the system adds the data of the current task to the rule base.

The image binarization process specifically comprises the steps of:

101) Compute the average gray level u of the image:

u = ∑ i*n(i)/(M*N)

where M*N is the number of pixels in the image and n(i) is the number of pixels with gray level i;

102) Determine the gray-level threshold t, the segmentation threshold between foreground and background (t is the value that maximizes the between-class variance G), where the between-class variance G is:

G = w1*(u1 − u)^2 + w2*(u2 − u)^2

When the variance G is largest, the difference between foreground and background is largest, and the corresponding gray level is the optimal threshold,

where w1 is the proportion of target pixels in the image, w2 is the proportion of background pixels, u1 is the average gray level of the target pixels, and u2 is the average gray level of the background pixels; w1 and u1 are respectively:

w1 = W1/(M*N),  u1 = ∑ i*n(i)/W1, i > t

where W1 is the number of pixels with gray value greater than t;

The proportion w2 of background pixels and their average gray level u2 are respectively:

w2 = W2/(M*N),  u2 = ∑ i*n(i)/W2, i < t

where W2 is the number of pixels with gray value less than t;

103) Binarize the image by taking pixels with gray value greater than t as target pixels and pixels with gray value less than t as background pixels.

Compared with the prior art, the present invention has the following advantages:

1) The present invention establishes an expert system for recognizing facial expressions. An expert system is a program system containing a large amount of specialized knowledge and experience; by fully taking into account the number of training samples and their class information, it achieves good recognition results and provides an effective approach to facial expression recognition.

2) The present invention proposes a fast facial expression recognition method: a new method for recognizing facial expressions in a video environment that offers both high speed and a high recognition rate.

3) The expert system is built on top of expression-image processing methods and the functions of traditional computer programs. Rather than replacing classical methods and traditional programs that are already capable and mature, the expression-image processing expert system makes full use of their existing results and concentrates on the problems that are still difficult to solve.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the architecture of the expert system for recognizing facial expression images provided by the present invention;

Fig. 2 is a graph of the three-segment piecewise linear transformation function used in grayscale normalization;

Fig. 3 is a schematic diagram of the sensitivity of the Harris operator to scale.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments. The embodiments are implemented on the premise of the technical solution of the present invention, and a detailed implementation and specific operating procedure are given, but the protection scope of the present invention is not limited to the following embodiments.

A facial expression image recognition method based on an expert system, comprising the following steps:

1) Capture an image from a video and obtain the user information in the video; then, through image processing and image feature extraction, perform identity verification, obtain the user's expression-image feature parameters, determine the user's expression library, and establish an expert system for recognizing facial expressions;

2) Perform image processing and image feature extraction on an image captured from the video to obtain the feature parameters at the moment the user's expression is most pronounced; compare these parameters with the expression training-sample parameters in the user expression library determined in step 1), and output the statistical result of expression recognition through the reasoning of the expert system's inference engine.

Establishing the expert system for recognizing facial expression images in step 1) specifically comprises the steps of:

11) acquiring facial expression images;

12) preprocessing the facial expression images;

13) extracting facial expression image features;

14) establishing the rule base of the facial expression expert system and storing the feature parameters extracted in step 13) in the rule base.

Obtaining the facial expression image recognition result in step 2) specifically comprises the steps of:

21) acquiring the user's expression image: after the video information is received, capturing an image from it to obtain the user's expression image;

22) performing image preprocessing on the facial expression image acquired in step 21);

23) performing feature extraction on the eye and mouth images obtained in step 22);

24) expression recognition: inputting the feature parameters obtained in step 23) into the expert system for recognizing facial expression images, comparing them with the facial expression feature parameters stored in the expert system's rule base, and outputting the result produced by the inference engine of the expert system.

Fig. 1 shows the working architecture of the present invention. The expert system of the present invention has a learning function: after step 24) is completed, the current data are added to the rule base, further improving the efficiency and recognition accuracy of the invention.

The preprocessing of the facial expression images in steps 12) and 22) specifically comprises image denoising, scale normalization, grayscale normalization, image segmentation, and image binarization.

The facial expression image features in steps 13) and 23) comprise left-eye, right-eye, and mouth features.

The present invention is further described below with an example.

1) Establishing the facial expression image expert system

After the video information is received, an image is captured from it and the user information associated with the video can be obtained. Identity verification is performed through image preprocessing and image feature extraction, the user's expression library is determined, and the expert system for recognizing facial expressions is established, to be drawn on at recognition time;

11) Facial expression image acquisition

A static image of a facial expression is acquired with a camera image-capture tool;

12) Facial expression image preprocessing

A. Image denoising

The ultimate goal of image denoising is to improve a given image, countering the degradation of image quality caused by noise. Denoising effectively improves image quality, increases the signal-to-noise ratio, and better preserves the information carried by the original image. Implemented in the C# programming language, the image denoising algorithm of the present invention uses traditional spatial-domain filtering, which operates directly on the data of the original image, processing the gray values of the pixels. The spatial-domain denoising algorithm adopted here is the neighborhood averaging method, expressed mathematically as follows: let f(i,j) be the given noisy image and g(i,j) the image after neighborhood averaging; then g(i,j) = ∑f(i,j)/N, (i,j) ∈ M, where M is the set of coordinates of the neighboring pixels in the chosen neighborhood and N is the number of neighboring pixels it contains. Neighborhood averaging reduces noise at the cost of image blur, and the larger the template, the stronger the noise reduction. If f(i,j) is a noise point, its gray value differs greatly from that of its neighbors; replacing it with the average of the neighboring pixels markedly weakens the noise point and makes the gray levels of the neighborhood nearly uniform, smoothing the image.
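The neighborhood averaging just described can be sketched as follows (a minimal Python/NumPy illustration rather than the C# implementation of the embodiment; the 3*3 template and edge padding are illustrative choices):

```python
import numpy as np

def neighborhood_average(f, k=3):
    """Neighborhood averaging: g(i,j) is the sum of f over the k*k
    neighborhood M divided by the N = k*k pixels it contains
    (image borders handled by edge padding)."""
    pad = k // 2
    padded = np.pad(f.astype(float), pad, mode="edge")
    g = np.zeros(f.shape, dtype=float)
    for di in range(k):
        for dj in range(k):
            g += padded[di:di + f.shape[0], dj:dj + f.shape[1]]
    return g / (k * k)

# An isolated noise spike is pulled toward its neighbors' gray level.
img = np.full((5, 5), 10.0)
img[2, 2] = 100.0
out = neighborhood_average(img, k=3)
```

A larger k smooths more strongly, at the cost of blur, exactly as the text notes.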

B. Scale normalization

Translation and scale normalization eliminates the effects of translation and scaling on the image through a transformation. Implemented in C#, the scale normalization of the present invention uses the method of standard moments: the coordinate origin is first moved to the image centroid, and the centroid (x0, y0) of the target is obtained from the standard moments. Because the centroid of the target is invariant to translation, scale, and rotation, placing the image origin at the target centroid solves the translation problem. A scale factor a then solves the scale problem: a = sqrt(T/m). In practice, for a binary image with target pixel value 1 and background value 0, m is the area of the target, and the transformation fixes the target area at a constant size, so that the scale-normalized target is obtained through the transformation g(x,y) = f(x/a + x0, y/a + y0). In this way the face region is enlarged to a standard size.
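A rough sketch of this centroid-plus-scale normalization (Python/NumPy; the desired target area T and the 64*64 output size are assumed values, and nearest-neighbor resampling stands in for whatever interpolation the C# implementation uses):

```python
import numpy as np

def scale_normalize(f, T=100.0, out_shape=(64, 64)):
    """Apply g(x,y) = f(x/a + x0, y/a + y0) with a = sqrt(T/m):
    f is binary (target = 1), m its target area, (x0, y0) its centroid."""
    ys, xs = np.nonzero(f)
    m = float(len(xs))                 # target area in pixels
    x0, y0 = xs.mean(), ys.mean()      # centroid from first-order moments
    a = np.sqrt(T / m)                 # scale factor
    H, W = out_shape
    g = np.zeros(out_shape, dtype=f.dtype)
    for y in range(H):
        for x in range(W):
            # inverse mapping, output centered on the target centroid
            sx = int(round((x - W / 2) / a + x0))
            sy = int(round((y - H / 2) / a + y0))
            if 0 <= sy < f.shape[0] and 0 <= sx < f.shape[1]:
                g[y, x] = f[sy, sx]
    return g

# A 4*4 target (area m = 16) is rescaled so its area becomes roughly T = 100.
f = np.zeros((40, 40))
f[18:22, 18:22] = 1.0
g = scale_normalize(f)
```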

C. Grayscale normalization

Grayscale normalization improves the quality of the image. Implemented in C#, the present invention uses the most basic piecewise linear grayscale transformation, also called linear gray-level stretching, here with three segments. As shown in Fig. 2, the gray-level interval [a,b] is expanded while the intervals [0,a] and [b,c] are compressed. By carefully adjusting the positions of the breakpoints and controlling the slopes of the segments, any gray-level interval can be expanded or compressed, achieving grayscale normalization. The color facial expression image is converted to a grayscale image.
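The three-segment transform of Fig. 2 can be written down directly (the breakpoints a, b, c and output levels below are illustrative, since the patent leaves them to be tuned):

```python
def piecewise_linear(v, a=50, b=150, c=255, ya=20, yb=220):
    """Three-segment linear gray-level transform: stretch [a,b] onto [ya,yb]
    (slope > 1) while compressing [0,a] and [b,c] (slopes < 1)."""
    if v < a:
        return ya * v / a                          # compress [0,a]
    if v <= b:
        return ya + (yb - ya) * (v - a) / (b - a)  # stretch [a,b]
    return yb + (255 - yb) * (v - b) / (c - b)     # compress [b,c]
```

With these defaults the middle band [50,150] is mapped onto [20,220] with slope 2, while the outer bands are flattened.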

D. Image segmentation

The parts of the face that best reflect changes in expression are the mouth and the eyes, so these are the regions extracted by the image segmentation of the present invention. Implemented in C#, the invention uses serial region segmentation, a technique that segments the image by direct detection of the target regions in a serial fashion: the whole process is decomposed into an ordered sequence of steps, each of which is conditioned on the results of the previous ones. We start from the full image and progressively split it into the required segmentation regions.

E. Image binarization

Implemented in C#, the binarization method of the present invention is the classic Otsu algorithm. Its basic idea is to choose a threshold that divides the grayscale image, according to gray level, into a target part and a background part such that the within-class variance is minimized and the between-class variance is maximized; the threshold so obtained is the optimal binarization threshold. For an image of M*N pixels, first compute the average gray level u: counting n(i), the number of pixels with gray level i, gives u = ∑ i*n(i)/(M*N). Then set up the variables for finding the optimal threshold t, the segmentation threshold between target and background. The proportion of target pixels (gray level greater than t) is w1 = W1/(M*N), where W1 is the number of pixels with gray value greater than t, and the average gray level of the target pixels is u1 = ∑ i*n(i)/W1, i > t. Similarly, one obtains the proportion w2 of background pixels and their average gray level u2. Finally, the optimal threshold t is the one that maximizes the class separation, i.e. that maximizes G = w1*(u1 − u)^2 + w2*(u2 − u)^2. When G is largest, the optimal threshold has been found.
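An exhaustive-search sketch of the Otsu step above (Python/NumPy; a production implementation would use cumulative sums, but the direct loop mirrors the formulas in the text):

```python
import numpy as np

def otsu_threshold(img):
    """Return the t maximizing the between-class variance
    G = w1*(u1-u)^2 + w2*(u2-u)^2, with target = gray > t."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    levels = np.arange(256)
    u = (levels * hist).sum() / total          # overall mean gray level
    best_t, best_G = 0, -1.0
    for t in range(256):
        W1 = hist[t + 1:].sum()                # pixels with gray > t
        W2 = total - W1
        if W1 == 0 or W2 == 0:
            continue
        u1 = (levels[t + 1:] * hist[t + 1:]).sum() / W1
        u2 = (levels[:t + 1] * hist[:t + 1]).sum() / W2
        G = (W1 / total) * (u1 - u) ** 2 + (W2 / total) * (u2 - u) ** 2
        if G > best_G:
            best_G, best_t = G, t
    return best_t

# Bimodal test image: half the pixels at gray 50, half at gray 200.
img = np.zeros((10, 10), dtype=np.uint8)
img[:, :5] = 50
img[:, 5:] = 200
t = otsu_threshold(img)
```

On this bimodal image any threshold between the two modes separates them perfectly, and the search returns the first such value.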

13) Facial expression image feature extraction

The eye and mouth images obtained by the image segmentation of step E are subjected to feature extraction based on geometric features: the positions of the eyes and mouth are located and measured, and their size, distances, shape, and mutual proportions are determined. The facial expression recognition algorithm of the present invention requires geometric features, which are extracted with the Harris corner extraction algorithm, implemented in C#. These detection algorithms require some threshold settings, and the number of detected points differs from image to image, so the same N points are selected manually and kept fixed (for the eyes: the left and right eye corners and the upper and lower eyelid points, four points in total):

In the first step, the correlation matrix M is computed for each pixel. The Harris operator replaces the binary window function with a Gaussian function, giving pixels closer to the center a larger weight so as to reduce the influence of noise:

M = [A, C; D, B], where A = w(x,y)⊗Ix², B = w(x,y)⊗Iy², C = D = w(x,y)⊗(Ix·Iy)

where Ix is the difference in the x direction, Iy is the difference in the y direction, and w(x,y) is the Gaussian function, specifically w(x,y) = exp(−(x² + y²)/(2σ²)).

In the second step, the Harris corner response is computed for each pixel:

R = (AB − CD) − k(A+B)²

In the third step, local maxima are sought within a w*w window, as shown in Fig. 3; if the Harris corner response exceeds the threshold, generally the local maximum within the w*w window, the pixel is regarded as a corner. In this way, the eye and mouth features are extracted.
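A compact sketch of these three Harris steps (Python/NumPy; k = 0.04 and the Gaussian σ are conventional choices not fixed by the patent):

```python
import numpy as np

def harris_response(img, k=0.04, sigma=1.0):
    """Per-pixel Harris response R = (A*B - C*C) - k*(A + B)^2, where
    A, B, C are Gaussian-weighted sums of Ix^2, Iy^2 and Ix*Iy."""
    f = img.astype(float)
    Ix = np.zeros_like(f)
    Iy = np.zeros_like(f)
    Ix[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / 2.0   # x-direction difference
    Iy[1:-1, :] = (f[2:, :] - f[:-2, :]) / 2.0   # y-direction difference
    r = int(3 * sigma)                            # Gaussian window radius
    ax = np.arange(-r, r + 1)
    g1 = np.exp(-ax ** 2 / (2 * sigma ** 2))
    w = np.outer(g1, g1)
    w /= w.sum()                                  # Gaussian window w(x, y)

    def smooth(a):
        # weighted sum of a over the Gaussian window (edge padding)
        p = np.pad(a, r, mode="edge")
        out = np.zeros_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = (w * p[i:i + 2 * r + 1, j:j + 2 * r + 1]).sum()
        return out

    A = smooth(Ix * Ix)
    B = smooth(Iy * Iy)
    C = smooth(Ix * Iy)                           # C = D in the text's notation
    return (A * B - C * C) - k * (A + B) ** 2

# A bright square whose top-left corner sits at (10, 10).
img = np.zeros((20, 20))
img[10:, 10:] = 255.0
R = harris_response(img)
```

Corner detection then keeps the local maxima of R above a threshold within a w*w window, as in the third step.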

Humans have six principal emotions, each reflected by a unique expression corresponding to a distinct psychological state. These six basic emotions are anger, happiness (smile), sadness, surprise, disgust, and fear. Facial expression images representing these six expressions are collected and, after the processing of steps 12) and 13), the feature parameters representing the different expressions are stored in the rule base. The feature parameters of each expression are labeled with the corresponding word; for example, the parameters representing happiness are labeled happiness.
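As an illustration of the rule base and the comparison the inference engine performs, here is a hypothetical sketch: the labels follow the patent's six basic emotions, but the three-element feature vectors are invented placeholder values, and nearest-neighbor matching stands in for whatever rule-based reasoning the expert system actually uses.

```python
import math

# Hypothetical rule base: each basic emotion maps to a stored training-sample
# feature vector (e.g. normalized eye/mouth measurements; values invented).
rule_base = {
    "anger":     [0.42, 0.31, 0.18],
    "happiness": [0.55, 0.29, 0.36],
    "sadness":   [0.40, 0.33, 0.12],
    "surprise":  [0.60, 0.45, 0.30],
    "disgust":   [0.44, 0.27, 0.20],
    "fear":      [0.52, 0.40, 0.22],
}

def infer_expression(features, rules=rule_base):
    """One way the comparison step could work: output the label whose stored
    parameters are closest (Euclidean distance) to the input features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(rules, key=lambda label: dist(features, rules[label]))
```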

In this way, through the above steps, the expert system for recognizing facial expressions is established.

(二)表情识别获取人脸表情图像识别结果(2) Facial expression recognition to obtain facial expression image recognition results

21)获取用户的表情图像:21) Obtain the user's expression image:

视频信息接收后,从视频信息中捕捉图像,获取用户的表情图像。After the video information is received, an image is captured from the video information to obtain the user's facial expression image.

22)图像预处理:22) Image preprocessing:

对步骤21)中获取的人脸表情图像,首先经过步骤12),对图像进行预处理,对人脸进行检测,对人脸进行定位,然后对图像进行切割,最终获取图像的眼部和嘴部的图像特征。For the facial expression image obtained in step 21), first through step 12), the image is preprocessed, the face is detected, the face is positioned, then the image is cut, and finally the eyes and mouth of the image are obtained part of the image features.

23)图像特征提取:23) Image feature extraction:

对上述步骤(2)中获得的眼部和嘴部的图像进行特征提取,采用上述步骤(3)中的几何方法提取眼部和嘴部的特征参数。Perform feature extraction on the images of the eyes and mouth obtained in the above step (2), and use the geometric method in the above step (3) to extract the feature parameters of the eyes and mouth.

24)表情识别:24) Expression recognition:

The feature parameters obtained in step 23) are input into the expert system for recognizing facial expression images and compared with the facial expression feature parameters stored in the expert system's rule base; the inference engine then carries out its reasoning, and the expert system outputs the inference result. For example, if the input is a happy facial expression image, the output of the expert system is smile.
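The comparison-and-inference step above can be approximated by a nearest-match search over the rule base. This is a hedged sketch only: the patent does not disclose the actual inference rules, so Euclidean distance is used here as a stand-in matching criterion.

```python
import math

def recognize(features, rule_base):
    """Return the expression label whose stored feature parameters are
    closest to the input feature vector -- a stand-in for the reasoning
    performed by the expert system's inference engine."""
    best_label, best_dist = None, math.inf
    for label, stored in rule_base.items():
        dist = math.dist(features, stored)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

Under this stand-in criterion, a feature vector extracted from a happy face would fall nearest the stored smile entry, so the system would output smile, mirroring the example in the text.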

As shown in Fig. 1, the expert system of the present invention has a learning function: after recognition is completed, the data of the current recognition is also added to the rule base. In addition, an expert can manually update the system's rule base and programs.

Claims (4)

1. A facial expression image recognition method based on an expert system, characterized in that the method performs inference-based recognition of facial expressions on preprocessed images through an expert system for facial expression recognition built on an expression image processing method and the functions of a traditional computer program, the method comprising the following steps:
1) capturing an image from the video, obtaining the user information in the video, then performing image processing, image feature extraction and identity verification to obtain the user's expression image feature parameters, determining a user expression library, and establishing a facial expression recognition expert system;
2) performing image processing and image feature extraction on an image captured from the video to obtain the feature parameters when the user's degree of expression is maximized, comparing these feature parameters with the expression training sample parameters in the user expression library determined in step 1), and finally outputting the statistical result of expression recognition through the reasoning of the expert system's inference engine;
the step 1) of establishing the expert system for recognizing facial expression images specifically comprises the following steps:
11) acquiring a facial expression image,
12) preprocessing the facial expression image,
13) extracting the features of the facial expression image,
14) establishing a rule base of the facial expression expert system and storing the feature parameters extracted in step 13) in the rule base;
the step 2) of obtaining the facial expression image recognition result through expression recognition specifically comprises the following steps:
21) acquiring an expression image of the user: after the video information is received, capturing images from the video information to acquire the user's expression images,
22) performing image preprocessing on the facial expression image acquired in step 21),
23) extracting the features of the eye and mouth images obtained in step 22),
24) expression recognition: inputting the feature parameters obtained in step 23) into the expert system for recognizing facial expression images, comparing them with the facial expression feature parameters stored in the expert system rule base, and outputting the inference result produced by the inference engine of the expert system;
the preprocessing of the facial expression image in steps 12) and 22) specifically comprises image denoising, scale normalization, gray-level normalization, image segmentation and image binarization,
the denoised image g(i, j) is obtained as:

g(i, j) = Σ f(i, j) / N, (i, j) ∈ M

wherein: f(i, j) is the given noise-containing image, M is the set of coordinates of the neighboring pixels in the chosen neighborhood, and N is the number of neighboring pixels contained in the neighborhood;

the scale-normalized target image g(x, y) is obtained as:

g(x, y) = f(x/a + x0, y/a + y0)

wherein: f(x, y) is the image before normalization, (x0, y0) is the center of gravity of the image target region, and a is the scale factor:

a = sqrt(T/m)

wherein: m is the area of the target image and T is the area of the image before normalization;
the gray-level normalization uses a piecewise linear gray-scale transform method,
the image segmentation technique is a serial region segmentation technique, which segments the image by directly detecting the target regions of the mouth and eyes,
the image binarization algorithm is the Otsu binarization method, which divides a gray-level image into a target part and a background part using a gray-level threshold.
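The neighborhood-averaging denoising formula of claim 1, g(i, j) = Σ f(i, j)/N over the neighborhood M, can be sketched directly. This toy version operates on a 2-D list and clips the neighborhood at the image border, which is an assumption: the claim does not specify border handling.

```python
def mean_filter(f, i, j, radius=1):
    """Average the pixels of f over the neighborhood M around (i, j):
    g(i, j) = sum(f over M) / N, with N the number of neighbors used."""
    rows, cols = len(f), len(f[0])
    total, n = 0, 0
    # Iterate over the (2*radius+1)^2 neighborhood, clipped to the image.
    for m in range(max(0, i - radius), min(rows, i + radius + 1)):
        for k in range(max(0, j - radius), min(cols, j + radius + 1)):
            total += f[m][k]
            n += 1
    return total / n
```
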
2. The expert-system-based facial expression image recognition method according to claim 1, wherein the facial expression image features in steps 13) and 23) include left-eye features, right-eye features and mouth features, and the facial expression image feature extraction algorithm specifically comprises the following steps:
201) calculating a correlation matrix M for each pixel:

M = [ A  C ]
    [ D  B ]

wherein: Ix is the difference in the x direction, Iy is the difference in the y direction, and w(x, y) is a Gaussian function:

w(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))

202) calculating the Harris corner response of each pixel: R = (AB − CD)² − k(A + B)²
203) searching for maximum points within a w×w window: if the Harris corner response at a point exceeds the threshold and is the maximum within the w×w window, the point is regarded as a corner, and the eye and mouth features are extracted.
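Steps 201)–202) reduce, per pixel, to evaluating the corner response from the matrix entries A, B, C, D. The sketch below implements the response exactly as written in step 202); note that the conventional Harris measure is det(M) − k·trace(M)², which differs from the squared determinant stated in the claim.

```python
def harris_response(A, B, C, D, k=0.04):
    """Corner response as stated in step 202):
    R = (AB - CD)^2 - k(A + B)^2, where AB - CD is det(M) and
    A + B is trace(M); k is the usual empirical constant."""
    return (A * B - C * D) ** 2 - k * (A + B) ** 2
```
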
3. The expert-system-based facial expression image recognition method according to claim 1, wherein after step 24) is finished, the system adds the data of the current recognition task to the rule base.
4. The expert-system-based facial expression image recognition method according to claim 1, wherein the image binarization process specifically comprises the following steps:
101) calculating the average gray level u of the image:

u = Σ i·n(i) / (M×N)

wherein: M×N is the number of pixels in the image, and n(i) is the number of pixels with gray level i;
102) determining the gray threshold t, which is the value that maximizes the between-class variance G:

G = w1·(u1 − u)² + w2·(u2 − u)²

wherein: w1 is the proportion of target pixels in the image, w2 is the proportion of background pixels in the image, u1 is the average gray level of the target pixels, and u2 is the average gray level of the background pixels; the proportion of target pixels w1 and their average gray level u1 are:

w1 = W1/(M×N),  u1 = Σ i·n(i)/W1 for i > t

wherein: W1 is the number of pixels with gray value greater than t;
the proportion of background pixels w2 and their average gray level u2 are:

w2 = W2/(M×N),  u2 = Σ i·n(i)/W2 for i < t

wherein: W2 is the number of pixels with gray value less than t;
103) taking the pixels with gray value greater than t as target pixels and the pixels with gray value less than t as background pixels, thereby binarizing the image.
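Steps 101)–103) amount to an exhaustive search for the threshold t that maximizes the between-class variance G. A compact sketch over a gray-level histogram, assuming (as in the claim) that pixels above t are the target and pixels at or below t are the background:

```python
def otsu_threshold(hist):
    """Return the gray threshold t maximizing the between-class variance
    G = w1*(u1 - u)**2 + w2*(u2 - u)**2, as in steps 101)-103).
    hist[i] is the number of pixels with gray level i."""
    total = sum(hist)
    u = sum(i * n for i, n in enumerate(hist)) / total  # overall mean gray level
    best_t, best_g = 0, -1.0
    for t in range(len(hist) - 1):
        n1 = sum(hist[t + 1:])   # target pixels: gray value > t
        n2 = total - n1          # background pixels: gray value <= t
        if n1 == 0 or n2 == 0:
            continue             # skip degenerate splits
        w1, w2 = n1 / total, n2 / total
        u1 = sum(i * n for i, n in enumerate(hist) if i > t) / n1
        u2 = sum(i * n for i, n in enumerate(hist) if i <= t) / n2
        g = w1 * (u1 - u) ** 2 + w2 * (u2 - u) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t
```

On a bimodal histogram the search settles on a threshold separating the two modes, after which pixels are labelled target or background as in step 103).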
CN201410333366.0A 2014-07-14 2014-07-14 Facial expression recognition method based on expert system Expired - Fee Related CN104077579B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410333366.0A CN104077579B (en) 2014-07-14 2014-07-14 Facial expression recognition method based on expert system

Publications (2)

Publication Number Publication Date
CN104077579A 2014-10-01
CN104077579B 2017-07-04





Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170704