ABSTRACT:
Automatic facial expression recognition has been an active research topic since the 1990s. There have been recent advances in face detection, facial expression recognition, and classification. Multiple methods have been devised for facial feature extraction, which helps in identifying faces and facial expressions. This paper surveys published work from 2003 to date. Various methods of identifying facial expressions are analysed. The paper also discusses facial parameterization using Facial Action Coding System (FACS) action units, and the methods that recognize action-unit parameters from the extracted facial expression data. The various facial expressions present in the human face can be identified based on their geometric features, appearance features, and hybrid features. The two basic approaches to feature extraction are based on facial deformation and facial motion. This article also identifies the techniques based on the characteristics of expressions and classifies the suitable methods that can be implemented.
KEYWORDS:
Facial Expression, FACS, Geometric Features, Appearance Features, Deformation,
Facial Motion.
1. INTRODUCTION:
DOI : 10.5121/ijcses.2012.3604
International Journal of Computer Science & Engineering Survey (IJCSES) Vol.3, No.6, December 2012
This paper surveys methods published after 2003 for extracting facial expressions, which help in recognizing non-verbal communication in humans.
Facial features are extracted to identify the facial expression. These features can be classified into two types, namely geometric (intransient) features and appearance (transient) features.
Geometric or intransient features: features that are always present in the face but may be deformed by facial expressions, e.g. the eyes, eyebrows, mouth, tissue textures, and nose. The facial components or facial feature points are extracted to form a feature vector that represents the face geometry.
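As an illustration of forming such a geometry feature vector, the following is a minimal numpy sketch; the landmark coordinates and the inter-ocular normalisation are illustrative assumptions, not a method from any surveyed paper:

```python
import numpy as np

# Hypothetical landmark layout: (x, y) positions of a few facial feature
# points; indices 0 and 1 are assumed to be the eye centres.
landmarks = np.array([
    [30.0, 40.0],   # left eye centre
    [70.0, 40.0],   # right eye centre
    [50.0, 60.0],   # nose tip
    [35.0, 80.0],   # left mouth corner
    [65.0, 80.0],   # right mouth corner
])

def geometric_feature_vector(pts):
    """Build a scale-invariant geometry vector from all pairwise
    point distances, normalised by the inter-ocular distance."""
    n = len(pts)
    iod = np.linalg.norm(pts[0] - pts[1])        # inter-ocular distance
    dists = [np.linalg.norm(pts[i] - pts[j]) / iod
             for i in range(n) for j in range(i + 1, n)]
    return np.array(dists)

vec = geometric_feature_vector(landmarks)
print(vec.shape)   # n*(n-1)/2 pairwise distances -> (10,)
```

Normalising by the inter-ocular distance makes the vector insensitive to face scale, which matters when comparing feature vectors across subjects and image resolutions.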
Appearance or transient features: features that appear temporarily in the face during a facial expression, e.g. different kinds of wrinkles and bulges, and the regions surrounding the mouth and eyes. With appearance-based methods, image filters, such as Gabor wavelets, are applied either to the whole face or to specific regions of a face image to extract a feature vector.
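The Gabor filtering step can be sketched as follows in plain numpy; the kernel size, number of orientations, and the mean-magnitude pooling are illustrative choices, not parameters from any surveyed system:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam):
    """Real part of a Gabor kernel: a Gaussian envelope modulating
    a cosine wave oriented at angle theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(patch, orientations=4):
    """Filter a face patch with a small bank of orientations and keep
    the mean absolute response of each filter as the feature vector."""
    feats = []
    for k in range(orientations):
        kern = gabor_kernel(9, sigma=2.0, theta=k * np.pi / orientations, lam=4.0)
        h, w = kern.shape
        H, W = patch.shape
        # 'valid' correlation via a sliding-window sum (no SciPy needed)
        resp = np.array([[np.sum(patch[i:i + h, j:j + w] * kern)
                          for j in range(W - w + 1)]
                         for i in range(H - h + 1)])
        feats.append(np.abs(resp).mean())
    return np.array(feats)

patch = np.random.default_rng(0).random((32, 32))
print(gabor_features(patch).shape)   # one value per orientation -> (4,)
```

Real systems typically use several scales as well as orientations and pool over a grid of face regions rather than the whole patch.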
Facial expression recognition is the last step in facial expression analysis, where the extracted features are recognized based on the action units. The recognizer identifies not only the basic emotions such as anger, happiness, surprise, and sadness [13], but also expressions caused by pain [14], temporal dynamics [15], the intensity of expression [16], and spontaneous expression [17].
The two basic face acquisition tasks are detecting faces in frontal-view and near-frontal-view images. To detect the faces, two methods are used, namely face detection and head pose estimation.
To handle out-of-plane head motion, head pose estimation can be employed. Methods for estimating head pose can be classified into 3D model-based methods [26,27] and 2D image-based methods [28]. In the 3D model-based approach, Bartlett used a canonical wire-mesh face model to estimate face geometry and 3D pose from hand-labelled feature points. In the 2D image-based approach, to handle the full range of head motion for expression analysis, Tian et al. [28] detected the head instead of the face. The head is identified by segmenting the smoothed silhouette of the foreground object using background subtraction and computing the negative curvature minima (NCM) points of the silhouette.
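The background-subtraction step can be sketched minimally as below; the threshold and the toy frames are assumptions, and Tian et al.'s silhouette smoothing and NCM computation are not reproduced here:

```python
import numpy as np

def foreground_silhouette(frame, background, thresh=0.2):
    """Simple background subtraction: pixels that differ from the
    reference background by more than `thresh` are foreground."""
    return np.abs(frame - background) > thresh

# Toy example: a static background plus a bright square standing in
# for the head/foreground object.
bg = np.zeros((40, 40))
frame = bg.copy()
frame[5:15, 15:25] = 1.0           # foreground object
mask = foreground_silhouette(frame, bg)
print(mask.sum())                   # 100 foreground pixels
```

In practice the background model is adaptive (e.g. a running average or mixture of Gaussians) rather than a single static reference frame.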
Extraction is based on the type of features: geometric features and appearance features. The two basic concepts employed for extracting features are identifying facial deformation and identifying facial motion. Deformation-based features recognize the action units, and the classifier is trained to differentiate human emotional states based on the identified action units. Deformation-based extraction has been applied both to images [29] and to image sequences [30]. Motion-based features exploit the temporal correlation of facial expressions to identify variations within a probabilistic framework [15]. Image-based models extract features from images or from reduced-dimensional facial components [31]. Model-based features are usually shape or texture models fitted to human faces. The output of the feature extraction stage must contain separable and classifiable vectors. Active appearance models [32] and point distribution models [33] are fitted to the shapes of interest, and these shapes constitute the feature vectors. Expression extraction methods are broadly classified into two kinds, namely deformation extraction and motion extraction. Commonly used motion extraction techniques are dense optical flow [34], feature point tracking [35], and difference images [36].
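Of these motion extraction techniques, the difference image is the simplest to illustrate; the threshold and toy frames below are arbitrary assumptions:

```python
import numpy as np

def difference_image(curr, prev, thresh=0.1):
    """Threshold the absolute inter-frame difference; non-zero pixels
    mark regions where facial motion occurred."""
    return (np.abs(curr.astype(float) - prev.astype(float)) > thresh).astype(np.uint8)

prev = np.zeros((8, 8))
curr = prev.copy()
curr[2:4, 2:4] = 1.0       # simulated motion in a small facial region
print(difference_image(curr, prev).sum())   # 4 changed pixels
```

Dense optical flow and feature point tracking additionally recover the direction and magnitude of motion, which a plain difference image discards.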
The various techniques under the facial expression extraction methods are tabulated in the table.
Geometric extraction detects and tracks changes of facial components in near-frontal face images. Tian et al. developed multi-state models to extract the geometric facial features. A three-state lip model describes the lip state: open, closed, or tightly closed. A two-state model (open or closed) is used for each of the eyes. Each brow and cheek has a one-state model. Some appearance features, such as nasolabial furrows and crow's-feet wrinkles (Fig. 5), are represented explicitly using two states: present and absent.
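The multi-state idea can be sketched as follows; the state sets follow the description above, but the lip-opening measurement and its thresholds are invented for illustration:

```python
# Each facial component owns a fixed set of discrete states, as in the
# multi-state models described above.
COMPONENT_STATES = {
    "lips":   ("open", "closed", "tightly_closed"),   # three-state model
    "eye":    ("open", "closed"),                     # two-state model
    "furrow": ("present", "absent"),                  # transient feature
}

def lip_state(opening_ratio):
    """Map a hypothetical lip-opening ratio (mouth height / mouth width)
    to one of the three lip states; the cut-offs are illustrative."""
    if opening_ratio > 0.3:
        return "open"
    if opening_ratio > 0.05:
        return "closed"
    return "tightly_closed"

print(lip_state(0.5), lip_state(0.1), lip_state(0.0))   # prints: open closed tightly_closed
```

The point of the discretisation is that downstream action-unit rules can reason over a small symbolic state space instead of raw measurements.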
Model Based:
Automatic Active Appearance Model (AAM) mapping can be employed to reduce the manual preprocessing of geometric feature initialization. Xiao et al. [43] performed 3D head tracking to handle large out-of-plane head motion and to track non-rigid features. Once the head pose is recovered, the face region is stabilized by transforming the image to a common orientation for expression recognition [42].
Image Sequence:
Given an image sequence, the region of the face and the approximate locations of individual face features are detected automatically in the initial frame. The contours of the face features and components are then adjusted manually in the initial frame. After this initialization, all face feature changes are automatically detected and tracked through the image sequence. The system groups 15 parameters for the upper face [11] and 9 parameters for the lower face [12], which describe the shape, motion, and state of face components and furrows. To remove the effects of variation in planar head motion and in face scale between image sequences, all parameters are computed as ratios of their current values to those in the reference frame.
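The ratio normalisation in the last sentence can be sketched directly; the parameter names and values below are hypothetical:

```python
import numpy as np

# Hypothetical parameter vector (e.g. lip height, lip width, eye opening)
# measured in pixels in the reference (neutral) frame and a later frame.
reference = np.array([20.0, 50.0, 8.0])
current   = np.array([30.0, 50.0, 4.0])

# Expressing each parameter as a ratio to its reference value removes
# the effect of face scale and planar head motion between sequences:
# a value of 1.0 means "unchanged from the neutral frame".
normalised = current / reference
print(normalised)   # each parameter relative to its neutral value
```

Here the first parameter has grown by half, the second is unchanged, and the third has halved, regardless of how large the face appears in the image.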
Figure 6: Facial model. Figure 7: Feature points in the facial model: fiducial points marked by circles (global) and big black dots (local), and contour points marked by small black dots.
The preprocessing steps for the Gabor filters are: 1) manually detecting facial feature points, including the eyes, nose, and mouth; 2) rotating the image to line up the eye coordinates; 3) locating and cropping the face region with a rectangle according to the face model, as shown in Figure 6.
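Step 2, rotating to line up the eye coordinates, amounts to undoing the roll angle of the line joining the eye centres. A numpy sketch with hypothetical eye positions:

```python
import numpy as np

def eye_alignment_angle(left_eye, right_eye):
    """Angle (radians) of the line joining the eye centres; rotating the
    image by this amount makes the eyes horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return np.arctan2(dy, dx)

def rotate_point(p, centre, angle):
    """Rotate point p about `centre` by -angle (undoing the head roll)."""
    c, s = np.cos(-angle), np.sin(-angle)
    v = np.asarray(p, float) - centre
    return centre + np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

left, right = np.array([30.0, 40.0]), np.array([70.0, 50.0])
angle = eye_alignment_angle(left, right)
centre = (left + right) / 2
new_right = rotate_point(right, centre, angle)
print(round(float(np.degrees(angle)), 1))   # head roll of 14.0 degrees
```

After applying the same rotation to the whole image, both eyes lie on the same row, so the fixed cropping rectangle of step 3 covers comparable face regions across images.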
Image Sequence:
Techniques such as Haar-like features and facial feature tracking are used to identify the facial features that produce the expressions. A multi-modal tracking approach [24] is required to enable the state switching of facial components during the feature tracking process. Twenty-six fiducial points and 56 contour points are used in the facial model. Using the facial model, the fiducial points are marked across an image sequence using a feature tracking method. The marked features in an image sequence are shown in Figure 8.
2.3 EXPRESSION RECOGNITION:
Recognizing facial expressions is the last step in facial expression analysis. It is classified into two basic categories, namely frame-based expression recognition and sequence-based expression recognition.
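The distinction can be illustrated with a toy sketch: frame-based recognition labels each feature vector independently, while a sequence-based recognizer aggregates over time (here by simple majority vote; the centroids and the two-class setup are invented for illustration):

```python
import numpy as np

# Frame-based recognition: classify each feature vector on its own.
# Toy nearest-centroid rule over two hypothetical emotion classes.
centroids = {
    "happy":    np.array([1.0, 0.2]),
    "surprise": np.array([0.1, 1.0]),
}

def classify_frame(features):
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))

# Sequence-based recognition: label the whole sequence, here by a crude
# majority vote over per-frame decisions (real systems use temporal
# models such as HMMs instead).
def classify_sequence(frames):
    votes = [classify_frame(f) for f in frames]
    return max(set(votes), key=votes.count)

seq = [np.array([0.9, 0.3]), np.array([0.8, 0.1]), np.array([0.2, 0.9])]
print(classify_sequence(seq))   # prints: happy
```

Sequence-based methods can exploit the temporal evolution of an expression (onset, apex, offset), which frame-based methods ignore.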
3. DISCUSSION:
In this survey on automatic facial expression analysis, facial expression analysis is discussed with regard to motion- and deformation-based extraction methods, model- and image-based representation techniques, and recognition- and interpretation-based classification approaches. In classifying facial expressions, two approaches have been handled, namely classifying expressions based on the Facial Action Coding System [1][2], and direct or indirect interpretation of facial expressions. A few recent articles are discussed with respect to their recognition rate and action unit detection. The various techniques and their classification rates are listed below.
| Author(s) | Year | AUs | Expressions | Feature Extraction | Classifier | Recognition Rate |
| … | … | … | 4 | … | … | 86.7% |
| P. Geetha, Vasumathi Narayanan | 2010 | 24 | 10 | 2D Principal Component Analysis on video frames | Dynamic 2D Cellular Automata | 94.13% |
| Sander Koelstra, Maja Pantic, Ioannis Patras | 2010 | 27 | - | Quadtree Decomposition | Hidden Markov Model | 94.3% |
| Peng Yang, Qingshan Liu, Dimitris N. Metaxas | 2009 | 8 | 6 | Haar-like features | Adaboost | 96.6% |
| Le Hoang Thai, Nguyen Do Thai Nguyen, Tran Son Hai | 2011 | - | 6 | PCA | Neural Network | 85.7% |
| Guoying Zhao, Matti Pietikäinen | 2009 | - | 6 | Adaboost | Boosted multi-resolution spatio-temporal descriptors | 93.85% |
| Pooja Sharma | 2011 | - | 6 | Pattern Tracking | Optical-flow-based analysis | 83.33% |
Table 3: List of recent work and recognition rates.
In the above table, the first four systems perform facial expression classification based on the detected facial action elements. The remaining systems perform direct or indirect interpretation of facial expressions. Furthermore, facial expression intensities are studied in some systems for smile detection [16], pain detection [14], and identification of posed and genuine pain [49]. Many more algorithms focus on extraction and classification for an optimal recognition rate.
Studying expression intensities helps in identifying types of pain, such as chronic or acute. Expression recognition systems are used in many domains, such as telecommunications, behavioural science, video games, animation, psychiatry, automobile safety, affect-sensitive music jukeboxes and televisions, and educational software.
5. CONCLUSION:
The objective of this paper is to present a clear survey of the structure of facial expression analysis. The steps involved in expression analysis, namely face acquisition, feature extraction, and expression classification, have been discussed. Each step is discussed along with the approaches and methods that can be applied to attain the required goal. Expression recognition based on FACS and on direct or indirect interpretation is also discussed, with some of the recent research work. Although many researchers have investigated facial expressions, the basic expressions of happiness, sadness, disgust, and surprise have been the most widely discussed topics. Topics such as expression recognition during spontaneous movement, intensity of expression, detection of combinations of facial action elements, temporal segmentation, and pain analysis are still topics of interest that remain to be explored.
REFERENCES:
[1] G. Donato, M.S. Bartlett, J.C. Hager, P. Ekman, T.J. Sejnowski, "Classifying Facial Actions", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 21, No. 10, pp. 974-989, 1999.
[2] P. Ekman and W.V. Friesen, "Facial Action Coding System", Consulting Psychologists Press Inc., 577 College Avenue, Palo Alto, California 94306, 1978.
[3] A. Mehrabian, "Communication without Words", Psychology Today, Vol. 2, No. 4, pp. 53-56, 1968.
[4] P. Dulguerov, F. Marchal, D. Wang, C. Gysin, P. Gidley, B.Gantz, J. Rubinstein, S. Sei7, L.
Poon, K. Lun, Y. Ng, “Review Of objective topographic facial nerve evaluation methods”, Am.J.
Otol. 20 (5) (1999) 672–678.
[5] J.Ostermann,“Animation of synthetic faces in Mpeg-4”, Computer Animation, pp. 49-51,
Philadelphia, Pennsylvania,June 8-10, 1998
[6] B. Fasel, Juergen Luettin, "Automatic facial expression analysis: a survey", Pattern Recognition 36 (2003) 259-275.
[7] Maja Pantic, Student Member, IEEE, and Leon J.M. Rothkrantz, “Automatic Analysis of Facial
Expressions:The State of the Art”, IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 22, No. 12, December 2000
[8] Vinay Kumar Bettadapura, “Face Expression Recognition and Analysis:The State of the Art”.
College of Computing, Georgia Institute of Technology.
[9] P. Ekman and W.V. Friesen, "Manual for the Facial Action Coding System", Consulting Psychologists Press, 1977.
[10] P. Ekman, W.V. Friesen, J.C. Hager, "Facial Action Coding System Investigator's Guide", A Human Face, Salt Lake City, UT, 2002.
[11] Yingli Tian, Takeo Kanade and Jeffrey F. Cohn, "Recognizing Upper Face Action Units for Facial Expression Analysis".
[12] Ying-li Tian, Takeo Kanade, Jeffrey F. Cohn, "Recognizing Lower Face Action Units for Facial Expression Analysis".
[13] Anastasios C. Koutlas, Dimitrios I. Fotiadis, "A Region Based Methodology for Facial Expression Recognition", Systems, Man and Cybernetics, 2008 (SMC 2008).
[14] Ahmed Bilal Ashraf, Simon Lucey, Jeffrey F. Cohn, Tsuhan Chen, Zara Ambadar, Kenneth M.
Prkachin,Patricia E. Solomon”The painful face – Pain expression recognition using active
appearance models”, Image and Vision Computing 27 (2009) 1788–1796
[15] Maja Pantic, Ioannis Patras, "Dynamics of facial expression and their temporal segments from face profile image sequences", IEEE Transactions on Systems, Man and Cybernetics.
[16] Jacob Whitehill ,Gwen Littlewort ,Ian Fasel,Marian Bartlett, Member IEEE,Javier Movellan.
“Toward Practical Smile Detection” , IEEE Transactions on Pattern Analysis and Machine
Intelligence , Vol 31.No11. November 2009.
[17] Marian Stewart Bartlett,Gwen C.Littlewort , Mark.G.Frank,Claudia Lainscsek,Ian R.Fasel,Javier
Movellan,”Automatic Recognition of facial actions in spontaneous expressions”,Journal of
Multimedia Vol 1,No.6 September 2006.
[18] M.S. Bartlett, G. Littlewort, I. Fasel, J.R. Movellan, "Real time face detection and facial expression recognition: Development and application to human-computer interaction", Proceedings, CVPR Workshop on Computer Vision and Pattern Recognition for Human-Computer Interaction.
[19] H. Rowley, S. Baluja, T. Kanade, "Neural network-based face detection", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 20, No. 1, pp. 23-38.
[20] K.K. Sung and T. Poggio, "Example-based learning for view-based human face detection", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 20, No. 1, pp. 39-51.
[21] P. Viola, M. Jones, "Robust real-time face detection", International Journal of Computer Vision, Vol. 57, No. 2, pp. 137-154, 2004.
[22] P. Wang, Q. Ji, "Multi-view face detection under complex scene based on combined SVMs", Proceedings, IEEE International Conference on Pattern Recognition, 2004, Vol. 4, pp. 174-182.
[23] Mohammed Yeasin, Baptiste Bullot, Rajeev Sharma, "Recognition of Facial Expressions and Measurement of Levels of Interest from Video", IEEE Transactions on Multimedia, Vol. 8, No. 3, June 2006.
[24] Yan Tong, Yang Wang, Zhiwei Zhu, Qiang Ji, "Robust Facial Feature Tracking under Varying Face Pose and Facial Expression", Pattern Recognition (40), 2007.
[25] L. Wiskott, J.M. Fellous, N. Krüger, C.V. der Malsburg, “Face recognition by elastic bunch
graph matching”, IEEE Trans. Pattern Anal. Mach. Intell. 19 (7) (1997) 775–779
[26] Iordanis Mpiperis, Sotiris Malassiotis and Michael G. Strintzis, "Bilinear Models for 3D Face and Facial Expression Recognition", IEEE Transactions on Information Forensics and Security.
[27] Jun Wang, Lijun Yin, Xiaozhou Wei and Yi Sun, "3D facial expression recognition based on primitive surface feature distribution", Department of Computer Science, State University of New York at Binghamton.
[28] Tian, Y.-L., Brown, L., Hampapur, A., Pankanti, S., Senior, A., Bolle, R.: “Real world realtime
automatic recognition of facial expressions”. In: Proceedings of IEEE Workshop on
Performance Evaluation of Tracking and Surveillance, Graz, Austria (2003)
[29] Maja Pantic, Leon J.M. Rothkrantz, "Facial Action Recognition for Facial Expression Analysis from Static Face Images", IEEE Transactions on Systems, Man and Cybernetics, Vol. 34, No. 3, 2004.
[30] Irene Kotsia and Ioannis Pitas, "Facial Expression Recognition in Image Sequences Using Geometric Deformation Features and Support Vector Machines", IEEE Transactions on Image Processing, Vol. 16, No. 1, January 2007.
[31] Hong-Bo Deng, Lian-Wen Jin, Li-Xin Zhen, Jian-Cheng Huang, "A New Facial Expression Recognition Method Based on Local Gabor Filter Bank and PCA plus LDA", International Journal of Information Technology, Vol. 11, No. 11, 2005.
[32] S. Lucey, A. Ashraf, and J. Cohn, “Investigating Spontaneous Facial Action Recognition
through AAM Representations of the Face,” Face Recognition, K. Delac and M. Grgic, eds., pp.
275-286, I-Tech Education and Publishing, 2007.
[33] C. Huang, Y. Huang, "Facial expression recognition using model-based feature extraction and action parameters classification", J. Visual Commun. Image Representation 8 (3), 1997.
[34] Gabriele Fanelli, Angela Yao, Pierre-Luc Noel, Juergen Gall, and Luc Van Gool, “Hough
Forest-based Facial Expression Recognition from Video Sequences”. International Workshop on
Sign, Gesture and Activity (SGA) 2010, in conjunction with ECCV 2010.September 2010.
[35] Pooja Sharma, Feature Based Method for “Human Facial Emotion Detection using optical Flow
Based Analysis”, International Journal of Research in Computer Science eISSN 2249-8265
Volume 1 Issue 1 (2011) pp. 31-38
[36] Sander Koelstra, Maja Pantic, and Ioannis (Yiannis) Patras, "A Dynamic Texture-Based Approach to Recognition of Facial Actions and Their Temporal Models", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 11, November 2010.
[37] Devi Arumugam, S. Purushothaman, "Emotion Classification Using Facial Expression", International Journal of Advanced Computer Science and Applications, Vol. 2, No. 7, 2011.
[38] Shishir Bashyal, Ganesh K. Venayagamoorthy, "Recognizing facial expressions using Gabor wavelets and vector quantization", Engineering Applications of Artificial Intelligence (21), 2008.
[39] Petar S. Aleksic, Aggelos K. Katsaggelos, "Automatic Facial Expression Recognition Using Facial Animation Parameters and Multistream HMMs", IEEE Transactions on Information Forensics and Security, Vol. 1, No. 1, March 2006.
[40] Marian Stewart Bartlett, Gwen Littlewort, Mark Frank, Claudia Lainscsek, Ian Fasel, Javier Movellan, "Recognizing Facial Expression: Machine Learning and Application to Spontaneous Behavior", Computer Vision and Pattern Recognition, 2005.
[41] Peng Yang, Qingshan Liu, Dimitris N. Metaxas, "Boosting Encoded Dynamic Features for Facial Expression Recognition", Pattern Recognition Letters (30), 2009.
[42] Moriyama, T., Kanade, T., Cohn, J., Xiao, J., Ambadar, Z., Gao, J., Imamura, M., "Automatic recognition of eye blinking in spontaneously occurring behaviour", in: Proceedings of the 16th International Conference on Pattern Recognition (ICPR 2002), Vol. 4, pp. 78-81, 2002.
[43] Xiao, J., Moriyama, T., Kanade, T., Cohn, J.: “Robust full-motion recovery of head by dynamic
templates and re-registration techniques”. Int. J. Imaging Syst. Technol. (2003)
[44] Le Hoang Thai, Nguyen Do Thai Nguyen and Tran Son Hai, "A Facial Expression Classification System Integrating Canny, Principal Component Analysis and Artificial Neural Network", International Journal of Machine Learning and Computing, Vol. 1, No. 4, October 2011.
[45] L. Ma and K. Khorasani, "Facial Expression Recognition Using Constructive Feedforward Neural Networks", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 34, No. 3, June 2004.
[46] Amir Jamshidnezhad, Md Jan Nordin, "A Classifier Model Based on the Features Quantitative Analysis for Facial Expression Recognition", Proceedings of the International Conference on Advanced Science, Engineering and Information Technology, 2011.
[47] Maja Pantic and Ioannis Patras, “Detecting Facial Actions and their Temporal Segments in
Nearly Frontal-View Face Image Sequences”, 2005 IEEE International Conference on Systems,
Man and Cybernetics Waikoloa, Hawaii October 10-12, 2005
[48] Yunfeng Zhu, Fernando De la Torre, Jeffrey F. Cohn, and Yu-Jin Zhang, "Dynamic Cascades with Bidirectional Bootstrapping for Action Unit Detection in Spontaneous Facial Behavior", Journal of LaTeX Class Files, October 2010.
[49] Gwen C. Littlewort, Marian Stewart Bartlett, Kang Lee, “Faces of Pain: Automated
Measurement of Spontaneous Facial Expressions of Genuine and Posed Pain”, ICMI’07,
November 12–15, 2007, Nagoya, Aichi, Japan.