Facial Expression Analysis under Partial Occlusion: A Survey

Published: 18 April 2018
Abstract

    Automatic machine-based Facial Expression Analysis (FEA) has made substantial progress over the past few decades, driven by its importance for applications in psychology, security, health, entertainment, and human–computer interaction. The vast majority of existing FEA studies are based on nonoccluded faces collected in controlled laboratory environments. Automatic expression recognition that tolerates partial occlusion remains far less understood, particularly in real-world scenarios. In recent years, research into techniques for handling partial occlusion in FEA has increased, and the time is right for a comprehensive review of these developments and of the current state of the art. This survey provides such a review, covering recent advances in dataset creation, algorithm development, and investigations of the effects of occlusion, all of which are critical for robust FEA performance. It outlines the challenges that partial occlusion still poses and discusses opportunities for advancing the technology. To the best of our knowledge, it is the first FEA survey dedicated to occlusion, and it aims to promote better-informed and better-benchmarked future work.
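    A number of the works covered in this survey evaluate occlusion robustness by synthetically masking regions of otherwise clean face images (for example, with random occluding blocks) when naturally occluded data is scarce. Purely as an illustration of that style of evaluation protocol, the following is a minimal NumPy sketch; the function name, square block shape, and coverage fraction are illustrative assumptions rather than the procedure of any particular surveyed paper.

        import numpy as np

        def occlude_random_block(face, frac=0.25, rng=None):
            # Overlay a single zero-valued square block covering roughly
            # `frac` of the image area, at a uniformly random position.
            # Illustrative sketch of synthetic random-block occlusion for
            # stress-testing expression recognizers; not any paper's exact code.
            rng = rng or np.random.default_rng()
            h, w = face.shape[:2]
            side = int(np.sqrt(frac * h * w))       # side length of the block
            top = int(rng.integers(0, h - side + 1))
            left = int(rng.integers(0, w - side + 1))
            occluded = face.copy()
            occluded[top:top + side, left:left + side] = 0
            return occluded

        # Usage: mask about 25% of a 96x96 face crop before feeding it to a
        # recognizer whose occlusion tolerance is being measured.
        face = np.full((96, 96), 128, dtype=np.uint8)
        masked = occlude_random_block(face, frac=0.25)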

    Published In

    ACM Computing Surveys, Volume 51, Issue 2
    March 2019
    748 pages
    ISSN: 0360-0300
    EISSN: 1557-7341
    DOI: 10.1145/3186333
    Editor: Sartaj Sahni

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 18 April 2018
    Accepted: 01 November 2017
    Revised: 01 November 2017
    Received: 01 March 2017
    Published in CSUR Volume 51, Issue 2

    Author Tags

    1. Facial expression analysis
    2. emotion recognition
    3. overview
    4. partial occlusion
    5. survey

    Qualifiers

    • Survey
    • Research
    • Refereed

    Funding Sources

    • Australian Research Council's Linkage Projects funding scheme
