
Facial Landmark-Based Human Emotion Recognition Technique for Oriented Viewpoints in the Presence of Facial Attributes

  • Original Research
  • Published in SN Computer Science (2023)

Abstract

With the expansion of machine learning and deep learning technology, facial expression recognition methods have become more accurate and precise. In real-world scenarios, however, facial attributes, weakly posed expressions, and variation in viewpoint can significantly degrade systems designed only for frontal faces without facial attributes. This paper proposes a facial landmark distance-based model that effectively recognizes emotions in oriented faces with facial attributes. The model computes distance-based features from the spacing between facial landmarks generated by a face mesh algorithm on pre-processed images. These features are normalized and ranked to select the optimal subset for classifying emotions. Experimental results show that the model classifies emotions in the IIITM Face dataset (vertically oriented faces with different facial attributes) with an overall accuracy of 61% using an SVM classifier, and recognizes emotions posed in front, up, and down orientations with 70%, 58%, and 55% accuracy, respectively. An efficacy test on laterally oriented faces from the KDEF database yields an overall accuracy of 80%. Comparison with existing CNN-based and facial landmark-based methods shows that the proposed model achieves an improved recognition rate for oriented viewpoints with facial attributes. These results make it apparent that vertical viewpoints, the degree to which expressions are forced, and facial attributes can limit the performance of emotion recognition algorithms in realistic situations.
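For readers who want to experiment with the approach the abstract outlines, the sketch below reproduces its overall pipeline in Python: a MediaPipe face mesh supplies the landmarks, pairwise inter-landmark distances form the feature vector, the distances are normalized and ranked, and an SVM performs the classification. The specific landmark indices, the ANOVA F-score ranking criterion, and the number of retained features (k = 40) are illustrative assumptions, not the authors' exact configuration, which the abstract does not specify.

```python
# Minimal sketch of the pipeline described in the abstract, assuming the
# MediaPipe Face Mesh for landmarks and scikit-learn for the classifier.
import itertools

import cv2
import mediapipe as mp
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# A subset of the 468 face-mesh points around the eyes, brows, mouth, and
# nose (hypothetical choice; the paper's indices are not given here).
LANDMARK_IDS = [33, 133, 362, 263, 70, 105, 336, 300, 61, 291, 13, 14, 1, 152]

def distance_features(image_bgr):
    """Pairwise distances between selected face-mesh landmarks, or None."""
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
        result = mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None  # no face detected in this image
    lm = result.multi_face_landmarks[0].landmark
    pts = np.array([(lm[i].x, lm[i].y) for i in LANDMARK_IDS])
    # Inter-landmark spacing over all pairs forms the feature vector.
    return np.array([np.linalg.norm(pts[a] - pts[b])
                     for a, b in itertools.combinations(range(len(pts)), 2)])

# Normalize the distances, rank them and keep the top k, then classify with
# an SVM, mirroring the normalize -> rank -> classify steps in the abstract.
model = make_pipeline(
    MinMaxScaler(),                # feature normalization
    SelectKBest(f_classif, k=40),  # feature ranking/selection (k assumed)
    SVC(kernel="rbf"),             # SVM classifier
)
```

Training is then a single call, e.g. `model.fit(X_train, y_train)` on stacked distance vectors with emotion labels, after which `model.score(X_test, y_test)` reports accuracy; evaluating separately on front-, up-, and down-oriented subsets would mirror the per-orientation results reported above.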

Data availability

All data generated or analyzed during this study are included in this manuscript. The IIITM Face Emotion dataset is freely available for research purposes at https://www.sensigi.com/resources/research.

References

  1. Joseph A, Geetha P. Facial emotion detection using modified eyemap-mouthmap algorithm on an enhanced image and classification with tensorflow. Visual Comput. 2020;36(3):529–39. https://doi.org/10.1007/s00371-019-01628-3.

  2. Fragopanagos N, Taylor JG. Emotion recognition in human-computer interaction. Neural Networks. 2005;18(4):389–405.

  3. Greche L, Akil M, Kachouri R, Es-sbai N. A new pipeline for the recognition of universal expressions of multiple faces in a video sequence. J Real Time Image Process. 2020;17(5):1389–402.

  4. Iyer A, Das SS, Teotia R, Maheshwari S, Sharma RR. CNN and LSTM based ensemble learning for human emotion recognition using EEG recordings. Multimedia Tools Appl. 2022.

  5. Chen T, Yin H, Yuan X, Gu Y, Ren F, Sun X. Emotion recognition based on fusion of long short-term memory networks and SVMs. Digital Signal Process. 2021;117:103153.

  6. Chen Y, Yang Z, Wang J. Eyebrow emotional expression recognition using surface EMG signals. Neurocomputing. 2015;168:871–9.

  7. Wang K, An N, Li BN, Zhang Y, Li L. Speech emotion recognition using Fourier parameters. IEEE Trans Affect Comput. 2015;6(1):69–75.

  8. Pabba C, Kumar P. An intelligent system for monitoring students’ engagement in large classroom teaching through facial expression recognition. Expert Syst. 2022;39(1):e12839.

  9. Sukhavasi SB, Sukhavasi SB, Elleithy K, El-Sayed A, Elleithy A. A hybrid model for driver emotion detection using feature fusion approach. Int J Environ Res Public Health. 2022;19(5):3085.

  10. Samadiani N, Huang G, Luo W, Chi CH, Shu Y, Wang R, Kocaturk T. A multiple feature fusion framework for video emotion recognition in the wild. Concurr Comput. 2022;34(8):e5764.

  11. Savin AV, Sablina VA, Nikiforov MB. Comparison of facial landmark detection methods for micro-expressions analysis. In: 2021 10th Mediterranean Conference on Embedded Computing (MECO), IEEE. 2021;pp. 1–4.

  12. Siam AI, Soliman NF, Algarni AD, El-Samie A, Fathi E, Sedik A. Deploying machine learning techniques for human emotion detection. Comput Intell Neurosci. 2022.

  13. Gomez LF, Morales A, Orozco-Arroyave JR, Daza R, Fierrez J. Improving parkinson detection using dynamic features from evoked expressions in video. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021;pp. 1562–1570.

  14. Rao Q, Qu X, Mao Q, Zhan Y. Multi-pose facial expression recognition based on SURF boosting. In: 2015 International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE 2015;pp. 630–635.

  15. Majumder A, Behera L, Subramanian VK. Emotion recognition from geometric facial features using self-organizing map. Pattern Recogn. 2014;47(3):1282–93.

  16. Sun N, Li Q, Huan R, Liu J, Han G. Deep spatial-temporal feature fusion for facial expression recognition in static images. Pattern Recogn Lett. 2019;119:49–61.

  17. Rudovic O, Pantic M, Patras I. Coupled Gaussian processes for pose-invariant facial expression recognition. IEEE Trans Pattern Anal Mach Intell. 2012;35(6):1357–69.

  18. Zhang T, Zheng W, Cui Z, Zong Y, Yan J, Yan K. A deep neural network-driven feature learning method for multi-view facial expression recognition. IEEE Trans Multimedia. 2016;18(12):2528–36.

  19. Gera D, Balasubramanian S, Jami A. CERN: compact facial expression recognition net. Pattern Recogn Lett. 2022;155:9–18.

  20. Hariri W, Farah N. Recognition of 3D emotional facial expression based on handcrafted and deep feature combination. Pattern Recogn Lett. 2021;148:84–91.

  21. Langner O, Dotsch R, Bijlstra G, Wigboldus DH, Hawk ST, Van Knippenberg A. Presentation and validation of the Radboud Faces Database. Cogn Emot. 2010;24(8):1377–88.

  22. Lyons M, Kamachi M, Gyoba J. The Japanese Female Facial Expression (JAFFE) Dataset. 1998. https://doi.org/10.5281/zenodo.3451524.

  23. Zhao G, Huang X, Taini M, Li SZ, Pietikäinen M. Facial expression recognition from near-infrared videos. Image Vis Comput. 2011;29(9):607–19.

  24. Kanade T, Cohn JF, Tian Y. Comprehensive database for facial expression analysis. In: Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580). IEEE 2000;pp. 46–53.

  25. Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I. The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. IEEE. 2010;pp. 94–101.

  26. Yin L, Wei X, Sun Y, Wang J, Rosato MJ. A 3D facial expression database for facial behavior research. In: 7th International Conference on Automatic Face and Gesture Recognition (FGR06). IEEE 2006;pp. 211–216.

  27. Valstar M, Pantic M, et al. Induced disgust, happiness and surprise: an addition to the MMI facial expression database. In: Proc. 3rd Intern. Workshop on EMOTION (satellite of LREC): Corpora for Research on Emotion and Affect. Paris, France. 2010;p. 65.

  28. Lundqvist D, Flykt A, Öhman A. The Karolinska Directed Emotional Faces. Cognition and Emotion. 1998.

  29. Arya KVS, Gupta RK, Agarwal S, Gupta P. IIITM Face: A database for facial attribute detection in constrained and simulated unconstrained environments. In: Proceedings of the 7th ACM IKDD CoDS and 25th COMAD. 2020;pp. 185–189.

  30. Lugaresi C, Tang J, Nash H, McClanahan C, Uboweja E, Hays M, Zhang F, Chang CL, Yong MG, Lee J, et al. MediaPipe: a framework for building perception pipelines. arXiv preprint. 2019. arXiv:1906.08172.

  31. Kartynnik Y, Ablavatski A, Grishchenko I, Grundmann M. Real-time facial surface geometry from monocular video on mobile GPUs. arXiv preprint. 2019. arXiv:1907.06724.

  32. Theodoridis S, Koutroumbas K. Pattern recognition. Elsevier. 2006.

  33. Liu H, Motoda H. Feature selection for knowledge discovery and data mining, vol. 454. Springer Science & Business Media; 2012.

  34. Cortes C, Vapnik V. Support-vector networks. Mach Learn. 1995;20(3):273–97.

  35. Suykens JA, Vandewalle J. Least squares support vector machine classifiers. Neural Process Lett. 1999;9(3):293–300.

  36. Maheshwari S, Sharma RR, Kumar M. LBP-based information assisted intelligent system for COVID-19 identification. Comput Biol Med. 2021;134:104453.

  37. Goodfellow IJ, Erhan D, Carrier PL, Courville A, Mirza M, Hamner B, Cukierski W, Tang Y, Thaler D, Lee DH, et al. Challenges in representation learning: A report on three machine learning contests. In: International conference on neural information processing. Springer. 2013;pp. 117–124.

  38. Tan M, Le Q. EfficientNetV2: smaller models and faster training. In: International Conference on Machine Learning. PMLR. 2021;pp. 10096–10106.

  39. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016;pp. 770–778.

  40. Chollet F. Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2017;pp. 1251–1258.

  41. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016;pp. 2818–2826.

  42. Szegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In: Thirty-First AAAI Conference on Artificial Intelligence. 2017.

  43. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint. 2014. arXiv:1409.1556.

Funding

No funds, grants, or other support was received.

Author information

Corresponding author

Correspondence to Rishi Raj Sharma.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose, and no competing interests relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Sharma, U., Faisal, K.N., Sharma, R.R. et al. Facial Landmark-Based Human Emotion Recognition Technique for Oriented Viewpoints in the Presence of Facial Attributes. SN COMPUT. SCI. 4, 273 (2023). https://doi.org/10.1007/s42979-023-01727-y

  • DOI: https://doi.org/10.1007/s42979-023-01727-y
