Fusion of facial expressions and EEG for implicit affective tagging

Published: 01 February 2013

Abstract

The explosion of user-generated, untagged multimedia data in recent years generates a strong need for efficient search and retrieval of this data. The predominant method for content-based tagging is slow, labor-intensive manual annotation. Consequently, automatic tagging is currently a subject of intensive research. However, it is clear that the process will not be fully automated in the foreseeable future. We propose to involve the user and investigate methods for implicit tagging, wherein users' responses to the interaction with the multimedia content are analyzed in order to generate descriptive tags. Here, we present a multi-modal approach that analyses both facial expressions and electroencephalography (EEG) signals for the generation of affective tags. We perform classification and regression in the valence-arousal space and present results for both feature-level and decision-level fusion. We demonstrate improved results when using both modalities, suggesting that the two modalities contain complementary information.
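To make the two fusion strategies named in the abstract concrete, below is a minimal sketch of feature-level versus decision-level fusion for binary valence classification. Everything in it is a hypothetical placeholder rather than the paper's actual pipeline: it assumes precomputed per-trial facial-expression and EEG feature matrices (`X_face`, `X_eeg`) with labels `y`, and uses scikit-learn logistic regression purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-trial features: X_face (e.g. facial action unit
# activations) and X_eeg (e.g. EEG band-power features), with binary
# valence labels y (0 = low valence, 1 = high valence).
rng = np.random.default_rng(0)
n_trials = 200
X_face = rng.normal(size=(n_trials, 40))
X_eeg = rng.normal(size=(n_trials, 160))
y = rng.integers(0, 2, size=n_trials)

# Feature-level fusion: concatenate both modalities into one feature
# vector and train a single classifier on the joint representation.
feat_clf = LogisticRegression(max_iter=1000).fit(np.hstack([X_face, X_eeg]), y)

# Decision-level fusion: train one classifier per modality, then combine
# their posterior probabilities (a simple unweighted average here).
face_clf = LogisticRegression(max_iter=1000).fit(X_face, y)
eeg_clf = LogisticRegression(max_iter=1000).fit(X_eeg, y)
p_fused = 0.5 * (face_clf.predict_proba(X_face)[:, 1]
                 + eeg_clf.predict_proba(X_eeg)[:, 1])
fused_pred = (p_fused >= 0.5).astype(int)
```

In a real evaluation the fused predictions would be computed on held-out trials rather than the training data; the in-sample prediction above only keeps the sketch short.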




Published In

Image and Vision Computing, Volume 31, Issue 2 (February 2013), 107 pages

Publisher

Butterworth-Heinemann, United States


Author Tags

1. Affective computing
2. EEG
3. Emotion classification
4. Facial expressions
5. Pattern classification
6. Signal processing

