DOI: 10.1145/3536221.3556569

Personalized Productive Engagement Recognition in Robot-Mediated Collaborative Learning

Published: 07 November 2022

Abstract

In this paper, we propose and compare personalized models for Productive Engagement (PE) recognition, where PE is defined as the level of engagement that maximizes learning. In previous work on robot-mediated collaborative learning, a productive engagement framework was developed from the multimodal data of 32 dyads, and three learning profiles that categorize learners according to their learning gain were identified: Expressive Explorers (EE), Calm Tinkerers (CT), and Silent Wanderers (SW). Within the same framework, a PE score was constructed in an unsupervised manner for real-time evaluation. Here, we use these profiles and the PE score within an AutoML deep learning framework to personalize PE models. We investigate two approaches: (1) Single-task Deep Neural Architecture Search (ST-NAS) and (2) Multi-task NAS (MT-NAS). In the former, personalized models for each learner profile are learned from multimodal features and compared to non-personalized models. In the MT-NAS approach, we investigate whether jointly classifying the learners' profiles and the engagement score through multi-task learning serves as an implicit personalization of PE. Moreover, we compare the predictive power of two types of features: incremental and non-incremental. Non-incremental features are computed from the participant's behaviours within fixed time windows, whereas incremental features account for the behaviour from the beginning of the learning activity until the time window in which productive engagement is observed. Our experimental results show that (1) personalized models improve recognition performance over non-personalized models when training models for the gainer vs. non-gainer groups, (2) multi-task NAS (implicit personalization) also outperforms non-personalized models, (3) the speech modality contributes strongly to prediction, and (4) non-incremental features outperform incremental ones overall.
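The distinction between the two feature schemes can be sketched in a few lines. The following is a minimal illustration, not code from the paper: the per-second speech-activity trace, the window length, and the function names are all illustrative assumptions.

```python
# Illustrative sketch of the two feature schemes described in the abstract.
# `signal` is a hypothetical per-second behavioural trace (e.g., speech
# activity); real features in the paper are multimodal.

def non_incremental_feature(signal, window_start, window_len):
    """Aggregate behaviour over the fixed time window only."""
    window = signal[window_start:window_start + window_len]
    return sum(window) / len(window)

def incremental_feature(signal, window_start, window_len):
    """Aggregate behaviour from the start of the activity up to and
    including the current window, so history accumulates over time."""
    span = signal[:window_start + window_len]
    return sum(span) / len(span)

speech = [0, 1, 1, 0, 1, 1, 1, 0]               # toy speech-activity trace
print(non_incremental_feature(speech, 4, 4))    # 0.75: current window only
print(incremental_feature(speech, 4, 4))        # 0.625: full history so far
```

Under this sketch, an incremental feature smooths over the learner's whole trajectory, while a non-incremental one reflects only the moment at which productive engagement is assessed.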


Cited By

  • (2024) Automatic Context-Aware Inference of Engagement in HMI: A Survey. IEEE Transactions on Affective Computing 15, 2 (Apr. 2024), 445–464. https://doi.org/10.1109/TAFFC.2023.3278707
  • (2024) From the Definition to the Automatic Assessment of Engagement in Human–Robot Interaction: A Systematic Review. International Journal of Social Robotics 16, 7 (Jun. 2024), 1641–1663. https://doi.org/10.1007/s12369-024-01146-w


Published In

ICMI '22: Proceedings of the 2022 International Conference on Multimodal Interaction
November 2022
830 pages
ISBN:9781450393904
DOI:10.1145/3536221

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Embodied Interaction
  2. Engagement Prediction
  3. Human-robot/Agent Interaction
  4. Personalization
  5. Personalized Affective Computing
  6. Social Robotics in Education
  7. Social Signals

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICMI '22

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%

Article Metrics

  • Downloads (Last 12 months)73
  • Downloads (Last 6 weeks)1
Reflects downloads up to 09 Sep 2024

