3rd UK-RAS Conference for PhD Students & Early Career Researchers, Hosted virtually by University of Lincoln, April 2020

Does Expression of Grounded Affect in a Hexapod Robot Elicit More Prosocial Responses?

Luke Hickton (EECAIA Lab, ASRG, Dept. of Computer Science, Univ. Hertfordshire, UK, l.hickton2@herts.ac.uk)
Matthew Lewis (EECAIA Lab, ASRG, Dept. of Computer Science, Univ. Hertfordshire, UK, m.lewis4@herts.ac.uk)
Kheng Lee Koay (ASRG, Dept. of Computer Science, Univ. Hertfordshire, UK, k.l.koay@herts.ac.uk)
Lola Cañamero (EECAIA Lab, ASRG, Dept. of Computer Science, Univ. Hertfordshire, UK, l.canamero@herts.ac.uk)

https://doi.org/10.31256/Hz3Ww4T

Abstract—We consider how non-humanoid robots can communicate their affective state via bodily forms of communication, and the extent to which this can influence human responses. We propose a simple model of grounded affect and kinesic expression and outline two experiments (N=9 and N=180) in which participants were asked to watch expressive and non-expressive hexapod robots perform different ‘scenes’. Our preliminary findings suggest that the expressive robot stimulated a greater desire for interaction, was more likely to be attributed with emotion, and elicited a greater desire for prosocial behaviour.

Index Terms—Human Robot Interaction (HRI), Situated Robots, Expression, Kinesics, Embodied Affect

I. INTRODUCTION

Research in social psychology on the expression and interpretation of affect has typically focussed on facial expressions [1]. Most Human-Robot Interaction (HRI) research reflects this trend, with the generation and interpretation of facial expressions receiving more attention than bodily forms of expression [2]–[5]. Furthermore, much of this work pertains to humanoid morphologies. This paper describes how animal-like forms of bodily expression, coupled with a grounded model of affect, could enable situated robots of varying shapes and sizes to communicate their needs effectively in a socially evocative manner [6]. Our approach differs from works such as [7]–[9] in that we adopt a robot-centred approach [10], seeking to model the underlying substrate of emotion and to ground expression in actions that provide adaptive benefits to the robot. The topics of emotion, expression and context of interpretation recur throughout this paper, so we introduce them briefly below.

A. Emotion

Emotion can be described broadly in terms of physiological arousal, expressive behaviours and conscious experience [11]. Two perspectives dominate the classification of emotion: discrete and dimensional. Discrete theories, which include [12]–[15], propose that there are a finite number of distinguishable basic emotions, whilst dimensional models, such as [16]–[18], represent the key aspects of emotion along a set of continuous axes. The dimensional model used in this work focuses on arousal only, leaving valence to be inferred by the observer from the environmental context.

B. Expression

Expression can be considered the communication of emotion via facial and bodily movement. Some argue that such signals are principally aimed at influencing the behaviour of others within a social group [19], whilst others regard them as accurate indicators of underlying emotional state [12]. Darwin was amongst the first to suggest that expressive communication may have arisen from mechanisms that provide adaptive benefits [20]. This work is consistent with the Darwinian perspective: we propose kinesic responses that are primarily intended to provide adaptive benefits to the robot, rather than attempting to convey the outward aspects of discrete human emotions.

C. Context

The interpretation of kinesic signals does not occur in a vacuum; the broader environmental context determines how such information is processed by an observer. Heider and Simmel were among the first to stress the importance of situational context, an element seldom considered in studies of kinesics [21]. Whilst certain characteristics, such as origin of movement [21] and changes of speed or direction [22], have been shown to create a perception of animacy, the nature of the robot’s interactions with its environment will also determine whether it is attributed with motivations, beliefs or desires: a mode of interpretation Dennett describes as the Intentional Stance [23].

II. RESEARCH QUESTIONS

In light of the points outlined above, the following hypotheses were defined in order to examine the processes humans use to make judgements about robots, make sense of their behaviour, and decide how to respond to them:

• The actions of a robot are more likely to be interpreted as those of an intentional agent if it is able to express arousal.
• A robot that is able to express arousal will evoke greater empathy and emotional response from human observers.
• A robot that is able to express arousal will ultimately provoke a greater desire for prosocial interaction.

III. ARCHITECTURE

Fig. 1. Diagram illustrating the model of affect employed.

Fig. 1 illustrates our model of affect, which is loosely based on the mammalian stress response. The model features two hormones, E and C, which broadly correspond to epinephrine and cortisol in mammals. The first provides a rapid but brief response to relevant external stimuli, whilst the second is a longer-term response to repeated stressful episodes or to deficits in internal variables. These hormones directly modulate five kinesic properties: stance radius, stance height, step length, step height and movement speed [24]. Each of these properties affects the movement of the robot, providing both an adaptive benefit and an associated cost. For example, a faster movement speed enables rapid responses to potential threats, but also consumes more energy and places strain on the robot’s actuators.

IV. EXPERIMENTS AND PRELIMINARY RESULTS

Two related experiments were conducted to test the hypotheses outlined above. The first was a qualitative study that provided detailed insights and identified areas of particular interest. The second built on these insights and captured data from a much larger group of participants.

The dependent variables in both experiments were the participants’ general perception of the robot, their emotional response towards it, their understanding of its behaviour and motivations and, ultimately, their willingness to actively assist it. The independent variable was the robot’s expressive capability. A between-group design was therefore adopted for both experiments to control this attribute, with participants divided evenly into two groups, A and B. The additional expressive responses were enabled for group B, but not for the control group A.

In the first experiment, a total of nine participants were asked to observe a hexapod’s behaviour as it performed six discrete episodes, each lasting between two and four minutes. These episodes were designed to tell a story by creating a situation for the robot that an observer could interpret and respond to: an approach that has often been adopted using human actors [25]. The stories were also intended to be comprehensible from the situational context alone, so that they could be used for both groups. After each episode, a brief semi-structured interview was conducted, during which participants were asked to describe what happened during the scene and any particularly key moments, how they felt about the scenario and the robot’s behaviour, and whether they would have liked to interact with either the robot or its environment. These interviews were intended to ascertain the mode of interpretation participants had adopted when watching the robot, their feelings towards it, and whether they would have liked to intervene in order to assist or hinder it. The results of this study indicated that the expressive robot was attributed with emotion roughly three times more frequently than the non-expressive one, and that expression also appeared to significantly influence the desire for interaction. However, we found no evidence of a link between expression of arousal and a desire for prosocial behaviour on the part of the observer.

Our second experiment was conducted at a local primary school. A total of 180 children took part, drawn from year groups 4–6 (ages 8–11). The event ran over six days, with a single class of approximately 30 children taking part each day. As in the previous experiment, participants were asked to watch a hexapod robot perform a number of ‘scenes’. Group B observed the robot with its arousal model enabled, whereas the control group A viewed it with the model dormant. Videos were used to ensure repeatability. After each scene, the children were asked to complete a brief questionnaire consisting of eight questions. The first five were multiple-choice questions designed to establish the participant’s broad disposition toward the robot, whilst the remaining three requested short written responses describing how the video made them feel, what they would have liked to do if they were in the video, and why.

TABLE I. SUMMARY OF THE PRELIMINARY RESULTS OF OUR SECOND (N=180) EXPERIMENT

Theme                        Yr4 A   Yr4 B   Yr5 A   Yr5 B   Yr6 A   Yr6 B
Attribution of Emotion        1.7%    8.6%    9.5%   12.9%    9.8%   15.5%
Empathy Toward Robot          0%      2.6%    0.9%    4.3%   14.3%   16.4%
Intentional Stance Adopted   11.2%   14.7%   17.2%   30.2%   37.5%   45.7%
Interaction Envisaged        56.0%   67.2%   69.8%   74.1%   78.6%   84.5%
Prosocial Disposition        26.7%   29.3%   22.4%   36.2%   27.7%   52.6%

Our preliminary findings, summarised in Table I, suggest that group B participants were much more likely to adopt an intentional stance when describing the behaviour of the robot, and more likely to experience emotional empathy towards it. Consequently, they were far more likely to suggest prosocial forms of behaviour intended to help the robot when asked how they would have liked to interact with it. A comprehensive analysis of the results is currently underway.

V. CONCLUSIONS AND FUTURE WORK

To date, our work has focussed on kinesics in the context of open-loop interaction: participants describe how they would like to interact with the robot, but there is no continuous feedback cycle. Future work will close the loop by engaging participants in a shared task that requires continuous interaction with the robot. This task will be designed to create tension between the robot’s need to maintain homeostasis and the participant’s desire to achieve other objectives. This sets the stage for us to determine how bodily forms of expression can influence the willingness of humans to accommodate the robot’s needs, even when these conflict with their own.

REFERENCES

[1] B. de Gelder, “Why bodies? Twelve reasons for including bodily expressions in affective neuroscience,” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 364, no. 1535, pp. 3475–3484, 2009.
[2] A. Beck, L. Cañamero, A. Hiolle, L. Damiano, P. Cosi, F. Tesser, and G. Sommavilla, “Interpretation of emotional body language displayed by a humanoid robot: A case study with children,” International Journal of Social Robotics, vol. 5, 2013.
[3] A. Beck, B. Stevens, K. A. Bard, and L. Cañamero, “Emotional body language displayed by artificial agents,” ACM Trans. Interact. Intell. Syst., vol. 2, no. 1, Mar. 2012. https://doi.org/10.1145/2133366.2133368
[4] M. Lewis and L. Cañamero, “Are discrete emotions useful in human-robot interaction? Feedback from motion capture analysis,” in Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII 2013), 2013, pp. 97–102.
[5] K. L. Koay, G. Lakatos, D. S. Syrdal, M. Gácsi, B. Bereczky, K. Dautenhahn, A. Miklósi, and M. L. Walters, “Hey! There is someone at your door. A hearing robot using visual communication signals of hearing dogs to communicate intent,” in 2013 IEEE Symposium on Artificial Life (ALife), April 2013, pp. 90–97.
[6] C. Breazeal, Designing Sociable Robots. Cambridge, MA: MIT Press, 2002.
[7] F. Kaiser, K. Glatte, and M. Lauckner, “How to make nonhumanoid mobile robots more likable: Employing kinesic courtesy cues to promote appreciation,” Applied Ergonomics, vol. 78, pp. 70–75, 2019.
[8] G. Lakatos, M. Gácsi, V. Konok, I. Brúder, B. Bereczky, P. Korondi, and A. Miklósi, “Emotion attribution to a non-humanoid robot in different social situations,” PLoS ONE, vol. 9, p. e114207, 2014.
[9] J. Novikova and L. Watts, “A design model of emotional body expressions in non-humanoid robots,” in Proceedings of the Second International Conference on Human-Agent Interaction (HAI ’14). New York, NY, USA: Association for Computing Machinery, 2014, pp. 353–360.
[10] K. Dautenhahn, “Socially intelligent robots: dimensions of human–robot interaction,” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 362, no. 1480, pp. 679–704, 2007.
[11] K. Scherer, “What are emotions? And how can they be measured?” Social Science Information, vol. 44, no. 4, pp. 695–729, 2005.
[12] P. Ekman, Emotion in the Human Face, 2nd ed. Cambridge: Cambridge University Press, 1982.
[13] J. P. Scott, “The function of emotions in behavioural systems: A systems theory analysis,” in Emotion: Theory, Research, and Experience, R. Plutchik and H. Kellerman, Eds. New York: Academic Press, 1980, vol. 1, pp. 35–56.
[14] C. Izard, Human Emotions. New York: Springer US, 1977.
[15] S. Tomkins, Affect, Imagery, Consciousness. Springer, 1962, vol. 1.
[16] A. Mehrabian and J. A. Russell, An Approach to Environmental Psychology. Cambridge, MA: The MIT Press, 1974.
[17] D. Watson and A. Tellegen, “Toward a consensual structure of mood,” Psychological Bulletin, vol. 98, pp. 219–235, 1985.
[18] H. Lövheim, “A new three-dimensional model for emotions and monoamine neurotransmitters,” Medical Hypotheses, vol. 78, no. 2, pp. 341–348, 2012.
[19] A. Fridlund, Human Facial Expression: An Evolutionary View. San Diego: Academic Press, 1994.
[20] C. Darwin, The Expression of the Emotions in Man and Animals. London: John Murray, 1872.
[21] F. Heider and M. Simmel, “An experimental study of apparent behavior,” The American Journal of Psychology, vol. 57, no. 2, pp. 243–259, 1944.
[22] P. Tremoulet and J. Feldman, “Perception of animacy from the motion of a single object,” Perception, vol. 29, no. 8, pp. 943–951, 2000.
[23] D. Dennett, The Intentional Stance. Cambridge, MA: MIT Press, 1989.
[24] L. Hickton, M. Lewis, and L. Cañamero, “A flexible component-based robot control architecture for hormonal modulation of behaviour and affect,” in Towards Autonomous Robotic Systems, Y. Gao, S. Fallah, Y. Jin, and C. Lekakou, Eds. Cham: Springer International Publishing, 2017, pp. 464–474.
[25] H. Wallbott, “Bodily expression of emotion,” European Journal of Social Psychology, vol. 28, pp. 879–896, 1998.
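The two-hormone mechanism described in the Architecture section can be illustrated with a minimal sketch. The Python below is illustrative only: the update constants, decay rates, class and parameter names, and the particular mapping from arousal to the five kinesic properties are our assumptions, as the paper specifies only the overall structure (a fast, brief E response to external stimuli; a slowly accumulating C response to repeated stress and internal deficits; both modulating stance radius, stance height, step length, step height and speed).

```python
# Minimal sketch of a two-hormone affect model. All constants and the
# arousal-to-gait mapping are hypothetical; only the structure (fast E,
# slow C, modulation of five kinesic properties) follows the paper.
from dataclasses import dataclass


@dataclass
class Gait:
    """The five kinesic properties modulated by the hormones (unit = baseline)."""
    stance_radius: float
    stance_height: float
    step_length: float
    step_height: float
    speed: float


class AffectModel:
    def __init__(self) -> None:
        self.E = 0.0  # epinephrine-like: rapid rise to stimuli, fast decay
        self.C = 0.0  # cortisol-like: slow accumulation, slow decay

    def step(self, stimulus: float, deficit: float) -> Gait:
        """Advance one tick given an external stimulus and an internal deficit, both in [0, 1]."""
        # E reacts strongly to the current stimulus but decays quickly.
        self.E = min((self.E + 0.8 * stimulus) * 0.5, 1.0)
        # C accumulates slowly from repeated stress and internal deficits.
        self.C = min((self.C + 0.05 * (stimulus + deficit)) * 0.98, 1.0)
        arousal = min(self.E + self.C, 1.0)
        # Hypothetical mapping: high arousal -> wider, lower, faster gait,
        # trading energy cost for responsiveness to threats.
        return Gait(
            stance_radius=1.0 + 0.3 * arousal,
            stance_height=1.0 - 0.4 * arousal,
            step_length=1.0 - 0.2 * arousal,
            step_height=1.0 + 0.2 * arousal,
            speed=1.0 + 1.5 * arousal,
        )
```

In such a scheme, a single startling stimulus produces a brief E-driven burst of speed that fades within a few ticks, whereas sustained stress leaves a lingering C level that keeps the gait mildly aroused long after the stimulus ends.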