
Proceedings of the 2007 IEEE Symposium on Artificial Life (CI-ALife 2007)

Exploring the Design Space of Robot Appearance and Behavior in an Attention-Seeking 'Living Room' Scenario for a Robot Companion

M. L. Walters, K. Dautenhahn, R. te Boekhorst, K. L. Koay, S. N. Woods

Abstract—This paper presents the results of video based Human Robot Interaction (HRI) trials which investigated people's perceptions of different robot appearances and associated attention seeking features and behaviors displayed by the robot. The methodological approach highlights the 'holistic' and embodied nature of robot appearance and behavior. Results show that people tend to rate a particular behavior less favorably when the behavior is not consistent with the robot's appearance. It is shown how participants' ratings of robot dynamic appearance are influenced by the robot's behavior. Relating participants' dynamic appearance ratings of individual robots to independently rated static appearance provides support for the left hand side of Mori's proposed "uncanny valley" diagram. We exemplify how to rate individual elements of a particular robot's behavior and then assess the contribution of those elements to the overall perception of the robot by people. Suggestions for future work are outlined.

I. INTRODUCTION

Most robots that are currently commercially available for use in a domestic environment, and which possess features allowing interaction with humans, are generally orientated towards toy or entertainment functions. In the future, a robot companion which is to find a more generally useful place within a human oriented domestic environment must satisfy two main criteria [1]:
1. It must be able to perform a range of useful tasks or functions.
2. It must display socially acceptable behavior.
[Manuscript received November 10th, 2006. The work described in this paper was conducted within the EU Integrated Project COGNIRON ("The Cognitive Robot Companion") and was funded by the European Commission Division FP6-IST Future and Emerging Technologies under Contract FP6-002020. All authors are with the Adaptive Systems Research Group, University of Hertfordshire, College Lane, Hatfield, Herts, UK. Email: {M.L.Walters, K.Dautenhahn, R.teBoekhorst, K.L.Koay, S.N.Woods}@herts.ac.uk.]

The technical challenges in getting a robot to perform useful tasks are extremely difficult, and many researchers are currently working in the areas of navigation, manipulation, vision, speech, sensing, safety, integration, physical planning and so on that will be required to perform useful functions, e.g. in a home environment. The second criterion is arguably at least as important as the first, because if the robot does not exhibit socially acceptable behavior (e.g. if it is annoying, irritating, unsettling or frightening to human users), then people will reject the robot no matter how useful its performance. Therefore, it is important to establish how a robot can behave in a socially acceptable manner, and this is the focus of much current research in the area of human-robot interaction. An excellent overview of socially interactive robots is provided in Fong et al. [2]. Recent studies into human reactions to robots include Thrun [3], Nakauchi & Simmons [4], Goetz & Kiesler [5], Severinson-Eklundh et al. [6] and Scopelliti et al. [7]. It is to be expected that the perception of a robot's social behavior will depend to a large extent on its appearance. It is possible to place robots on an anthropomorphic appearance scale which varies from mechanical-looking to human-like appearance, along the lines suggested by Woods et al. [8] and Goetz et al. [9]. Hinds et al.
[10] have studied the effect of robot appearance on humans carrying out a joint task with a robot. Mechanical-looking robots are treated less politely than robots with a more human-like appearance. Also, humans treat mechanical-looking robots in a subservient way (i.e. less socially interactively) compared to more human-looking robots. Moreover, expectations are lower with regard to abilities and reliability for mechanical-looking robots. Most currently commercially available research robots tend to have a somewhat mechanical appearance, though some incorporate various humanoid features such as arms, faces, eyes and so on. Some research robots, often referred to as androids, are very human-like in appearance, though their movements and behavior fall far short of emulating those of real humans. Mori [11] proposed that people will find robots more familiar as they exhibit increasingly human-like characteristics. However, at a certain point the effect becomes repulsive, due to robots that on the one hand look very similar to humans, but whose behavior on the other hand exposes them as not being human. This effect can be illustrated by means of Mori's diagram (Fig. 1), where the shape of the curves gives rise to the term 'uncanny valley' to describe the repulsive effect. Mori's original proposal claims that the 'uncanny valley' effect is a feature of inanimate likenesses, but is even more pronounced for robots, puppets and automata which actually exhibit movement. Therefore, according to Mori, although robot appearance is important with regard to familiarity and social acceptance, the actual quality and content of a robot's movements are even more important. Mori argued that robot appearance and behavior must be consistent with each other. At the extreme of high fidelity appearance, even slight inconsistencies in behavior can have a powerful unsettling effect.
Many roboticists, such as Ferber [12], argue that there is conflicting evidence for the right hand side of Mori's "uncanny valley" diagram, and research continues into the area of human-like robots or androids. For example, Minato et al. [13] have built an android robot in order to study how humans interact with robots which have a very human-like appearance. Inspired by Mori's [11] observations on the 'uncanny valley', both Goetz et al. [9] and Minato et al. [13] have proposed that if a particular robot's appearance and behavior were consistent and more human-like, but not to the extent that the 'uncanny valley' was reached, it would be more acceptable and effective at interacting with people (cf. MacDorman [14], Woods et al. [8]).

Fig. 1. Mori's uncanny valley diagram (simplified and translated by K. F. MacDorman – GFDL).

Research has shown that humans do indeed respond to certain social characteristics, features or behaviors exhibited by computers and non-human-like robots (Breazeal [15], Kanda et al. [16], and Okuno et al. [17]). Or perhaps they react socially to certain characteristics of computers and non-human-like robots (as they do to their cars and any other contraption, for that matter)? In other words, the social attitude is due to humans' attributing tendency rather than to anything "social" in the design of artifacts. Reeves and Nass [18] provided evidence that in interaction with computer technology, people exhibit aspects of social behavior towards computers. A study by Friedman et al. [19] has shown that while people in many ways view an Aibo robot like a dog, they do not treat and view it in precisely the same way as a real, living dog (e.g. with regard to moral standing). Thus, as long as robots can still be distinguished from biological organisms, which may be the case for a long time to come, it is unlikely that people will react socially to robots in exactly the same ways as they might react to other humans or other living creatures in comparable contexts (Norman [20], Dryer [21], Khan [22], Dautenhahn [23], and Dautenhahn et al. [24]).

Related to the above issues, the present study addressed two main research questions: 1) What is the importance of consistency between robot appearance and behavior for less human-looking robots? 2) Would people prefer more human-like appearance and behavior in robots that they interact with?

The context chosen for the study and associated HRI trials was that of a domestic robot attracting a human's attention using a combination of visual and audible cues. Typically, when carrying out a study of this type, the various features involved (in this case appearance, sounds, flashing lights and manipulator gestures) would be isolated into a number of separate conditions, and a series of tests performed with the various permutations of conditions in order to achieve statistically valid results. However, it is not possible to perform this type of study using robots, since the various features of a robot (e.g. appearance, manipulator type, head type, speech or sounds etc.) cannot be isolated from each other. For example, only a robot with a human-like arm will physically be able to perform human-like gestures. Also, each particular robot (e.g. a 'humanoid-looking robot' or a 'mechanical-looking robot') has an overall appearance which is different from the sum of its individual parts. If any one part or behavior is changed, effectively this will create a different robot. If individual robot component parts and behaviors were examined in isolation (even in cases where this were possible, e.g. varying a robot's speech), the concept of a 'robot' would be lost. It is therefore not advisable to consider any one aspect of a robot (such as a particular gesture, speech quality, sound or any other part or behavior) in isolation from the rest of the component parts and behaviors which together make up the complete robot.

II.
METHOD AND PROCEDURE

Previously, studies of this type have employed live human-robot experiments in which humans and real robots typically interact in various relatively controlled scenarios [25][26]. These live HRI trials are generally complicated and expensive to run, and usually test a relatively small sample of possible users. The methodology chosen was therefore adapted from that employed in previous work, in which the results obtained from participants who view a video recording of another person interacting with a robot proved comparable to those obtained from participants in live interactions. For full details see Woods et al. [27] and Woods et al. [28], whose results justify our choice of video-based trials in this study. Applied to the present study, the method consisted of creating three video recordings, edited to show exactly the same scenario but each using a different robot. The three robots (Fig. 2) were designed by the research team. The robots' static appearances (from photographs) were rated on an appearance scale by a panel of 26 researchers from various disciplines, including physics, computer science and astronomy, and various administrative staff at the University. Fig. 3 shows the mean ratings for each robot, the corresponding standard errors and the 95% confidence interval bands. The scale ranged from very mechanical-looking (1) to very human-looking (20). A Friedman non-parametric ANOVA showed the differences to be highly significant (Chi Sqr. (N = 27, df = 2) = 44.78431, p < .00001). In most cases, the ranking order of the robots was the same, and the three robots were labeled according to their mean static appearance ratings: Mechanical (mean = 3.67), Basic (mean = 6.63) and Humanoid (mean = 12.22).
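As a rough illustration of the Friedman non-parametric ANOVA used for the panel's static-appearance ratings, the following Python sketch runs such a test with SciPy. The ratings below are made up for demonstration and are not the study's data:

```python
# Illustrative only: hypothetical ratings by 27 raters of the three robots
# on the 1 (very mechanical-looking) to 20 (very human-looking) scale.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(42)
mechanical = rng.integers(1, 6, size=27)    # clustered at the mechanical end
basic = rng.integers(6, 11, size=27)        # intermediate ratings
humanoid = rng.integers(11, 17, size=27)    # clustered toward human-looking

# Friedman test: a non-parametric repeated-measures ANOVA over three
# related samples (each rater scores all three robots).
stat, p = friedmanchisquare(mechanical, basic, humanoid)
print(f"Chi Sqr (N=27, df=2) = {stat:.2f}, p = {p:.3g}")
```

Because each rater scores all three robots, the test ranks the three scores within each rater and asks whether the rank sums differ more than chance would allow; it needs only ordinal data, which suits rating scales.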
Note that these names are simply used as labels to distinguish the three robots from each other, as none actually looked particularly human-like in appearance.

Fig. 2. The three robots used for the video based trials. (Mechanical robot: simple gripper, camera head, beep sound. Basic robot: simple arm, simple head, mechanical voice. Humanoid robot: human-like arm, detailed head, human voice.)

Fig. 3. Panel ratings of the robot static appearances on the mechanical-human appearance scale (1 = mechanical-looking to 20 = human-looking; means with SE and 95% confidence intervals).

The robots' static appearance (as judged from photographs) is not the same as the robots' appearance experienced by the participants in the HRI trial. The robots in the trial videos were moving, and the perceived robot appearance could therefore be considered to be dynamic appearance (that is, including the behavior of the robot). Thus, a dynamic appearance rating is effectively an assessment of the robot as a whole, including not just the robot's static appearance but also any movements or other robot behaviors and expressions observed.

For creating the videos of the three scenarios, each robot displayed a repertoire of attention seeking cues and behaviors corresponding to its respective robot features. Three different attention-seeking mechanisms were used: manipulator movement, lights, and sound. The manipulators differed between the three robots: The Mechanical-looking robot was fitted with a simple one Degree of Freedom (DoF) gripper which was able to move up or down only. The Basic robot had a simple (one DoF) arm fitted with a compound movement which allowed the robot to lift the arm and make a pointing gesture. The Humanoid was fitted with two arms, each of seven DoF, and was able to make a more human-like waving gesture. Note that it is impossible for either the lifting or pointing arms to make a waving gesture, and conversely, the human-like arms could not easily make a simple lifting or pointing gesture comparable to the actuators of the two other robots.

In addition to the movement of the manipulator, visual cues were used as attention-attracting mechanisms: The Mechanical-looking robot was equipped with a pan and tilt camera unit, fitted with a single flashing light. The Basic robot had a simple head with two flashing lights in place of eyes, and the Humanoid robot had multiple flashing lights in the place of mouth and eyes. Each robot also provided a sound. In the case of the Mechanical-looking robot, a series of two beeps was used. The Basic robot used a poor quality synthesized voice. A high quality recorded human voice was used for the Humanoid robot. For both synthesized and human voice, the speech content was identical and consisted of the phrase "There is someone at the door." These various attributes to be tested for each of the three robots were therefore categorized as: (dynamic) appearance, gesture, light signal, and sound signal.

It should be noted that the appearance and (attention-seeking) behavior of the robots could not be studied independently in different conditions, due to the embodied nature of the robots. For example, if a robot with 'humanoid appearance' speaks with a mechanical voice, then it violates the consistency of appearance and behavior: it will no longer be the 'humanoid' robot that people are judging, but 'something else'. This 'holistic' nature of dynamic robot appearance does not allow a clear decomposition of different robot appearance and behavior features, an approach actually required to perform valid statistical analyses on the different independent features. This exemplifies one of the many methodological challenges that human-robot interaction researchers are faced with.
At the beginning of each trial, an introduction video was shown to the participants that included background information about the work of the research group, the purpose of the current trial, and detailed instructions for participating in the experiment. As these instructions were recorded, consistency in administering the tests was enhanced. An experiment supervisor was on hand to answer any further questions and to repeat the instructions if necessary. After the introductory video was played, the main trial videos were shown to the participants. The trial videos followed the same scenario, which consisted of the following sequence of scenes:

1) A person is relaxing on a sofa in the living room, listening to loud music. (Fig. 4a)
2) A visitor approaches the front door and rings the doorbell. (Fig. 4b)
3) The robot (Mechanical, Basic or Humanoid, for each of the three videos) responds to the doorbell, and then acts as if it had assumed that the human has not heard it. (Fig. 4c)
4) The robot enters the living room and approaches the human. This part of the scenario was shown as viewed from the position of a third party. (Fig. 4d)
5) The video then switches to the viewpoint of the human (on the sofa), looking directly at the robot. The robot then performs its respective attention seeking behaviors (light signal, gesture and sound signal) to indicate that a human response is required. (Fig. 4e)
6) The human is then seen following the robot out of the room, and then opening the door for his visitor. (Fig. 4f)

Fig. 4. Still photographs (a-f) captured from the video based HRI trial videos.

The videos were taken in the University of Hertfordshire Robot House, a naturalistic home environment for Human-Robot Interaction trials [1][28]. The three videos were shown to a total of 79 undergraduate students, in three separate group sessions ranging in size from 20 to 30 individuals at a time. The participants filled in the questionnaires individually. In order to reduce social facilitation effects [29], the group sessions did not involve any discussion of the main trial videos or of how participants rated the different robots. The participants signed consent forms and provided basic demographic details, including background, gender, handedness and age, before they were exposed to the introductory video. They were then shown the three main trial videos of a robot attracting attention from a person, featuring the Mechanical, Basic and Humanoid robots, with each group seeing the videos in a different order. After the three videos were displayed, a slide showing the three robots (Fig. 2) with their names and features was projected on the main screen as an aid to participants' memory as to the identity of the robots in the videos. The participants were then asked to fill in a questionnaire in order to collect their opinions of and preferences for the three robots and the various attention seeking behaviors. Details of the relevant questions from the questionnaire are provided below in the Results and Analysis section. As there were only three group video sessions, not all possible permutations of video presentation order could be covered.

III. TRIAL RESULTS AND ANALYSIS

For reasons discussed previously, it was not possible to fully isolate and cross-combine the various appearance and attention seeking behaviors, as the robot features tested were not truly independent. For analysis purposes, it was assumed that dynamic robot appearance would be closest to an independent variable. The other attention seeking behaviors would then be perceived by the human test participants as either consistent or inconsistent with the overall dynamic appearance of each robot.
To measure this, each participant provided a set of ratings on a Likert scale (1 = Dislike a Lot, 3 = Neutral, 5 = Like a Lot) for their preference for each robot's (dynamic) appearance, light signal, sound signal and gesture behavior. For example, the Mechanical-looking robot exhibited a single flashing light, a beep sound and a simple lifting gripper gesture; participants rated their preference for the dynamic appearance and these three attention seeking behaviors of the Mechanical robot. In the same way, preference ratings for the twin flashing lights, the low quality synthesized voice and the pointing arm gesture were obtained for the Basic robot. The multiple flashing eye and mouth lights, the high quality (recorded) human voice and the waving arm gesture were likewise rated for the Humanoid robot. Friedman non-parametric ANOVAs for repeated measurements were performed on all the participants' ratings.

A. Robot Appearance Ratings

Highly significant differences were found for the dynamic appearance scores (Chi Sqr = 33.10425, N = 76, df = 2, p < .000001). The mean results are illustrated in Fig. 5, along with standard error and 95% confidence interval bands. In general, the participants' ratings of robot dynamic appearance indicated that they preferred the Humanoid robot overall, followed by the Basic robot and finally the Mechanical-looking robot.

Fig. 5. Participants' mean appearance ratings for the three robots (MechApp, BasicApp, HumApp), with the best fit line through the means (mean = 1.8951 + 0.5558*x).

B. Robot Attention Seeking Behaviors

The three sets of attention seeking behaviors employed by the three robots were not truly independent from each other, or from the respective robots' appearances. However, as argued previously, the different dynamic appearances of the three robots can be considered to encapsulate the main overall impression of an individual robot by each trial participant. We therefore used the robots' (dynamic) appearance ratings as a baseline for gauging the contribution of each of the individual attention seeking behaviors. For this purpose, the line marking the best linear fit of the mean appearance preference ratings was drawn (see Fig. 5). (Note that this line only acts as a visual guide to allow easy comparison with the other attention seeking behaviors. Because the order of the three robot types along the horizontal axis is at most ordinal, no conclusions should be drawn about the shape of this line per se.)

Fig. 6. Ratings of the robots' gestures (GripGest, PointGest, WaveGest), with the dynamic appearance line for comparison (cf. Fig. 5).

Fig. 7. Ratings of the robots' light signals (One-Light, Two-Light, Multi-Light), with the dynamic appearance line for comparison.

Fig. 8. Ratings of the robots' sounds (BeepSound, SynthSound, HumSound), with the dynamic appearance line for comparison.

It can be seen that, when compared to the means obtained from the overall appearance ratings, the Humanoid robot's waving gesture is rated at a similar mean value to its dynamic appearance. For the other two robots, the mean for the lifting gripper gesture is rated better than the overall Mechanical robot appearance rating, and the pointing gesture is rated lower than the Basic robot appearance rating (Fig. 6).
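The per-robot summary statistics plotted in Figs. 5-8 (means with standard errors and 95% confidence bands) can be computed as in the following minimal sketch. The Likert ratings here are made up, not the trial data:

```python
import numpy as np
from scipy import stats

# rows = participants, columns = Mechanical, Basic, Humanoid (made-up ratings)
ratings = np.array([
    [2, 3, 4],
    [1, 3, 5],
    [2, 4, 4],
    [3, 3, 5],
    [2, 2, 4],
])

n = ratings.shape[0]
means = ratings.mean(axis=0)
se = ratings.std(axis=0, ddof=1) / np.sqrt(n)   # standard error of the mean
ci = se * stats.t.ppf(0.975, n - 1)             # 95% CI half-width (t dist.)

for name, m, s, c in zip(("MechApp", "BasicApp", "HumApp"), means, se, ci):
    print(f"{name}: mean={m:.2f} SE={s:.2f} 95% CI=[{m - c:.2f}, {m + c:.2f}]")
```

With only a handful of raters, the t-based confidence interval is noticeably wider than the SE band, which is why both are shown separately in the figures.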
The differences in rating between the gestures of the three robot types were highly significant by the Friedman test (Chi Sqr = 25.73799, N = 76, df = 2, p < .000001). The differences between the ratings of the light signals and the sound signals were also highly significant (light signal: Chi Sqr = 25.74, N = 76, df = 2, p < .000001; sound signal: Chi Sqr = 62.86, N = 77, df = 2, p < .000001) (Fig. 7 and Fig. 8). For the light signals, the single light of the Mechanical robot and the two lights of the Basic robot were better liked than their respective appearance ratings. The multiple flashing lights on the Humanoid robot, however, were rated as less liked than the overall dynamic appearance rating might suggest (Fig. 7).

Fig. 9. Robot static appearance ratings (means divided by 4) vs. robot dynamic appearance preference means (overall approval) for the Mechanical, Basic and Humanoid robots.

IV. DISCUSSION AND CONCLUSIONS

In all the results above, any Likert value below 3 implies that a feature or behavior was disliked, and any value above 3 indicates that it was liked overall. The Basic robot's attributes were all close to the neutral value of 3, implying that overall it was not particularly liked or disliked. The Mechanical robot's attributes consistently fell below 3, indicating that overall it was mildly disliked. Another interesting observation is that speech, even of poor quality, was liked, in contrast to simple beeping sounds, which were disliked. Overall, the Humanoid robot's appearance and behaviors were all liked to some degree. However, the multiple flashing eye and mouth lights were not liked to the same degree as the rest of the Humanoid robot's attributes, and were actually rated as less liked overall than the twin flashing lights on the Basic robot.
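The baseline comparison used in these results, judging each attention-seeking behavior against the same robot's dynamic-appearance rating, can be sketched as follows. All numbers are hypothetical, chosen only to mirror the qualitative pattern reported:

```python
# Hypothetical mean Likert ratings (1-5); not the study's actual values.
dynamic_appearance = {"Mechanical": 2.4, "Basic": 3.0, "Humanoid": 3.7}
behavior_means = {
    "gesture": {"Mechanical": 2.7, "Basic": 2.8, "Humanoid": 3.7},
    "lights":  {"Mechanical": 2.6, "Basic": 3.2, "Humanoid": 3.3},
    "sound":   {"Mechanical": 2.0, "Basic": 3.1, "Humanoid": 4.0},
}

# A behavior rated below its robot's appearance baseline risks disappointing
# users whose expectations were set by the robot's overall appearance.
for behavior, per_robot in behavior_means.items():
    for robot, mean in per_robot.items():
        delta = mean - dynamic_appearance[robot]
        label = "above" if delta > 1e-9 else "below" if delta < -1e-9 else "at"
        print(f"{robot} {behavior}: {mean:.1f} is {label} the "
              f"appearance baseline ({delta:+.1f})")
```

The point of the comparison is the sign of each delta, not its magnitude: since the ratings are ordinal, only "above baseline" versus "below baseline" carries a defensible interpretation.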
The left hand side of Mori's original diagram (Fig. 1) illustrates his idea that humans are more approving of robots which have more human-like appearance and behavior (up to a certain point). It is interesting here to plot the panel ratings (from Fig. 3), which judged purely static robot appearances (on a mechanical to human-like looking scale), against the actual dynamic appearance ratings of the HRI trial participants (Fig. 9). In Fig. 9, the independent panel's mean ratings on the mechanical-human appearance scale (range 1 to 20) were divided by 4 in order to show them on the same scale as the trial participants' dynamic appearance ratings. Fig. 9 highlights that the ratings for the robots, for both static and dynamic appearance, increase from the Mechanical-looking to the Basic to the Humanoid robot, thus providing support for the left hand side of Mori's diagram. The fact that participants tended to rate dynamic appearance higher than static appearance also supports Mori's view that robot behavior is important in shaping humans' views of robots. There are insufficient data points (and it would be questionable anyhow, because the dynamic appearance ratings are based on a Likert scale, which is only ordinal) to show whether the relationship between increasingly human-like appearance and human approval is actually linear or follows some other functional relationship. The labeling of the robot types (Mechanical, Basic, and Humanoid) could be open to critique, because it might have influenced the judgments of the subjects. However, the various attributes of each robot were rated separately by participants. That the flashing lights of the "Humanoid" robot were not actually liked as much as the overall appearance of the robot suggests that participants were not unduly influenced by the names used for the three robots. Nevertheless, we feel that any future trial should avoid the use of leading names for the robots to be rated by trial participants.
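The rescaling behind Fig. 9, and a best-fit line of the kind drawn in Fig. 5, could be reproduced as below. The static-appearance means are those reported in Section II; the dynamic-appearance means are illustrative placeholders, not the trial values:

```python
import numpy as np

# Panel's mean static-appearance ratings on the 1-20 scale (from Section II)
static_means = np.array([3.67, 6.63, 12.22])   # Mechanical, Basic, Humanoid
static_rescaled = static_means / 4.0           # map onto the 1-5 Likert range

# Hypothetical mean dynamic-appearance Likert ratings (illustrative only)
dynamic_means = np.array([2.5, 3.0, 3.6])

# Least-squares line through the dynamic means vs. robot index (cf. Fig. 5).
# The robot ordering is only ordinal, so this line is a visual guide, not a
# claim about the functional form of the relationship.
slope, intercept = np.polyfit([1, 2, 3], dynamic_means, 1)
print("rescaled static means:", static_rescaled)
print(f"best fit: mean = {intercept:.4f} + {slope:.4f} * x")
```

Dividing by 4 maps the 1-20 panel scale onto roughly the same 1-5 range as the Likert ratings, which is sufficient for the visual comparison in Fig. 9 even though the two scales measure different constructs.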
These findings have implications for the designers of robots which must interact with humans. Where a robot behavior or feature is rated by humans as less liked or approved of than the robot's overall appearance might suggest, there will inevitably be a degree of disappointment. This may explain why humans rapidly become discontented with toys and robots which have a very interesting and anthropomorphic visual appearance, but prove to be disappointing after actual interaction takes place. The number and range of robots tested in our study is not large enough to provide statistically hard evidence to support the whole of Mori's diagram. Also, none of the robots had an appearance which was human-like enough to trigger the uncanny valley effect, so the results obtained here can only be taken as evidence to support the left hand side of Mori's diagram. More experiments using finer gradations of robot appearances and behaviors are required to provide more extensive evidence, to give more data sample points and to refine the parameters which govern human perception of robot appearance and behavior. However, we hope that the methods used here and the results gained yield useful insights into how to calibrate robot appearance and behavior, so that owners and users of future domestic or companion robots will be less disaffected by design limitations which do not live up to their initial expectations.

ACKNOWLEDGMENTS

Thanks go to Wan Ching Ho for his help in creating the videos used in the trials. Thanks also go to all our colleagues at the University of Hertfordshire who helped administer the trials.

REFERENCES

[1] D. S. Syrdal, K. Dautenhahn, S. N. Woods, M. L. Walters and K. L. Koay, "Doing the Right Thing Wrong - Personality and Tolerance to Uncomfortable Robot Approaches", in Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 06), 2006.
[2] T. Fong, I. Nourbakhsh and K. Dautenhahn, "A survey of socially interactive robots", Robotics and Autonomous Systems, 42, pp. 143-166, 2003.
[3] S. Thrun, "When robots meet people", IEEE Intelligent Systems, 13, pp. 27-29, 1998.
[4] Y. Nakauchi and R. Simmons, "A social robot that stands in line", Autonomous Robots, 12, pp. 313-324, 2002.
[5] J. Goetz and S. Kiesler, "Cooperation with a robotic assistant", in Proceedings of the CHI '02 Conference on Human Factors in Computing Systems, New York, USA, pp. 578-579, 2002.
[6] K. Severinson-Eklundh, A. Green and H. Hüttenrauch, "Social and collaborative aspects of interaction with a service robot", Robotics and Autonomous Systems, 42, pp. 223-234, 2003.
[7] M. Scopelliti, M. V. Giuliani, A. M. D'Amico and F. Fornara, "If I had a robot at home... Peoples' representation of domestic robots", in Designing a More Inclusive World, S. Keates, J. Clarkson, P. Langdon and P. Robinson, Eds., Cambridge, UK: Springer, pp. 257-266, 2004.
[8] S. N. Woods, K. Dautenhahn and J. Schulz, "The design space of robots: Investigating children's views", in Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 04), September, Kurashiki, Japan, 2004.
[9] J. Goetz, S. Kiesler and A. Powers, "Matching robot appearance and behavior to tasks to improve human-robot cooperation", in Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 03), Oct. 31-Nov. 2, Millbrae, Calif., pp. 55-60, 2003.
[10] P. Hinds, T. Roberts and H. Jones, "Whose job is it anyway? A study of human-robot interaction in a collaborative task", Human-Computer Interaction, 19, pp. 151-181, 2004.
[11] M. Mori, "Bukimi no tani (the uncanny valley)", Energy, 7, pp. 33-35, 1970. (In Japanese; see MacDorman [14], Appendix B, for an English translation.)
[12] D. Ferber, "The Man Who Mistook His Girlfriend For A Robot", Popular Science, September 2003.
[13] T. Minato, K. F. MacDorman, M. Shimada, S. Itakura, K. Lee and H. Ishiguro, "Evaluating human-likeness by comparing responses elicited by an android and a person", in Proceedings of the Second International Workshop on Man-Machine Symbiotic Systems, pp. 373-383, 2004.
[14] K. F. MacDorman, "Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it?", in Proceedings of the CogSci 2005 Workshop: Toward Social Mechanisms of Android Science, Stresa, Italy, pp. 106-118, 2005.
[15] C. L. Breazeal, Designing Sociable Robots, Cambridge, Mass.: The MIT Press, 2002.
[16] T. Kanda, T. Hirano and D. Eaton, "Interactive robots as social partners and peer tutors for children: A field trial", Human-Computer Interaction, 19, pp. 61-84, 2004.
[17] H. G. Okuno, K. Nakadai and H. Kitano, "Realizing audio-visually triggered ELIZA-like non-verbal behaviors", PRICAI 2002, LNAI 2417, Lecture Notes in Artificial Intelligence, Springer-Verlag, pp. 552-562, 2002.
[18] B. Reeves and C. Nass, The Media Equation: How People Treat Computers, Television and New Media like Real People and Places, New York: Cambridge University Press, 1998.
[19] B. Friedman, P. H. Kahn Jr. and J. Hagman, "Hardware companions? What online AIBO discussion forums reveal about the human-robotic relationship", Digital Sociability, 5, pp. 273-280, 2003.
[20] D. A. Norman, "How might people interact with agents", Communications of the ACM, 37, pp. 68-71, 1994.
[21] D. C. Dryer, "Getting personal with computers: How to design personalities for agents", Applied Artificial Intelligence, 13, pp. 273-295, 1999.
[22] Z. Khan, Attitude towards intelligent service robots, NADA KTH Technical Report, Stockholm, 1998.
[23] K. Dautenhahn, "Design spaces and niche spaces of believable social robots", in Proceedings of the 11th Annual International Workshop on Robot and Human Interactive Communication (RO-MAN 02), 25-27 September, Berlin, Germany, pp. 192-197, 2002.
[24] K. Dautenhahn, M. L. Walters, S. N. Woods, K. L. Koay, C. L. Nehaniv, E. A. Sisbot, R. Alami and T. Simeon, "How may I serve you? A robot companion approaching a seated person in a helping context", in Proceedings of HRI '06, Salt Lake City, Utah, USA, pp. 172-179, 2006.
[25] M. L. Walters, S. N. Woods, K. L. Koay and K. Dautenhahn, "Practical and methodological challenges in designing and conducting interaction studies with human subjects", in Proceedings of the AISB'05 Symposium on Robot Companions: Hard Problems and Open Challenges in Human-Robot Interaction, 14-15 April 2005, University of Hertfordshire, UK, pp. 110-119, 2005.
[26] M. L. Walters, K. Dautenhahn, K. L. Koay, C. Kaouri, R. te Boekhorst, C. L. Nehaniv, I. Werry and D. Lee, "Close encounters: Spatial distances between people and a robot of mechanistic appearance", in Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids 05), December 5-7, Tsukuba, Japan, pp. 450-455, 2005.
[27] S. N. Woods, M. L. Walters, K. L. Koay and K. Dautenhahn, "Comparing human robot interaction scenarios using live and video based methods: Towards a novel methodological approach", in Proceedings of the Ninth International Workshop on Advanced Motion Control (AMC 2006), Istanbul, March 27-29, pp. 750-755, 2006.
[28] S. N. Woods, M. L. Walters, K. L. Koay and K. Dautenhahn, "Methodological issues in HRI: A comparison of live and video-based methods in robot to human approach direction trials", in Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 06), Hatfield, UK, 6-8 September 2006, pp. 51-58.
[29] S. N. Woods, K. Dautenhahn and C. Kaouri, "Is someone watching me? Consideration of social facilitation effects in human-robot interaction experiments", in Proceedings of the 2005 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA 2005), June 27-30, 2005, Espoo, Finland, pp. 53-60.