Prototyping Realistic Long-term Human-Robot Interaction for the
Study of Agent Migration
K. L. Koay, D. S. Syrdal, W. C. Ho, and K. Dautenhahn, Senior Member, IEEE
Abstract— This paper examines participants’ experiences of
interacting with a robotic companion (agent) that has the ability
to move its “mind” between different robotic embodiments to
take advantage of the features and functionalities associated
with the different embodiments in a process called agent
migration. In particular, we focus on identifying factors that can
help the companion retain its identity in different embodiments.
This includes examining the clarity of the migration behaviour
and how this behaviour may contribute to identity retention.
Nine participants took part in a long-term study, and interacted
with the robotic companion in the smart house twice-weekly over
a period of 5 weeks. We used the Narrative-based Integrated
Episodic Scenario (NIES) framework for designing long-term
interaction scenarios that provided habituation and intervention
phases while conveying the impression of continuous long-term
interaction. The results show that NIES allows us to explore
complex intervention scenarios and obtain a sense of continuity
of context across the long-term study. The results also suggest
that as participants became habituated to the companion, they
found the realisation of migration signaling clearer and felt
more certain of the identity of the companion in later sessions,
and that the most important factor for this was the agent’s
continuation of tasks across embodiments. This paper is both
empirical and methodological in nature.
I. INTRODUCTION
Traditionally, robots have had electromechanical
embodiments with appearances ranging from human-like or
animal-like to machine-like [1]. Since their functionalities are
often constrained by their embodiments, they are often limited
to the specific tasks and the particular environments they were
designed for. It would be beneficial if
the robot’s ‘mind’ (in the sense of memory, decision making,
internal states, personality etc.) was able to change its
embodiment, when needed, to have access to different
capabilities as required by a variety of tasks and working
environments [2][3][4][5][6]. This migration would allow the
agent to utilise different embodiments, including non-robotic
devices such as smart phones, to travel with its user [7], and to
continue to provide assistance while taking advantage of
different embodiments. This would also improve the agent’s
understanding of, and relationship with, the user, thereby
contributing to a sense of continuity and companionship
independently of the agent’s specific embodiment [2]. In
addition, an agent that inhabits more than
one embodiment will be less constrained to a particular
physical and information space [5]. Interacting with the user in
different embodiments and contexts allows the agent a better
understanding of the user, and a stronger sense of contextual
and situational awareness. Over time the agent would establish
its own unique identity, memories and beliefs, and should be
able to maintain a unique and individual relationship with its
user, regardless of its embodiment [2][6][8].

K. L. Koay, D. S. Syrdal and K. Dautenhahn are with the School of
Computer Science, University of Hertfordshire, Hatfield AL10 9AB,
United Kingdom (e-mail: {K.L.Koay, D.S.Syrdal, K.Dautenhahn}@herts.ac.uk).
From the user’s perspective the advantage of migrating an
agent is that it would allow continuous interaction with the
same agent in different robotic embodiments [9], without
having to spend a large amount of time familiarizing and
personalizing a variety of different robots or other electronic
devices.
Identity retention of a migrating agent in different
embodiments is one of the main research challenges we will
address in this paper. The fact that an agent can be present in
different embodiments may make it difficult for the user to
identify their unique agent [10] across those embodiments.
While specific visual cues, such as a colour or a
specific symbol for identifying the individual agent have been
proposed [4][11][13][14], this may not be as straightforward
with a physically embodied artefact like a robot, in particular
if such a robot is what has been termed appearance-constrained
[12]. Also, identity retention in terms of a personal agent (i.e.
robotic companion), should go beyond simple identification,
but should also allow the trust [15] and the relationship [16]
that has been built between the user and companion to be
retained across embodiments.
We believe it is essential that migrating agents are able to
clearly communicate their migration process [4][17][18].
Specific migration signaling can play a key role during a
user-agent habituation period, to convey and establish the
concept of the migrating agent to the user, and to distinguish it
e.g. from processes where one robot delegates a task to another
robot [17]. If the agent is able to help the user establish the
concept of migration, it will be easier for the user to accept
and believe in the technology and learn to identify their agent
in different embodiments.
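To make the idea of a clearly signalled hand-over concrete, a migration step can be sketched as an explicit state transfer between embodiments, with observable departure and arrival cues at each end. This is only an illustrative sketch; all class, method and task names here are hypothetical and not the system's actual implementation.

```python
# Hypothetical sketch of agent migration with explicit signaling.
# The agent's "mind" (preferences, history, current task) is moved
# between embodiments, the source visibly "powers down", and the
# target visibly "wakes up" so the user can follow the hand-over.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AgentState:
    preferences: dict = field(default_factory=dict)
    interaction_history: list = field(default_factory=list)
    current_task: Optional[str] = None  # tasks continue across embodiments


class Embodiment:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities
        self.state = None  # None means this embodiment is hibernating

    def signal_departure(self):
        print(f"{self.name}: dims lights, plays departure tune")

    def signal_arrival(self):
        print(f"{self.name}: blinks LEDs, plays arrival tune")


def migrate(source, target):
    """Move the agent's state from source to target, signaling both ends."""
    source.signal_departure()
    target.state, source.state = source.state, None  # transfer the 'mind'
    target.signal_arrival()
    return target


sf1 = Embodiment("SF1", {"mobile_base", "carrying_tray"})
sf2 = Embodiment("SF2", {"skype_call"})
sf1.state = AgentState(current_task="remind_breakfast")
active = migrate(sf1, sf2)  # SF1 hibernates; SF2 continues the task
```

Because the state object travels whole, the task started in one embodiment can be continued in the other, which is the property the study probes under identity retention.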
Ideally, to address the research issues highlighted above, it
is necessary to conduct a long-term human-robot interaction
(HRI) study where participants are given the opportunity to
use and integrate the migrating agent, as their companion, into
their daily routine at their own homes on a long-term basis,
beyond habituation and after the novelty effect has worn off.
However, it is often not possible to deploy home companion
research prototypes in the participants’ homes as these systems
have not reached the stability and safety levels of
commercially available systems and require constant
supervision by technical experts to intervene in the event of a
system malfunction [19]. Thus, it would be very difficult, if
not impossible, to get ethical approval for conducting such a
study, or consent from the participants to install sensors or
make modifications to their homes. Furthermore, deploying
such systems in participants’ homes would require a
significant amount of resources, which is often not feasible.

W. C. Ho was with the School of Computer Science, University of
Hertfordshire, Hatfield AL10 9AB, United Kingdom.
To overcome these issues, we propose a new scenario
prototyping methodology for designing open-ended HRI
scenarios that aim to provide participants with immersive
long-term interaction experiences similar to those they might
have had in their own homes.
Section II presents the specific research questions addressed
in this paper. The scenario prototyping methodology is
described in Section III, together with the scenarios we created
and implemented for the user study. The apparatus used in this
study is presented in Section IV, followed by the Method,
Results and Conclusion sections in Sections V, VI and VII
respectively.
II. RESEARCH QUESTIONS
The goal of this paper is to study issues related to long-term
interaction with a migrating agent. In particular, we would like
to explore users’ perception of the realisation of migration and
retention of identity by their companion (i.e. migrating agent)
across different embodiments. Realisation of Migration
measures the degree to which participants felt that the
migration process was successfully communicated by the
companion. Retention of Identity measures the degree to which
participants felt they were interacting with the same
companion in the different embodiments.
This section will briefly introduce the NIES framework,
followed by a description of the Robot House scenarios which
were developed using the NIES framework for this study.
A. Narrative-based Integrated Episodic Scenario
Framework
The Narrative-based Integrated Episodic Scenario (NIES)
framework was developed for prototyping immersive and
open-ended HRI experimental scenarios for long-term studies
that are otherwise not feasible due to the constraints associated
with the need to embed complex research prototypes into
participants’ homes.
The NIES scenario prototyping framework is an
improvement over the Integrated Episodic Scenario (IES)
framework [20], proposed and validated previously to provide
a means for prototyping immersive episodic interaction
scenarios for open-ended HRI studies. This is accomplished by
using an overall narrative arc to link the episodic scenarios
together, forming a chain of meaningful events that
participants can relate to and experience during the trials. It
provides a more holistic, continuous and targeted interaction
experience for the participants with the research prototypes.
This is different from the IES framework, where the scenarios
are isolated, disjointed episodes.
The NIES framework introduces a three-phase structure to
guide the design of the main narrative arc, as follows:
1. Demonstrate the research prototypes to the participants.
2. Provide a habituation period for the participants to get
used to the experimental environment and daily routine
structure while exploring, learning and using the
prototypes.
3. Intervene by introducing key events related to the
research questions, providing participants with experience
of the issues in order to elicit well-informed feedback
based on rich user experiences.
The issues outlined above can be formulated into the following
research questions:
RQ1. Do participants feel that the process of migration is
communicated clearly to the user within the
interaction?
RQ2. Do participants feel that they are interacting with
the same agent across different embodiments?
RQ3. What are the most important aspects of an agent to
communicate identity retention?
To study these issues, we instantiated a set of scenarios in
which a human user could interact with one artificial
companion which could inhabit one of three different
embodiments depending on context.
III. SCENARIO DEVELOPMENT
The scenarios used in this study were developed using the
Narrative-based Integrated Episodic Scenario (NIES)
framework, which was intended to provide participants with
an experience similar to living and continuously interacting
with a robotic migrating companion capable of providing
physical and cognitive assistance in a real-life domestic
environment. The scenarios emphasise long-term and
open-ended interactions to ensure that participants are in charge of
their own interactions and that the scenarios are meaningful to
their everyday lives. This is to ensure they will gain the
necessary experiences to provide well-informed feedback
related to the research questions.
The first two phases involve creating a series of scenarios (i.e.
a daily routine) for the interaction sessions that provide
participants with experiences similar to how people explore,
habituate to and utilise new technologies in their own home
environment, before narrative-based interventions are
introduced to elicit their responses.
The third phase focuses on intervention-driven narrative.
Specifically, the creation of a series of scenarios that would
alter the course of events from the first two phases in order to
introduce new situations, which are directly related to the
research questions, for the participants to encounter. One of
the intervention-driven narratives used in this study is to
introduce situations where the companion’s embodiment is
broken and replaced with a different embodiment.
Interaction scenarios created from the narrative provide
real, immersive, open-ended and self-contained interaction
sessions that narratively link previous and subsequent sessions
together, allowing for a coherent overall experience with
continuity between sessions, similar to what participants
would experience if they had the system installed and used it
in their own homes. Within these interaction scenarios, the
participant takes the role of the main character, acting as
themselves, in a series of interrelated narrative-based
interaction sessions in which they live and interact with the
technology under investigation (i.e. a robotic migrating
companion) in a naturalistic environment.
B. Overall Scenario
The overall narrative used in this study was ‘a week in the
life of a user with their newly acquired robotic companion’. It
consisted of 9 episodic scenarios in total, created to support the
3 main phases (see TABLE I), allowing the user to explore,
habituate to and use their new robotic companion across
various possible user activities in open-ended episodic
interaction sessions.
TABLE I. OVERALL SCENARIOS

Phase                           | Interaction Session
Introduction and Demonstration  | 1. Introduction and demonstration
Exploration and Habituation     | 2. Breakfast at home
                                | 3. Late afternoon
                                | 4. Paperwork afternoon
Intervention                    | 5. Departure of mobile embodiment SF1
                                | 6. A day with stationary embodiment SF2
                                | 7. Arrival of a loan mobile embodiment SF3
                                | 8. Morning routine with, and departure of, embodiment SF3
                                | 9. Return of the original mobile embodiment SF1
Each episodic scenario has its own narrative, but represents
particular inter-related key events of the user’s life that are
related to the robotic companion within the overall narrative
time-line to achieve consistency, coherency and realism of the
sub-narratives (i.e. close links between different open-ended
sessions). These sub-narratives were realised as interaction
sessions.
The overall scenario for this study involves a participant,
an experimenter whose sole responsibility is taking the role of
a technician in the narrative, and the companion (a migrating
agent) with its three different robotic embodiments (see Figure
1), which are variations of the “Sunflower” robot. Throughout
these scenarios, the participant interacts with a single
companion, which inhabits only one embodiment at any given
time; any embodiment the companion does not currently
inhabit is hibernating.
While the system could have supported multiple companions
sharing these embodiments between them, this was not
relevant to this particular usage scenario.
Introduction and Demonstration phase
This phase represents the first interaction scenario where a
user who has just bought a new robotic companion is being
introduced to the companion and its embodiments (SF1 –
Mobile Sunflower and SF2 – Stationary Sunflower) by a
technician coming to the user’s house.
During this interaction session, the technician
demonstrates the companion’s functionalities, and shows the
participants how to interact with the companion. The
participants are taught how to use the companion’s Touch
Screen Graphical User Interface (GUI) to personalise the
companion’s (SF1) expressive behaviours, to initiate the
migration process for the companion to migrate itself between
the two embodiments (SF1 and SF2), to approve the
companion’s request to migrate to a different embodiment to
achieve its task (e.g. from stationary SF2 to mobile SF1 in
order to follow the participant to the kitchen), and to
command the companion to
carry objects with its tray and to go to various locations in the
Robot House. The first session lasted about one hour per
participant.
Exploration and Habituation phase
This phase provided participants with experience of a) a
typical daily routine, and b) the fact that the companion uses
different embodiments to perform different tasks when
assisting the user with his/her activities. Due to the open-ended
nature of these interaction sessions, each session lasted
between half an hour and an hour, depending on participants’
behaviours and the decisions they made within the trial
sessions.

Figure 1. The three Sunflower robots used in the study. (a) The
mobile Sunflower SF1 is located in front of the sofa, while the
stationary Sunflower SF2 is located behind the sofa; (b) the mobile
Sunflower SF3.
The aim was to expose and familiarise the participants with the
robotic companion technology focusing on its functionalities
and the migration concept through direct exploration and
participation.
Breakfast at home session focuses on a morning routine
where the narrative begins with the user waking up and then
sitting on the sofa. The interaction session begins with the
companion approaching and suggesting to the user to have
breakfast.
Late afternoon session focuses on a late afternoon routine
that begins with the user having just returned home and sitting
down on the sofa, resting. The interaction session begins with
the companion approaching the user and suggesting to have a
drink.
Paperwork in the afternoon session focuses on an after
lunch routine that begins with the user returning to the dining
table to continue doing paperwork. The interaction session
begins with the companion approaching the user and
suggesting to have a drink.
Intervention phase
This phase focuses on exposing participants to scenarios
related to research questions in order to elicit informed
feedback. A total of five different interaction sessions with
inter-related narratives were developed, with the goal of
enabling participants who had been habituated to the robotic
companion to experience:
- Losing some of the companion's hardware features and
functionalities associated with a particular (simulated)
faulty embodiment when it was removed.
- A new physical presence of the companion, associated
with a replacement embodiment that has similar physical
characteristics to the original one.
- Different robot embodiments (mobile versus static) and
similar robot embodiments (mobile versus mobile)
performing the same tasks.
- The process of re-acquaintance with the original physical
embodiment when it is returned after (simulated) repair.
- The advantages and disadvantages of the migration
technology.
The summaries of the five narratives, in chronological order,
are as follows:
Departure of mobile embodiment SF1 session is based on
the morning routine participants experienced during the
Exploration and Habituation phase. The only difference is the
inclusion of an additional intervention narrative that shows the
mobile embodiment SF1’s LED display panel malfunctioning
and the robot being required to be sent back for repair. This
involves the technician making a video call to the user and
arranging to come and retrieve SF1 for repair. The user then
witnesses the technician commanding the companion to
migrate from SF1 to the stationary embodiment SF2 before
taking SF1 away for repair.
A day with the stationary embodiment SF2 session uses the
same morning routine from the Exploration and Habituation
phase. The effect of the intervention narrative from the
previous session provides a different interaction experience for
the participants and their companion in this session. The
companion now resides in the stationary embodiment SF2 and
has lost all of its physical assistance functionalities due to the
lack of mobility. The companion is still able to provide
cognitive prosthesis functionalities but instead of exhibiting
behaviours associated with the previous embodiment SF1
(approaching the user to show the notification message), now
it has to wait for the user to approach embodiment SF2 before
it can show the notification message.
The arrival of a loan mobile embodiment SF3 session is
based on the late afternoon routine from the Exploration and
Habituation phase. The intervention narrative here introduces
a temporary replacement mobile embodiment, SF3, into the
overall scenario at the end of this session. The majority of the
interactions between the user and the companion in this
session are still via embodiment SF2, until
the very end of the session, when the technician arrives with
the replacement embodiment SF3. The user is then asked by
the technician to migrate the companion from embodiment
SF2 to SF3 for a test run.
Morning routine with, and departure of, embodiment SF3
session uses the same morning routine. The aim is to provide
participants with experiences necessary for them to compare
their interaction experiences with the companion when one of
the embodiments they are familiar with was replaced with
another similar embodiment. In this session, the robotic
companion can migrate between stationary embodiment SF2
and the replacement mobile embodiment SF3 and regains its
full functionalities, including those associated with mobility,
such as approaching the user to show them notification
messages, carrying objects for the user from one location to
another, etc. The first part of the narrative for this session is
the same as the morning routine sessions they experienced
previously in the Exploration and Habituation phase and the
first two sessions in the Intervention phase. This provides
participants with a control routine they are familiar with. The
familiar routine allows participants to focus on their
interaction experiences with the companion after the
introduction of the replacement embodiment SF3. The
narrative ends with another intervention in which the
technician makes a call informing the user that embodiment
SF1 has been fixed and that he will come to
collect embodiment SF3.
Return of the original mobile embodiment SF1 session was
designed to allow participants to re-experience their
interaction with SF1 in order for them to compare this
experience with their previous interaction experience with
SF3. The intervention narrative aims to provide a ‘happy
ending’ to the overall narrative by giving participants a
pleasant experience of seeing and interacting with the original
mobile embodiment SF1 again. This session uses the morning
routine that participants are familiar with plus an added
intervention narrative to allow the technician to deliver
embodiment SF1 to the user and leave after the user has
verified that SF1 is in working order. The morning routine then
continues as described previously with the companion utilising
both embodiments SF1 and SF2 to assist the user with their
activity of daily living. This session creates a comparable
interaction experience necessary for the participants to provide
constructive feedback with regards to their experiences with
the companion when it migrates and utilises embodiments
SF1, SF2 and SF3.
IV. APPARATUS
A. UH Robot House
The UH Robot House (RH) is a residential house that has
been adapted to provide a real-life domestic setting for the
study of HRI [20][21][22][23] and the development of smart-home technology [24] for robot-assisted living [25]. It has
been equipped with a real-time electrical energy monitoring
system to detect the activation and de-activation of common
household electrical appliances such as the TVs, microwaves,
computers and table lamps. The sensory information is used
by the companion to provide appropriate assistance in
response to the activity of the user or events as they happen in
the environment, e.g. approaching the user and offering
carrying assistance when they open the fridge door, or
informing the user when the kettle has boiled or when a visitor
rings the doorbell.
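The event-driven assistance described above can be sketched as a simple mapping from sensed appliance events to companion responses. The event and action names below are illustrative only and are not the Robot House API.

```python
# Illustrative mapping from Robot House sensor events to companion
# responses; all event and action names are hypothetical.
RESPONSES = {
    "fridge_door_opened": "approach user and offer carrying assistance",
    "kettle_boiled": "inform user the kettle has boiled",
    "doorbell_rang": "inform user a visitor is at the door",
}


def on_sensor_event(event):
    """Return the companion's response to a sensed household event."""
    return RESPONSES.get(event, "no action")


print(on_sensor_event("kettle_boiled"))
```

In the real system the appliance events are derived from the house's electrical energy monitoring rather than being named directly.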
B. Sunflower Robot
The Sunflower robot, shown in Figure 2, was designed and
developed as a highly expressive robotic embodiment [22] for
the study of robotic home companions in the LIREC project
[26]. It has a mobile base, a square body and a cylindrical head.
The body of the Sunflower robot is equipped with a diffuse
LED display panel in the shoulder as well as a slide-out
carrying tray with an integrated GUI on its ‘chest’. The robot’s
head is mounted on a 4-degree of freedom articulated neck and
has a non-animated face which was built with two static eyes
and a webcam appearing as its nose.
It has four expressive channels and is able to attract the
user’s attention and provide simple non-verbal feedback,
either individually or in combination, using light
(multi-coloured light signals), sound (midi tunes), and the
movements of its head and body. In addition, it can also use
its GUI to
interact with the user. It is integrated into the Robot House
computational infrastructure and has the competencies to
navigate autonomously, detect the user’s activity (e.g. sitting
on the sofa, turning on the kettle, opening the fridge door) and
detect events happening in the house (e.g. an incoming Skype
call or the doorbell ringing), as well as provide cognitive and
physical assistance. Its tray can be used to carry objects, and
an attached tablet computer (GUI) can display messages.
The companion can use any of the Sunflower embodiments
shown in Figure 1. The Sunflower robot uses its GUI to
communicate with users. For example, in a typical robot-initiated task of reminding the user to have a drink, the robot
will approach the user, perform an attention seeking behaviour
(i.e. moving its head and body, and blinking its LED panel,
while playing short midi tunes) to attract the user’s attention,
then extend its tray to show the suggestive message on the user
interface, with options for the user to agree or reject the
suggestion.
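The robot-initiated reminder described above is an ordered behaviour sequence. The sketch below lists the steps as labels so the sequence can be inspected; the function and step names are hypothetical, and the real robot would drive motors, LEDs and the tray instead.

```python
# Sketch of the robot-initiated drink-reminder sequence described above.
def remind_user(suggestion):
    """Return the ordered behaviour steps for a robot-initiated reminder."""
    steps = []
    steps.append("approach user")                # navigate to the sofa
    steps.append("attention-seeking behaviour")  # head/body movement, LED blinks, midi tune
    steps.append("extend tray and show message: " + suggestion)
    response = "agree"  # in the real system, read from the GUI buttons
    steps.append("user response: " + response)
    return steps


seq = remind_user("Would you like a drink?")
```

The sequence mirrors the text: approach, attract attention, present the suggestion on the GUI, then wait for the user to agree or reject.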
Users can also initiate interaction with the companion via
the same user interface or by entering their preferences such as
drink and food preferences for breakfast etc., as well as
personalising the companion’s expressive behaviours (lights,
colours, movements). The companion can remember the users’
settings regardless of which embodiment it has migrated to.
The user can ask the companion to go to specific locations (i.e.
Kitchen, Living room etc.) in the RH as well as request it to
migrate to a different embodiment via the GUI. Depending on
the context or task the companion is going to perform, it may
ask the user for permission to migrate to a different
embodiment.

Figure 2. Example of a user interacting with the Sunflower robot.
The main difference between SF1 and SF2 is that SF2 does
not have a mobile base. The main differences between SF1 and
SF3 are the lack of an articulated head on SF3 and the GUI
that is located on its shoulder. Details of all these differences
are shown in TABLE II.
TABLE II. DIFFERENCES BETWEEN THE THREE EMBODIMENTS

                        SF1              SF2              SF3
Expressive channel
  Head movement         Yes              Yes              No
  Body movement         Yes              No               Yes
Navigation capability   Yes              No               Yes
GUI location            Front of drawer  Front of drawer  On shoulder
Functionalities
  Carrying objects      Yes              No               Yes
  Navigation            Yes              No               Yes
  Skype call            No               Yes              No
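The embodiment differences in TABLE II can be encoded as a simple capability lookup; a task then determines which embodiment the companion needs to inhabit. The selection rule sketched here (pick the first available embodiment supporting the task) is an illustration, not the system's actual policy.

```python
# Capability table for the three embodiments, based on TABLE II.
CAPABILITIES = {
    "SF1": {"head_movement", "body_movement", "navigation", "carrying_objects"},
    "SF2": {"head_movement", "skype_call"},
    "SF3": {"body_movement", "navigation", "carrying_objects"},
}


def embodiment_for(task, available=("SF1", "SF2", "SF3")):
    """Return the first available embodiment that supports the given task."""
    for name in available:
        if task in CAPABILITIES[name]:
            return name
    return None  # no available embodiment supports this task


print(embodiment_for("skype_call"))
```

This mirrors the study's intervention phase: with SF1 removed for repair, a carrying task can only be served once the loan embodiment SF3 becomes available.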
V. METHOD
A. Measures
Participant responses to the overall scenario were
measured using a Scenario Acceptance Scale, which is
described in the appendix (Scenario Evaluation Scale) of [20]
and is intended to measure the participants’ acceptance of the
robot within the scenario. It consists of 10 Likert-scale items,
and is scored so that a middle “neutral” response on all items
would receive a score of 50.
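For illustration, assuming ten 9-point items where the midpoint response is 5 (the item format is an assumption; the paper only states the all-neutral total of 50), the scoring rule could be realised as a simple sum:

```python
def scenario_acceptance_score(item_responses):
    """Sum ten Likert items. With an assumed 1-9 scale, an all-neutral
    (5) response totals 50, matching the described midpoint score."""
    assert len(item_responses) == 10, "scale has exactly 10 items"
    assert all(1 <= r <= 9 for r in item_responses), "responses on 1-9 scale"
    return sum(item_responses)


print(scenario_acceptance_score([5] * 10))
```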
Responses to the migration behaviours were measured
along two dimensions: Identity Retention, the perception of
one agent in the different embodiments; and Migration
Realisation, how clear the participants found the behaviour
signaling in the process of migration. These questions were
used in a previous study and are described in [20]. Participants
were asked to complete these after Sessions 2, 4, 7 and 8.
In addition, participants were asked the following question
“Which of the following factors was the one that best
communicated to you that it was the same companion in the
different bodies?”:
- Its knowledge of your interaction history.
- The personalised attention-seeking behaviour.
- The way that it migrated out of, and into, its different
bodies.
- Its knowledge of your preferences.
- Its ability to continue a task which it had started in a
different body.
These categories were based on open-ended responses to
similar questions in previous studies investigating migration
(for instance [20][27][28]). This question was asked after all 9
sessions had been completed, as part of an evaluation in which
the participants were asked to consider the study as a whole.
Finally, participants were also asked two open-ended
questions in the questionnaire: “Did you feel as if you were
interacting with the same companion in the different robot
bodies?”, and “How could it better communicate that it is the
same companion?”.
B. Demographics
There were 9 participants, 6 female and 3 male, recruited
via an advertisement on the University of Hertfordshire
Intranet. The participants were between 21 and 32 years of
age, with a median age of 25 years and an interquartile range
of 6 years.
C. Procedure
During the first session, prior to the trial, the experimenter/
facilitator welcomed the participants to the RH, introducing
himself and a second experimenter whose responsibility was
to initiate and monitor the systems during the trial from a small
office (i.e. a converted bedroom not used in the study) and who
also acted as the technician in the narrative. The facilitator then
proceeded to introduce and familiarise participants with the
RH (i.e. the layout of the house, including the location of the
dishes, cutlery, cups, tea and coffee, etc.) and how to operate
the house’s electrical appliances.
Below is the procedure for each interaction session.
1) Introduction
The facilitator would introduce the scenario, grounding the
interaction in a narrative which provided context.
An example of a typical short narrative used to set-up the
context for a morning interaction session is shown below:
In the introductory session you gave us some preferences
for what you would like to do in the early morning. Your
robotic companion has these preferences and will apply them
when interacting with you.
Now imagine that you have woken up in your bedroom.
When you are ready you will come out of the bedroom, sit
down on the sofa and log-in to the robot with your user
account and password. The robot will then begin today's
session.
After giving this information to the participant, the
facilitator leaves the participant on the sofa and goes to a
facilitator room, to let the participant interact with the robot
and the scenario by themselves.
2) Interaction
This part consisted of the actual interaction scenario. The
robot and the participant would interact throughout this
scenario without the involvement of the researchers.
For the example above, this part would begin with the
robotic companion approaching the participant, seated on the
sofa, to suggest breakfast, remind him/her about the hot water
in the kettle (if they forgot they were making a hot drink) and
notify them about post/parcel/newspaper delivery.
In case the participant needed any assistance from the
experimenter during any of the sessions, they were asked to
knock 3 times at a designated location on the wall.
3) Post-Interaction
This phase involved the participant completing the
different post-interaction questionnaires and discussing the
experiences they just had in the trial with the facilitator. In
addition, both researchers would be available to answer any
questions that the participant might have about the migrating
companion and their interaction. In addition, this phase would
also be used to organise future sessions and respond to other
issues that might have come up.
VI. RESULTS

A. Scenario Acceptance
The Scenario Acceptance Scores for the Habituation and
Manipulation Phases, which focus on participants’ experience
of each scenario within the study, are shown in Figure 3. The
results suggest that scores on this scale were
significantly higher than a ‘neutral’ score of 50 indicating that
the participants found the scenarios believable and provided
them with realistic experiences that allowed the participants to
suspend their disbelief and immerse themselves into the
narratives. There were no significant differences between
sessions (Friedman’s χ²(7)=3.09, p=.87), suggesting that
participants responded to the scenario in a consistent manner
across sessions.
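The Friedman test used above compares the same participants' ratings across sessions by ranking each participant's scores and testing whether the rank sums differ. A minimal, self-contained sketch of the statistic (with made-up data, not the study's) is:

```python
def friedman_chi_square(data):
    """Compute the Friedman chi-square statistic.

    data: one list of k condition scores per participant.
    Ties within a participant's row receive average ranks.
    """
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        # Rank this participant's scores, averaging ranks over ties.
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average of tied 1-based rank positions
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # Standard Friedman statistic: 12/(n k (k+1)) * sum(R_j^2) - 3 n (k+1)
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)


# Synthetic example: two participants who both rate later sessions higher.
stat = friedman_chi_square([[1, 2, 3], [1, 2, 3]])
```

In practice a library routine (e.g. scipy's Friedman test) would also return the p-value; the sketch only computes the statistic itself.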
This suggests that our approach of a Narrative-based
Integrated Episodic Scenario framework in a realistic home
environment was successful: the environment, the individual
narrative within each scenario, the overall narrative between
scenarios, and the characters and their interactions within the
scenarios (i.e. the technician, the robotic companion with its
different embodiments, and the participants) as a whole
provided the realistic interaction experience necessary for the
participants to evaluate the system.

Figure 3. Scenario Acceptance Scores for sessions from the Habituation
(2-4) and Manipulation (5-9) phases.
B. Migration: Post-Interaction Evaluation
Participants rated both Migration Realisation and Identity
Retention higher as the study progressed (the direction of this
trend is shown in Figure 4). Friedman tests suggested that
this trend was significant for Migration Realisation
(χ²(3)=9.61, p=0.02), but only approached significance for
Identity Retention (χ²(3)=5.43, p=0.14). Post-hoc tests
reported in TABLE III and TABLE IV show that the
differences were most pronounced between session 8 and the
earlier sessions. This trend suggests that not only were
participants able to rely more on the cues provided by the
system to signal the migration process, but they also felt more
certain of the identity of the companion in the later sessions.
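The pairwise post-hoc comparisons in TABLE III and TABLE IV follow the same pattern: a Wilcoxon signed-rank test for each pair of sessions. A minimal sketch, again using SciPy and made-up per-participant ratings (not the study data):

```python
# Pairwise Wilcoxon signed-rank post-hoc comparisons between sessions.
# The ratings below are made up for illustration, NOT the study data.
from itertools import combinations
from scipy.stats import wilcoxon

# Per-participant ratings (e.g. on a 1-5 scale) for four sessions.
ratings = {
    2: [3, 3, 4, 2, 3, 4, 3, 3],
    4: [3, 4, 4, 3, 3, 4, 4, 3],
    7: [4, 4, 4, 3, 4, 4, 4, 4],
    8: [4, 5, 4, 4, 4, 5, 4, 4],
}

for a, b in combinations(ratings, 2):
    diffs = [x - y for x, y in zip(ratings[a], ratings[b])]
    mean_diff = sum(diffs) / len(diffs)
    # zero_method='wilcox' discards zero differences before ranking.
    stat, p = wilcoxon(ratings[a], ratings[b], zero_method="wilcox")
    print(f"Session {a} vs Session {b}: mean diff = {mean_diff:+.3f}, p = {p:.3f}")
```

Note that with so few non-zero differences per pair, SciPy falls back to a normal approximation and may warn about small sample size; with samples this small the exact tables are preferable in practice.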
C. Factors for Identity Retention
Participants' responses identifying the most important
factor contributing to the companion's identity retention are
shown in TABLE V. Interaction context,
i.e. the manner in which the companion would begin a task
in one embodiment and complete it in another, was rated as
the most important contributor to the persistent identity of
the agent across different embodiments.
D. Open-ended Responses
Responses to the first open-ended question changed
slightly over time as well. Overall, participant responses to this
item consisted primarily of affirmations that they did feel as if
they were interacting with the same companion in the different
embodiments. However, three participants felt that they had not
interacted with the agent enough in the second session, while
only one participant mentioned this in the fourth session.
Responses to the second open-ended question (how to
better communicate identity retention) could be classified
roughly into three classes: (1) explicit identification, (2)
technical improvements and (3) a different model of migration.
Explicit identification was a common theme in Session 2:
some participants wanted the robot to explicitly identify the
agent. Suggestions included a unique identifier for the agent
in written text, and one participant also suggested a unique
face for the agent which could be displayed as an avatar on
the robot's touch screen.
Technical improvements primarily focused on the speed of
the migration process, as well as the possibility of foregoing
steps in the signaling behaviour in order to have an
instantaneous switch from one embodiment to the other when
performing tasks.
Finally, some participants rejected the current model of
migration and would rather have the agent control all
embodiments at the same time. The main motivation for this
was ease of use, and the convenience of being able to access
specific features of a given embodiment directly from another
embodiment.
Some of these results contrast with the results of
short-term studies such as [5], where explicit identification
was seen as most important. This result is encouraging,
especially when using agent migration between robotic
embodiments whose appearances are restricted by the
functions they are to fulfil, in that coherence of behaviour is a
valid way of communicating identity to the user.

Figure 4. Migration Response Scores (Realisation and ID Retention) for
sessions from the Habituation (2-4) and Manipulation (5-9) phases.

TABLE III. WILCOXON LSD POST-HOC TESTS FOR MIGRATION REALISATION
Comparison                 Mean Difference   Wilcoxon p
Session 2 vs Session 4         -0.25           0.293
Session 2 vs Session 7         -0.5            0.049
Session 2 vs Session 8         -0.75           0.028
Session 4 vs Session 7         -0.25           0.087
Session 4 vs Session 8         -0.25           0.049
Session 7 vs Session 8          0              0.231

TABLE IV. WILCOXON LSD POST-HOC TESTS FOR IDENTITY RETENTION
Comparison                 Mean Difference   Wilcoxon p
Session 2 vs Session 4          0.125          0.500
Session 2 vs Session 7         -0.25           0.425
Session 2 vs Session 8         -0.75           0.121
Session 4 vs Session 7         -0.375          0.293
Session 4 vs Session 8         -0.375          0.101
Session 7 vs Session 8         -0.5            0.102

TABLE V. MOST IMPORTANT FACTOR FOR IDENTITY RETENTION
Factor                                        Participants
Accessing Interaction History                      2
Personalised Attention Seeking Behaviour           1
Migration Realisation                              0
Personalised Preferences                           0
Sense of Context/Task Continuation                 5

VII. CONCLUSION
By using narrative and persona-driven scenarios, and
grounding them in emergent technologies, we were able to
create a coherent narrative in which the participants were
allowed to interact with an agent migrating between different
embodiments. Unlike our previous studies [20][23], the
research presented in this paper shows how one can conduct
an 'experimental' manipulation within such a narrative study.
The seamless integration of this manipulation into the
narrative made for a more coherent experience of the agent,
and allowed us to explore agent migration in a domestic
environment to a much greater degree than would have been
possible using a constrained or short-term study, despite using
technologies that are unstable outside a controlled
development environment.
With the NIES framework we were able to explore
complex intervention scenarios and obtain results for
migrating agents suggesting that a sense of context/task
continuation can help a migrating agent retain its identity
across different robot embodiments that may not share
similar or replicable properties. Signaling the
migration process (realisation of migration) may not have been
the most important factor for our participants at the end of the
study when compared to the other factors shown in TABLE V.
This may be because the migration process can easily be
missed if one is not paying attention, a typical situation
in a domestic environment where people are easily distracted.
Hence it is less reliable than the other rated
factors, such as the recorded interaction history between
participants and the companion, the personalised attention-seeking
behaviour, and the sense of context/task continuation,
for conveying the identity of the agent. However, the results
did indicate that participants found migration signaling
clearer and more meaningful as they became habituated to
the companion and to the concept of agent migration [17].
The main limitation of our approach is the small sample
size, caused by the large amount of resources required both to
maintain the prototype technologies and to structure the
narrative. Moreover, the need for participants to have at least
two one-hour interaction sessions per week limited the number
of participants we could practically accommodate in the Robot
House during a working week. This may limit the
generalisability of the results, but even so, the findings suggest
that studies intended to explore and prototype domestic
technologies such as robot companions can use complex,
meaningful and engaging narratives, even at this early stage of
technology development, using the Narrative-based Integrated
Episodic Scenario (NIES) framework.
ACKNOWLEDGMENT
The work described in this paper was conducted within the
EU Integrated Project LIREC (Living with Robots and
Interactive Companions), funded by the European
Commission under FP7-ICT contract FP7-215554.
REFERENCES
[1] M. L. Walters, D. S. Syrdal, K. Dautenhahn, R. Te Boekhorst, and K. L.
Koay, “Avoiding the uncanny valley: robot appearance, personality and
consistency of behavior in an attention-seeking home scenario for a
robot companion,” Autonomous Robots, vol. 24, no. 2, pp. 159–178, 2008.
[2] K. Ogawa, and T. Ono, “Itaco: Constructing an emotional relationship
between human and robot,” in Proc. of RO-MAN 2008, pp.35–40.
[3] M. Imai, T. Ono and T. Etani, “Agent migration: communications
between a human and robot,” in Proc. IEEE International Conference
on Systems, Man, and Cybernetics, vol. 4, pp. 1044-1048, Tokyo, 1999.
[4] K. L. Koay, D. S. Syrdal, K. Dautenhahn, K. Arent, L. Malek, and B.
Kreczmer, “Companion Migration – Initial Participants’ Feedback from
a Video-Based Prototyping Study”, Mixed Reality and Human-Robot
Interaction, Springer, Series "Intelligent Systems, Control and
Automation: Science and Engineering", vol. 47, Xiangyu Wang (Ed.),
pp133-151, 2011.
[5] B. R. Duffy, G. M. O’Hare, A. N. Martin, J. F. Bradley, and B. Schon,
“Agent chameleons: Agent minds and bodies,” in Proc. CASA 2003,
pp.118-125.
[6] K. Hassani and W.S. Lee, “On designing migrating agents: from
autonomous virtual agents to intelligent robotic systems,” in
SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot
for Telepresence (SA '14), ACM, Article 7, 10 pages, 2014.
[7] R. Aylett, M. Kriegel, I. Wallace, E. Márquez Segura, J. Mecurio, S.
Nylander and P. Vargas, “Do I remember you? Memory and identity in
multiple embodiments,” in Proc. RO-MAN 2013, pp. 143-148.
[8] K. F. MacDorman and S. J. Cowley, “Long-term Relationships as a
Benchmark for Robot Personhood,” in Proc. RO-MAN 2006,
pp. 378–383.
[9] P. F. Gomes, E. M. Segura, H. Cramer, T. Paiva, A. Paiva, and L. E.
Holmquist, “ViPleo and PhyPleo: artificial pet with two embodiments,”
in Proc. ACE 2011, ACM, pp.1-8.
[10] B. Tomlinson, M. L. Yau, and E. Baumer, “Embodied mobile agents,”
in Proc. AAMAS 2006, pp. 969–976.
[11] G. M. P. O’Hare, B. R. Duffy, B. Schön, A. N. Martin, and J. F.
Bradley, “Agent chameleons: Virtual agents real intelligence,” In
Intelligent virtual agents, pp.218–225, 2003.
[12] C. L. Bethel, and R. R. Murphy, “Survey of non-facial/non-verbal
affective expressions for appearance-constrained robots,” in IEEE
Transactions on Systems, Man, and Cybernetics, Part C: Applications
and Reviews, vol. 38, no.1, 83–92, 2008.
[13] A. Martin, B. Duffy, G. O’Hare, B. Schön, and J. Bradley,
“Empowering Agents within Virtual Environments,” in Proc. of the
IEEE Systems, Man, and Cybernetics UKRI 3rd Workshop on
Intelligent Cybernetic Systems, IEEE Computer Society, Los Alamitos,
2004.
[14] Y. Sumi and K. Mase, “AgentSalon: facilitating face-to-face
knowledge exchange through conversations among personal agents,” in
Proc. of the Fifth International Conference on Autonomous Agents,
pp. 393–400, 2001.
[15] M. Salem, G. Lakatos, F. Amirabdollahian, K. Dautenhahn, “Would
You Trust a (Faulty) Robot? Effects of Error, Task Type and
Personality on Human-Robot Cooperation and Trust,” in Proc. HRI
2015, pp. 141-148.
[16] N. Martelaro, M. Shiloh, W. Ju, “Tell Me More: Designing HRI to
encourage more trust, disclosure and companionship,” in Proc. HRI
2016, pp. 181-188.
[17] K. L. Koay, D. S. Syrdal, M. L. Walters, and K. Dautenhahn, “A User
Study on Visualization of Agent Migration between Two Companion
Robots,” in Proc. HCII 2009, pp. 219-226.
[18] K. Arent and B. Kreczmer, "Identity of a companion, migrating
between robots without common communication modalities: Initial
results of VHRI study," in Proc. MMAR 2013, pp. 109-114.
[19] J. Gallego-Pérez, M. Lohse, and V. Evers, “Robots to motivate elderly
people: present and future challenges,” in Proc. RO-MAN 2013,
pp.685-690.
[20] D. S. Syrdal, K. Dautenhahn, K. L. Koay, and W. C. Ho, “Views from
within a narrative: Evaluating long-term human-robot interaction in a
naturalistic environment using open-ended scenarios,” Cognitive
computation, vol.6, no.4, pp.741–759, 2014.
[21] K. L. Koay, D. S. Syrdal, M. Ashgari-Oskoei, M. L. Walters, and K.
Dautenhahn, “Social Roles and Baseline Proxemic Preferences for a
Domestic Service Robot,” Int. J. of Social Robotics vol. 6, pp.469-488,
2014.
[22] K. L. Koay, G. Lakatos, D. S. Syrdal, M. Gácsi, B. Bereczky, K.
Dautenhahn, A. Miklósi, and M. L. Walters, “Hey! there is someone at
your door. A hearing robot using visual communication signals of
hearing dogs to communicate intent,” in Proc. of the IEEE Symposium
on Artificial Life, pp.90-97, 2013.
[23] D. S. Syrdal, K. Dautenhahn, K. L. Koay, and W. C. Ho, “Integrating
constrained experiments in long-term human-robot interactions using
task-and scenario-based prototyping,” The Information Society Special
Issue: Beyond Industrial Robotics: Social Robots Entering Public and
Domestic Spheres, vol. 31, no. 3, pp.265-283, 2015.
[24] I. Duque, K. Dautenhahn, K. L. Koay, I. Willcock, and B. Christianson,
“Knowledge-driven user activity recognition for a smart house:
development and validation of a generic and low-cost, resource-efficient
system,” in Proc. of the International Conference on Advances in
Computer-Human Interactions, pp. 141–146, 2013.
[25] J. Saunders, D. Syrdal, K. L. Koay, N. Burke and K. Dautenhahn,
“‘Teach Me – Show Me’ – End-user personalisation of a smart home and
companion robot,” IEEE Transactions on Human-Machine Systems,
vol. 46, no. 1, pp. 27–40, 2015.
[26] LIREC - Living with Robots and Interactive Companions. Available:
http://www.lirec.eu/. [Accessed: 2- March- 2016].
[27] D. S. Syrdal, K. L. Koay, M. L. Walters, and K. Dautenhahn, “The
boy-robot should bark! Children's impressions of agent migration into
diverse embodiments,” in Proc. of New Frontiers of Human-Robot
Interaction, a symposium at AISB, 2009.
[28] K. L. Koay, D. S. Syrdal, M. L. Walters, and K. Dautenhahn, “A user
study on visualization of agent migration between two companion
robots,” in Proc. of Human-Computer Interaction International (HCII
2009).