ABSTRACT Code reuse in advanced robotics can be problematic due to the wide spectrum of robotic projects and the lack of standardisation. Although standards and tools such as YARP exist for some cases, many niches in the field are still unable to use them. This paper addresses some of the issues preventing these tools from being adopted. It concentrates on widely used concepts such as genericity and modularity, and in particular discusses encapsulation, generic templates, and work in progress towards software self-regulation and self-alteration. The paper also studies the grouping of modules into items which can recursively reside within larger items, leading to a platform designed with these concepts in mind. This allows modules to be employed in an easy and efficient manner, enabling not only code reuse but also concepts of self-repairing and self-maintaining robots. With this level of extendibility, SAMGAR aims to facilitate the implementation of intelligent agents and the migration of their "personalities" (behaviour tendencies) from one physical embodiment to another. A proof-of-concept implementation using a robotics simulation environment is presented.
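The recursive grouping of modules into items that can themselves reside within larger items can be pictured as a simple composite structure. The sketch below is only an illustration of that idea under assumed, hypothetical class names (Module, Item); it does not reflect SAMGAR's actual API.

```python
# Illustrative sketch only: modules grouped into items, where items can
# recursively contain other items. Class names are hypothetical, not SAMGAR's.

class Module:
    """A leaf unit of functionality, e.g. a sensor driver or a behaviour."""
    def __init__(self, name):
        self.name = name

    def run(self):
        print(f"running module: {self.name}")


class Item:
    """A container that groups modules and may recursively hold other items."""
    def __init__(self, name):
        self.name = name
        self.children = []          # mix of Module and Item instances

    def add(self, child):
        self.children.append(child)
        return self                 # allow chaining when composing items

    def run(self):
        # Running an item runs everything it contains, depth first.
        for child in self.children:
            child.run()


# Example composition: a "head" item containing a camera module and a nested
# "gaze" item that itself holds two modules.
head = Item("head")
head.add(Module("camera"))
gaze = Item("gaze").add(Module("face_tracker")).add(Module("attention"))
head.add(gaze)
head.run()
```

A composite of this kind lets a whole item be swapped, restarted, or migrated between embodiments as a single unit, which is the kind of reuse the abstract argues for.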
As we expect that the presence of autonomous robots in our everyday life will increase, we must consider that people will not only have to accept robots as a fundamental part of their lives, but will also have to trust them to reliably and securely engage them in collaborative tasks. Several studies have shown that people are more comfortable interacting with robots that respect social conventions. However, it is still not clear whether a robot that expresses social conventions will gain people's trust more readily. In this study, we aimed to assess whether the use of social behaviours and natural communication can affect humans' sense of trust and companionship towards robots. We conducted a between-subjects study in which participants' trust was tested in three scenarios of increasing trust criticality (low, medium, high), interacting either with a social or a non-social robot. Our findings showed that participants trusted a social and a non-social robot equally in the low and medium consequence scenarios. In contrast, in the more sensitive task, participants' decisions to trust the robot were affected by whether it expressed social cues, with a consequent decrease of their trust in the robot.
We tested the hypothesis that children are more attentive to a robot if the robot appears to be interested in the children. In addition, we investigated if and how the quality and quantity of a child's attentive behaviour varies with the distance to the robot, reflecting the notion of "social spaces". Hereto, 16 groups of up to 10 children each were engaged in a play scenario in which they had to move closer to a robot over 6 successive rounds. The robot was endowed with a "camera eye" and an arm and hand. The camera could either be non-moving ("static") or actively "searching" ("active searching"), giving the impression it was trying to select a child to focus on. Likewise, the arm and hand could either be fixed in a permanent pointing position ("permanent pointing") or actively rise to point selectively at a particular child when it stopped facing it ("selective pointing"). The results showed that: 1) The mean frequency of overall attentive behaviour by the children (including atten...
As we expect that the presence of autonomous robots in our everyday life will increase, we must consider that people will have to trust robots to reliably and securely engage them in collaborative tasks. Our main research aims to assess whether a certain degree of transparency in the robot's actions, the use of social behaviours and natural communication can affect humans' sense of trust and companionship towards robots. In this paper, we introduce the research topic and our approach to evaluating the impact of robot social behaviours on people's trust in the robot. Future work will use the results collected during this study to create guidelines for designing a robot that is able to enhance human perceptions of trust and acceptability of robots in safe Human-Robot Interaction.
When studying the use of assistive robots in home environments, and especially how such robots can be personalised to meet the needs of the resident, key concerns are issues related to behaviour verification, behaviour interference and safety. Here, personalisation refers to the teaching of new robot behaviours by both technical and non-technical end users. In this article, we consider the issue of behaviour interference caused by situations where newly taught robot behaviours may affect, or be affected by, existing behaviours, and thus those behaviours will not, or might not, ever be executed. We focus in particular on how such situations can be detected and presented to the user. We describe the human–robot behaviour teaching system that we developed as well as the formal behaviour checking methods used. The online use of behaviour checking is demonstrated, based on static analysis of behaviours during the operation of the robot, and evaluated in a user study. We conducted a proof-of-...
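The kind of interference described here, where one taught behaviour's effects can prevent another behaviour's triggering condition from ever holding, can be sketched as a simple static check over behaviour rules. The example below is a minimal illustration under assumed names (Behaviour, find_interference) and a toy propositional model; it is not the formal checking method used in the article.

```python
# Minimal illustrative sketch, not the article's formal checking method.
# A behaviour is modelled as a set of precondition facts and a set of effects.

from dataclasses import dataclass, field

@dataclass
class Behaviour:
    name: str
    preconditions: set = field(default_factory=set)   # facts required to trigger
    effects: set = field(default_factory=set)         # facts asserted when it runs

def negate(fact: str) -> str:
    """Represent the negation of a simple propositional fact."""
    return fact[4:] if fact.startswith("not_") else "not_" + fact

def find_interference(behaviours):
    """Report pairs where one behaviour's effects contradict another's
    preconditions, i.e. running the first could stop the second from ever
    being triggered."""
    issues = []
    for a in behaviours:
        for b in behaviours:
            if a is b:
                continue
            blocked = {negate(e) for e in a.effects} & b.preconditions
            if blocked:
                issues.append((a.name, b.name, blocked))
    return issues

# Example: a newly taught behaviour that closes the curtains interferes with an
# existing behaviour that requires the curtains to be open.
new_behaviour = Behaviour("evening_routine", {"time_evening"}, {"not_curtains_open"})
existing = Behaviour("water_plants_reminder", {"curtains_open"}, {"reminder_given"})
for first, second, facts in find_interference([new_behaviour, existing]):
    print(f"'{first}' may block '{second}' (conflicting facts: {facts})")
```

A report like the tuple returned above is the kind of result that could then be presented to the end user for confirmation or repair.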
As robots increasingly take part in daily living activities, humans will have to interact with them in domestic and other human-oriented environments. We can expect that domestic robots will exhibit occasional mechanical, programming or functional errors, as occurs with other electrical consumer devices. For example, these errors could include software errors, dropping objects due to gripper malfunctions, picking up the wrong object, or showing faulty navigational skills due to unclear camera images or noisy laser scanner data. It is therefore important for a domestic robot to have acceptable interactive behaviour when exhibiting and recovering from an error situation. As a first step, the current study investigated human users' perceptions of the severity of various categories of potential errors that are likely to be exhibited by a domestic robot. We conducted a questionnaire-based study in which participants rated the severity of 20 different scenarios where a domestic robot made an error. Our findings indicate that people's perceptions of the magnitude of the errors presented in the questionnaire were consistent. We did not find any significant differences in users' ratings due to age or gender. We clearly identified scenarios that participants rated as having limited consequences ("small" errors) and scenarios rated as having severe consequences ("big" errors). Future work will use these two sets of consistently rated robot error scenarios as baseline scenarios for studies with repeated interactions, investigating human perceptions of robot tasks and error severity.
As robot companions become more common and people become more familiar with devices such as Google Home, Alexa or Pepper, one must wonder what the optimum way is for people to control their devices. This paper presents an investigation into how much direct control people want to have over their robot companion and how dependent this is on the criticality of the tasks the robot performs. A live experiment was conducted in the University of Hertfordshire Robot House, with a robot companion performing four different types of task: booking a doctor's appointment, helping the user to build a Lego character, doing a dance with the user, and carrying biscuits for the user. The selection of these tasks was based on our previous research defining tasks of relatively high and low criticality. The main goal of the study was to find what level of direct control participants have over their robot, and if this was dependent on the criticality of t...
In the late 1990s, the question of control was raised in the Human Computer Interaction (HCI) community within the process of designing computer interfaces. Following in their footsteps, we explore the question of how much people want to be in control of their robots and how this affects the way we should design robotic interfaces. To investigate the subject, we conducted a study which involved two fully autonomously operating mobile robots, namely a multi-purpose companion robot and a single-purpose domestic robot. The purpose of the study was to evaluate participants' sense of control (perceived control, i.e. who they felt was in charge of the robots, and desired control, i.e. how they wanted the action to be executed) for a common domestic task: cleaning. Unexpectedly, the results show that the higher the participants' desired control was, the more autonomous they wanted the companion robot to be (meaning the robot executed the needed task without explicit permission from the participants).
This paper presents results, outcomes and conclusions from a series of Human Robot Interaction (HRI) trials which investigated how a robot should approach a human in a fetch and carry task. Two pilot trials were carried out, aiding the development of a main HRI trial with four different approach contexts under controlled experimental conditions. The findings from the pilot trials were confirmed and expanded upon. Most subjects disliked a frontal approach when seated; in general, seated humans do not like to be approached by a robot directly from the front, even when seated behind a table. A frontal approach is more acceptable when a human is standing in an open area. Most subjects preferred to be approached from either the left or right side, with a small overall preference for a right approach by the robot. However, this is not a strong preference and it may be disregarded if it is more physically convenient to approach from a left front direction. Handedness and occupation were not related to these preferences. Subjects do not usually like the robot to move or approach from directly behind them, preferring the robot to be in view even if this means the robot taking a physically non-optimum path. The subjects for the main HRI trials had no previous experience of interacting with robots. Future research aims are outlined and include the necessity of carrying out longitudinal trials to see if these findings hold over a longer period of exposure to robots.
Stable encoding of robot paths using normalised radial basis networks: application to an autonomous wheelchair
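The title refers to encoding robot paths with a normalised radial basis function network, i.e. representing a path compactly as weighted, normalised Gaussian basis activations over a phase variable. The sketch below is a generic illustration of that idea only; it is not the paper's implementation, and all centres, widths and example data are arbitrary.

```python
# Generic illustration of encoding a 2-D path with a normalised radial basis
# function (RBF) network. Not the paper's implementation; values are arbitrary.

import numpy as np

def normalised_rbf_features(s, centres, width):
    """Gaussian basis activations along a scalar phase s in [0, 1], normalised
    so the activations sum to one at every phase (the 'normalised' in the title)."""
    phi = np.exp(-((s[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))
    return phi / phi.sum(axis=1, keepdims=True)

# Demonstration path: a quarter circle sampled at 50 phase values.
s = np.linspace(0.0, 1.0, 50)
path = np.stack([np.cos(0.5 * np.pi * s), np.sin(0.5 * np.pi * s)], axis=1)

# Fit the output weights by least squares so that path ≈ Phi @ W.
centres = np.linspace(0.0, 1.0, 10)
Phi = normalised_rbf_features(s, centres, width=0.08)
W, *_ = np.linalg.lstsq(Phi, path, rcond=None)

# Reconstruct the path from the compact weight representation.
reconstruction = Phi @ W
print("max reconstruction error:", np.abs(reconstruction - path).max())
```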
ABSTRACT The current paper focuses on a novel integration of the Personas technique into HRI studies, and the definition of a Persona-Based Computational Behaviour Model for achieving socially intelligent robot companions in living environments. Our core interest is the creation of companions adapted to users' needs to support their activities of daily living. The aim is to create a mechanism that allows us to develop initial robot behaviour, i.e. behaviour when first encountering the user, which is already adapted to each user without the necessity of collecting a large dataset in advance to train the system. A persona represents the specific needs of many individuals for a particular scenario. This technique helps us develop initial robot behaviour adapted to user needs, and so reduces the number of trials that participants have to perform during early stages of the system development. The paper describes how this behaviour model has been created and integrated into a functional architecture, and presents the motivation, background and conceptual framework for this new research direction. Future empirical studies will validate this approach and expand the initial definition of our model.
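The general idea of deriving initial robot behaviour from a persona rather than from collected user data can be sketched as a simple mapping from persona attributes to default behaviour parameters. The example below is purely illustrative; the Persona fields, parameter names and values are hypothetical and do not correspond to the model defined in the paper.

```python
# Illustrative sketch: initial behaviour parameters derived from a persona,
# i.e. a representative description of a user group. All fields and values
# are hypothetical examples, not the paper's behaviour model.

from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    mobility: str          # e.g. "low", "medium", "high"
    prefers_reminders: bool
    social_contact: str    # e.g. "frequent", "occasional"

def initial_behaviour(persona: Persona) -> dict:
    """Map persona attributes to default behaviour parameters used at the first
    encounter, before any interaction data about the real user is available."""
    return {
        "approach_distance_m": 1.2 if persona.mobility == "low" else 0.8,
        "reminder_frequency_per_day": 3 if persona.prefers_reminders else 0,
        "proactive_dialogue": persona.social_contact == "frequent",
    }

# Example: a persona representing older adults living alone with reduced mobility.
edith = Persona("Edith", mobility="low", prefers_reminders=True, social_contact="occasional")
print(initial_behaviour(edith))
```

These defaults would then be refined through interaction with the actual user, which is why the persona only needs to provide a plausible starting point rather than a trained model.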
