The increasing adoption of AI-enabled hiring software raises questions about how Human Resource (HR) professionals use the software in practice and with what consequences. We interviewed 15 recruiters and HR professionals who used AI-enabled hiring software for two decision-making processes in hiring: sourcing and assessment. For both, AI-enabled software allowed the efficient processing of candidate data, thus providing the ability to introduce or advance candidates from broader and more diverse pools. For sourcing, it can serve as a useful learning resource to find candidates. However, a lack of trust in data accuracy and an inadequate level of control over algorithmic candidate matches can create reluctance to embrace it. For assessment, its implementation varied across companies depending on the industry and the hiring scenario. Its inclusion may redefine HR professionals' job content as it automates or augments pieces of the existing hiring process. Our research highlights the importance of understanding the contextual factors that shape how algorithmic hiring is practiced in organizations. CCS CONCEPTS • Social and professional topics → Employment issues; Sociotechnical systems.
In this opinion paper, we argue that global health crises are also information crises. Using as an example the coronavirus disease 2019 (COVID-19) epidemic, we (a) examine challenges associated with what we term "global information crises"; (b) recommend changes needed for the field of information science to play a leading role in such crises; and (c) propose actionable items for short- and long-term research, education, and practice in information science.
Vulnerable populations (e.g., older adults) can be hard to reach online. During a pandemic like COVID-19 when much research data collection must be conducted online only, these populations risk being further underrepresented. This paper explores methodological strategies for rigorous, efficient survey research with a large number of older adults online, focusing on (1) the design of a survey instrument both comprehensible and usable by older adults, (2) rapid collection (within hours) of data from a large number of older adults, and (3) validation of data using attention checks, independent validation of age, and detection of careless responses to ensure data quality. These methodological strategies have important implications for the inclusion of older adults in online research.
Artificial intelligence is increasingly being used to manage the workforce. Algorithmic management promises organizational efficiency, but often undermines worker well-being. How can we computationally model worker well-being so that algorithmic management can be optimized for and assessed in terms of worker well-being? Toward this goal, we propose a participatory approach for worker well-being models. We first define two worker well-being models: work preference models (preferences about work and working conditions) and managerial fairness models (beliefs about fair resource allocation among multiple workers). We then propose elicitation methods that enable workers to build their own well-being models leveraging pairwise comparisons and ranking. As a case study, we evaluate our methods in the context of algorithmic work scheduling with 25 shift workers and 3 managers. The findings show that workers expressed idiosyncratic work preference models and more uniform managerial fairness models, and that the elicitation methods helped workers discover their preferences and gave them a sense of empowerment. Our work provides a method and initial evidence for enabling participatory algorithmic management for worker well-being. CCS CONCEPTS • Human-centered computing → HCI theory, concepts and models; • Social and professional topics → Employment issues.
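The pairwise-comparison elicitation described above can be sketched as follows. This is a minimal illustration, not the paper's method: the win-count scoring, the `prefers` callback, and the shift-duration example are all assumptions introduced here.

```python
from collections import defaultdict
from itertools import combinations

def rank_from_pairwise(items, prefers):
    """Build a preference ranking from pairwise comparisons.

    prefers(a, b) returns True if the worker prefers option a over b.
    Each item is scored by its number of pairwise wins, then items are
    sorted by score; win-count scoring is just one simple aggregation
    choice among many.
    """
    wins = defaultdict(int)
    for a, b in combinations(items, 2):
        if prefers(a, b):
            wins[a] += 1
        else:
            wins[b] += 1
    return sorted(items, key=lambda x: wins[x], reverse=True)

# Hypothetical example: a worker who always prefers shorter shifts.
shift_lengths = [8, 4, 12, 6]
ranking = rank_from_pairwise(shift_lengths, lambda a, b: a < b)
# → [4, 6, 8, 12]
```

Asking for pairwise judgments rather than a full ranking keeps each question simple for the worker; the model is then reconstructed from the accumulated comparisons.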
Algorithms increasingly automate or support managerial functions in organizations, with implications for the employee-employer relationship. We explored how algorithmic management affects this relationship with a focus on psychological contracts, or employees' perceptions of their own and their employers' obligations. Through five online experiments, we investigated how organizational agent type (algorithmic versus human) influenced one's psychological contract depending on the organizational inducement type (transactional versus relational). We explored psychological contracts in two stages of employment: during early phases, such as recruiting (Studies 1 and 2) and onboarding (Studies 4 and 5), when the agent explains the inducements to the employee; and during employment, when the agent under-delivers the inducements to varying degrees (Studies 3-5). Our results suggest that agent type did not affect psychological contracts around transactional inducements but did so for relational inducements in the cases of recruiting and low inducement delivery (Studies 1-5). Algorithmic agents signaled reduced employer commitments to relational inducements during recruiting (Study 1). Using human agents resulted in greater perceived breach when delivery of relational inducements was low (Study 5). Regardless of inducement type, turnover intentions were higher when the human agent under-delivered compared to the algorithmic agent (Study 5). Our studies show how algorithmic management may influence one's psychological contract.
Emerging research suggests that people trust algorithmic decisions less than human decisions. However, different populations, particularly in marginalized communities, may have different levels of trust in human decision-makers. Do people who mistrust human decision-makers perceive human decisions to be more trustworthy and fairer than algorithmic decisions? Or do they trust algorithmic decisions as much as or more than human decisions? We examine the role of mistrust in human systems in people's perceptions of algorithmic decisions. We focus on healthcare Artificial Intelligence (AI), group-based medical mistrust, and Black people in the United States. We conducted a between-subjects online experiment to examine people's perceptions of skin cancer screening decisions made by an AI versus a human physician depending on their medical mistrust, and we conducted interviews to understand how to cultivate trust in healthcare AI. Our findings highlight that research around human experiences of AI should consider critical differences in social groups. CCS CONCEPTS • Human-centered computing → Human computer interaction (HCI).
The majority of smart home research has focused on novel technical artifacts, but has overlooked the issues surrounding social relationships in the home. We argue in favor of research that is sensitive to and functions within the social constraints of dual income family homes. This paper describes our grounded contextual fieldwork with real families in their homes, and identifies socially-aware concepts smart home systems will need to address.
As algorithms increasingly take managerial and governance roles, it is ever more important to build them to be perceived as fair and adopted by people. With this goal, we propose a procedural justice framework in algorithmic decision-making drawing from procedural justice theory, which lays out elements that promote a sense of fairness among users. As a case study, we built an interface that leveraged two key elements of the framework, transparency and outcome control, and evaluated it in the context of goods division. Our interface explained the algorithm's allocative fairness properties (standards clarity) and outcomes through an input-output matrix (outcome explanation), then allowed people to interactively adjust the algorithmic allocations as a group (outcome control). The findings from our within-subjects laboratory study suggest that standards clarity alone did not increase perceived fairness; outcome explanation had mixed effects, increasing or decreasing perceived fairness and reducing algorithmic accountability; and outcome control universally improved perceived fairness by allowing people to realize the inherent limitations of decisions and redistribute the goods to better fit their contexts, and by bringing human elements into final decision-making.
Algorithms increasingly govern societal functions, impacting multiple stakeholders and social groups. How can we design these algorithms to balance varying interests in a moral, legitimate way? As one answer to this question, we present WeBuildAI, a collective participatory framework that enables people to build algorithmic policy for their communities. The key idea of the framework is to enable stakeholders to construct a computational model that represents their views and to have those models vote on their behalf to create algorithmic policy. As a case study, we applied this framework to a matching algorithm that operates an on-demand food donation transportation service in order to adjudicate equity and efficiency trade-offs. The service's stakeholders (donors, volunteers, recipient organizations, and nonprofit employees) used the framework to design the algorithm through a series of studies in which we researched their experiences. Our findings suggest that the framework successfully enabled participants to build models that they felt confident represented their own beliefs. Participatory algorithm design also improved both procedural fairness and the distributive outcomes of the algorithm, raised participants' algorithmic awareness, and helped identify inconsistencies in human decision-making in the governing organization. Our work demonstrates the feasibility, potential, and challenges of community involvement in algorithm design.
Virtual democracy is an approach to automating decisions by learning models of the preferences of individual people and, at runtime, aggregating the predicted preferences of those people on the dilemma at hand. One of the key questions is which aggregation method, or voting rule, to use; we offer a novel statistical viewpoint that provides guidance. Specifically, we seek voting rules that are robust to prediction errors, in that their output on people's true preferences is likely to coincide with their output on noisy estimates thereof. We prove that the classic Borda count rule is robust in this sense, whereas any voting rule belonging to the wide family of pairwise-majority consistent rules is not. Our empirical results further support, and more precisely measure, the robustness of Borda count.
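For reference, the classic Borda count rule analyzed above works as follows; this sketch uses an illustrative ranking format and a deterministic tie-break that are assumptions of this example, not part of the paper.

```python
from collections import defaultdict

def borda_count(rankings):
    """Aggregate rankings with the classic Borda count rule.

    Each ranking lists alternatives from most to least preferred.
    Among m alternatives, the alternative in position i (0-based)
    receives m - 1 - i points; the highest total score wins.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, alternative in enumerate(ranking):
            scores[alternative] += m - 1 - position
    # Break ties by name so the outcome is reproducible.
    return max(scores, key=lambda a: (scores[a], a))

# Three voters over alternatives a, b, c:
winner = borda_count([["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]])
# a: 2+1+0 = 3, b: 1+2+2 = 5, c: 0+0+1 = 1 → "b" wins
```

Because each position contributes a graded score rather than a winner-take-all vote, small prediction errors that swap adjacent positions perturb totals only slightly, which is the intuition behind the robustness result.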
Algorithms exert great power in curating online information, yet are often opaque in their operation, and even existence. Since opaque algorithms sometimes make biased or deceptive decisions, many have called for increased transparency. However, little is known about how users perceive and interact with potentially biased and deceptive opaque algorithms. What factors are associated with these perceptions, and how does adding transparency into algorithmic systems change user attitudes? To address these questions, we conducted two studies: 1) an analysis of 242 users' online discussions about the Yelp review filtering algorithm and 2) an interview study with 15 Yelp users in which we disclosed the algorithm's existence via a tool. We found that users question or defend this algorithm and its opacity depending on their engagement with and personal gain from the algorithm. We also found that adding transparency into the algorithm changed users' attitudes towards it: users reported their intention to either write for the algorithm in future reviews or leave the platform.
People in work-separated families rely heavily on cutting-edge face-to-face communication services. Despite their ease of use and ubiquitous availability, remote face-to-face communication still falls far short of the experience of actually living together. We envision that enabling a remote person to be spatially superposed in one's living space would be a breakthrough to catalyze pseudo living-together interactivity. We propose HomeMeld, a zero-hassle self-mobile robotic system serving as a co-present avatar to create a persistent illusion of living together for those who are involuntarily living apart. The key challenges are 1) continuous spatial mapping between two heterogeneous floor plans and 2) navigating the robotic avatar to reflect the other's presence in real time under the limited maneuverability of the robot. We devise a notion of functionally equivalent location and orientation to translate a person's presence into another in a heterogeneous floor plan. We also develop predictive path warping to seamlessly synchronize the presence of the other. We conducted extensive experiments and deployment studies with real participants.
Algorithms increasingly make managerial decisions that people used to make. Perceptions of algorithms, regardless of the algorithms' actual performance, can significantly influence their adoption, yet we do not fully understand how people perceive decisions made by algorithms as compared with decisions made by humans. To explore perceptions of algorithmic management, we conducted an online experiment using four managerial decisions that required either mechanical or human skills. We manipulated the decision-maker (algorithmic or human), and measured perceived fairness, trust, and emotional response. With the mechanical tasks, algorithmic and human-made decisions were perceived as equally fair and trustworthy and evoked similar emotions; however, human managers' fairness and trustworthiness were attributed to the manager's authority, whereas algorithms' fairness and trustworthiness were attributed to their perceived efficiency and objectivity. Human decisions evoked some positive emotion due to the possibility of social recognition, whereas algorithmic decisions generated a more mixed response: algorithms were seen as helpful tools but also possible tracking mechanisms. With the human tasks, algorithmic decisions were perceived as less fair and trustworthy and evoked more negative emotion than human decisions. Algorithms' perceived lack of intuition and subjective judgment capabilities contributed to the lower fairness and trustworthiness judgments. Positive emotion from human decisions was attributed to social recognition, while negative emotion from algorithmic decisions was attributed to the dehumanizing experience of being evaluated by machines. This work reveals people's lay concepts of algorithmic versus human decisions in a management context and suggests that task characteristics matter in understanding people's experiences with algorithmic technologies. This article is part of a special theme on Algorithms in Culture.
To see a full list of all articles in this special theme, please click here: http://journals.sagepub.com/page/bds/collections/algorithms-in-culture.
We already know algorithms can make our lives and our work more efficient, but how can we go beyond that to create trustworthy, fair, and enjoyable workplaces in which workers can find meaning and continuously learn?
Contributing to a growing attention to algorithms and algorithmic interaction in the CHI and CSCW communities, this workshop aims to deal centrally with the topic of human "participation" and its changing role in data-driven, algorithmic ecosystems. Such a focus includes projects that involve users in the design of algorithms and "human-in-the-loop" systems, broader investigations into the ways in which "participation" is situated in data-driven activities, as well as conceptual concerns about participation's changing contours in contemporary social computing landscapes. This one-day workshop will be led by academic and industry researchers and sets out to achieve three goals: identify cases and ongoing projects on the topic of participation in algorithmic ecosystems; create a tactical toolkit of key challenges and strategies in this space; and set a forward-facing agenda to provoke further attention to the changing role of participation in contemporary sociotechnical systems.
As a city becomes smarter, the integrated networks of engineered cyber and physical elements provide the capability to greatly improve the quality of life of its citizens. In order to leverage these capabilities to benefit all classes of society, we propose a framework that balances the supply and demand of available resources while maximizing the social welfare of people in need by utilizing cyber-physical infrastructure in smart cities. We show through numerical simulations that our proposed framework can reduce the amount of resources wasted by 25% through intelligently assigning the location of services and dynamically pairing resources to different homeless populations.
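The supply-demand balancing described above can be illustrated with a toy pairing heuristic. This is a hypothetical sketch, not the paper's optimization framework: the greedy largest-demand-first rule, the data shapes, and the waste metric are all assumptions of this example.

```python
def assign_resources(supplies, demands):
    """Greedily pair resource supplies with population demands.

    supplies: dict mapping service location -> units available.
    demands: dict mapping population -> units needed.
    Serves the largest demand first from the largest remaining
    supply, and returns (assignments, wasted_units), where wasted
    units are supplies left unassigned.
    """
    remaining = dict(supplies)
    assignments = []
    for population, need in sorted(demands.items(), key=lambda kv: -kv[1]):
        while need > 0 and any(units > 0 for units in remaining.values()):
            location = max(remaining, key=remaining.get)
            used = min(need, remaining[location])
            remaining[location] -= used
            need -= used
            assignments.append((population, location, used))
    wasted = sum(remaining.values())
    return assignments, wasted

# Hypothetical example: two service locations, two populations.
pairs, wasted = assign_resources({"A": 10, "B": 5}, {"x": 8, "y": 4})
# pairs → [("x", "A", 8), ("y", "B", 4)], wasted → 3
```

A production system would replace this greedy rule with a welfare-maximizing optimization, but the sketch shows the core bookkeeping of matching finite supplies to competing demands and measuring leftover waste.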
How do individuals perceive algorithmic vs. group-made decisions? We investigated people's perceptions of mathematically-proven fair division algorithms making social division decisions. In our first qualitative study, about one third of the participants perceived algorithmic decisions as less than fair (30% for self, 36% for group), often because algorithmic assumptions about users did not account for multiple concepts of fairness or social behaviors, and the process of quantifying preferences through interfaces was prone to error. In our second experiment, algorithmic decisions were perceived to be less fair than discussion-based decisions, depending on participants' interpersonal power and computer programming knowledge. Our work suggests that for algorithmic mediation to be fair, algorithms and their interfaces should account for social and altruistic behaviors that may be difficult to define in mathematical terms.
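One mathematical fairness property that such fair division algorithms typically guarantee is envy-freeness, which can be checked as below. This is a generic textbook criterion, not the specific algorithms studied in the paper; the data shapes are assumptions of this example.

```python
def is_envy_free(allocation, valuations):
    """Check whether an allocation of items is envy-free.

    allocation: dict mapping person -> set of items they received.
    valuations: dict mapping person -> dict of item -> their value.
    An allocation is envy-free if no person values someone else's
    bundle more than their own, judged by their own valuations.
    """
    def bundle_value(person, bundle):
        return sum(valuations[person].get(item, 0) for item in bundle)

    people = list(allocation)
    return all(
        bundle_value(p, allocation[p]) >= bundle_value(p, allocation[q])
        for p in people
        for q in people
    )

# Each person gets the item they value most: no one envies the other.
alloc = {"p": {"book"}, "q": {"pen"}}
vals = {"p": {"book": 3, "pen": 1}, "q": {"book": 1, "pen": 3}}
# is_envy_free(alloc, vals) → True
```

The paper's point is precisely that satisfying a formal criterion like this does not guarantee that people perceive the outcome as fair, since social and altruistic considerations fall outside the valuation model.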
Algorithms are increasingly being incorporated into diverse services that orchestrate multiple stakeholders' needs and interests. How can we design these algorithmic services to make decisions that are not only efficient, but also fair and motivating? We take a human-centered approach to identify and address challenges in building human-centered algorithmic services. We are in the process of building an allocation algorithm for 412 Food Rescue, an organization that matches food donations with non-profit organizations. As part of this ongoing project, we conducted interviews with multiple stakeholders in the service—organization staff, donors, volunteers, recipient non-profits and their clients, and everyday citizens—in order to understand how the allocation algorithm, interfaces, and surrounding work practices should be designed. The findings suggest that we need to understand and account for varying fairness notions held by stakeholders; consider people, contexts, and interfaces for algorithms to work fairly in the real world; and preserve meaningfulness and social interaction in automation in order to build fair and motivating algorithmic services.
Computational algorithms have recently emerged as the subject of fervent public and academic debates. What animates many of these debates is a perceived lack of clarity as to what algorithms actually are, what precisely they do, and which human-technology-relations their application may bring about. Therefore, this CSCW workshop critically discusses computational algorithms and the diverse ways in which humans relate to them—focusing particularly upon work practices and investigating how algorithms facilitate, regulate, and require human labor, as well as how humans make sense of and react to them. The purpose of this workshop is threefold: first, to chart the diversity of algorithmic technologies as well as their application, appropriation, use and presence in work practices; second, to probe analytic vocabularies that account for empirical diversity; third, to discuss implications for design that come out of our understandings of algorithms and the technologies through which they are enacted.
This panel will explore algorithmic authority as it manifests and plays out across multiple domains. Algorithmic authority refers to the power of algorithms to manage human action and influence what information is accessible to users. Algorithms increasingly have the ability to affect everyday life, work practices, and economic systems through automated decision-making and interpretation of "big data". Cases of algorithmic authority include algorithmically curating news and social media feeds, evaluating job performance, matching dates, and hiring and firing employees. This panel will bring together researchers of quantified self, healthcare, digital labor, social media, and the sharing economy to deepen the emerging discourses on the ethics, politics, and economics of algorithmic authority in multiple domains.
Seeking to be sensitive to users, smart home researchers have focused on the concept of control. They attempt to allow users to gain control over their lives by framing the problem as one of end-user programming. But families are not users as we typically conceive them, and a large body of ethnographic research shows how their activities and routines do not map well to programming tasks. End-user programming ultimately provides control of devices. But families want more control of their lives. In this paper, we explore this disconnect. Using grounded contextual fieldwork with dual-income families, we describe the control that families want, and suggest seven design principles that will help end-user programming systems deliver that control.
While the user-centered design methods we bring from human-computer interaction to ubicomp help sketch ideas and refine prototypes, few tools or techniques help explore divergent design concepts, reflect on their merits, and come to a new understanding of design opportunities and ways to address them. We present Speed Dating, a design method for rapidly exploring application concepts and their interactions and contextual dimensions without requiring any technology implementation. Situated between sketching and prototyping, Speed Dating structures comparison of concepts, helping identify and understand contextual risk factors and develop approaches to address them. We illustrate how to use Speed Dating by applying it to our research on the smart home and dual-income families, and highlight our findings from using this method.
Current approaches to personalization either presuppose people's needs and automatically tailor services or provide formulaic options for people to customize. We propose a complementary approach to personalization: a reflective strategy that helps people realize what matters to them and enables them to better personalize services themselves. To design this strategy, we first studied the practices of eight personal health service providers. We then tested the strategy's efficacy by building a Fitbit Plan website that encouraged Fitbit users to customize a plan or accept an automatically tailored plan. For one group of users, the website used the reflective strategy to assist in the plan setup process. A two-week between-subjects field experiment showed that the reflective strategy helped motivate users to carry out their plans, increasing their average daily steps by 2,425 steps. Without the reflective strategy, users either set easy goals or failed to carry out system-created plans, ultimately showing no change in their average daily steps. This work suggests that helping people reflect on and connect with their own goals in using a personalized service could advance the effectiveness of the service.
Software algorithms are changing how people work in an ever-growing number of fields, managing distributed human workers at a large scale. In these work settings, human jobs are assigned, optimized, and evaluated through algorithms and tracked data. We explore the impact of this algorithmic, data-driven management on human workers and work practices in the context of Uber and Lyft, new ridesharing services. Our findings from a qualitative study describe how drivers responded when algorithms assigned work, provided informational support, and evaluated their performance, and how drivers used online forums to socially make sense of the algorithm features. Implications and future work are discussed.
Telepresence means business people can make deals in other countries, doctors can give remote medical advice, and soldiers can rescue someone from thousands of miles away. When interaction is mediated, people are removed from and lack context about the person they are making decisions about. In this paper, we explore the impact of technological mediation on risk and dehumanization in decision-making. We conducted a laboratory experiment involving medical treatment decisions. The results suggest that technological mediation influences decision making, but its influence depends on an individual’s self-construal: participants who saw themselves as defined through their relationships (interdependent self-construal) recommended riskier and more painful treatments in video conferencing than when face-to-face. We discuss implications of our results for theory and future research.
Gaze-based interaction has several benefits: naturalism, remote controllability, and easy accessibility. However, it has been mostly used for screen-based interaction with static information. In this paper, we propose a concept of gaze-based interaction that augments the physical world with social information. We demonstrate this interaction in a shopping scenario. In-store shopping is a setting where social information can augment the physical environment to better support a user's purchase decision. Based on the user's gaze point, we project the following information on the product and its surrounding surface: collective in-store gazes and purchase data, product comparison information, animation expressing the product's ingredients, and online social comments. This paper presents the design of the system, the results and discussion of an informal user study, and future work.
As geographically distributed teams become increasingly common, there are more pressing demands for communication work practices and technologies that support distributed collaboration. One set of technologies emerging on the commercial market is mobile remote presence (MRP) systems: physically embodied videoconferencing systems that remote workers use to drive through a workplace, communicating with locals there. Our interviews, observations, and survey results from people who had 2-18 months of MRP use showed how remotely controlled mobility enabled remote workers to live and work with local coworkers almost as if they were physically there. The MRP supported informal communications and connections between distributed coworkers. We also found that the mobile embodiment of the remote worker evoked orientations toward the MRP both as a person and as a machine, leading to the formation of new usage norms among remote and local coworkers.
Influence through information and feedback has been one of the main approaches of persuasive technology. We propose another approach based on behavioral economics research on decision-making. This approach involves designing the presentation and timing of choices to encourage people to make self-beneficial decisions. We applied three behavioral economics persuasion techniques—the default option strategy, the planning strategy, and the asymmetric choice strategy—to promote healthy snacking in the workplace. We tested the strategies in three experimental case studies using a human snack deliverer, a robot, and a snack ordering website. The default and the planning strategies were effective, but they worked differently depending on whether the participants had healthy dietary lifestyles or not. We discuss designs for persuasive technologies that apply behavioral economics.
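The default option strategy described above can be sketched as a choice presentation in which the healthy option requires no action, while the indulgent option requires an explicit opt-out. This is a minimal sketch under stated assumptions; the `Snack` type and `present_choice` function are illustrative, not the paper's implementation.

```python
# Hypothetical sketch of the "default option" persuasion strategy:
# the self-beneficial choice is preselected, so inaction yields it.

from dataclasses import dataclass

@dataclass
class Snack:
    name: str
    healthy: bool

def present_choice(snacks, user_pick=None):
    """Return the user's explicit pick, or the healthy default if none."""
    default = next(s for s in snacks if s.healthy)
    return user_pick if user_pick is not None else default

menu = [Snack("apple", True), Snack("cookie", False)]
no_action = present_choice(menu)          # defaults to the healthy snack
opt_out = present_choice(menu, menu[1])   # explicit opt-out still honored
```

The design point is that the strategy changes only the presentation and timing of the choice, never the set of options available.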
Prior research has investigated the effect of interactive social agents presented on computer screens or embodied in robots. Much of this research has been pursued in labs and brief field studies. Comparatively little is known about social agents embedded in the workplace, where employees have repeated interactions with the agent, alone and with others. We designed a social robot snack delivery service for a workplace, and evaluated the service over four months, allowing each employee to use it for two months. We report on how employees responded to the robot and the service over repeated encounters. Employees attached social roles to the robot beyond that of a delivery person as they incorporated the robot’s visits into their workplace routines. Beyond one-on-one interaction, the robot created a ripple effect in the workplace, triggering new behaviors among employees, including politeness, protection of the robot, mimicry, social comparison, and even jealousy. We discuss the implications of these ripple effects for designing services incorporating social agents.
Creating and sustaining rapport between robots and people is critical for successful robotic services. As a first step towards this goal, we explored a personalization strategy with a snack delivery robot. We designed a social robotic snack delivery service, and, for half of the participants, personalized the service based on participants’ service usage and interactions with the robot. The service ran for each participant for two months. We evaluated this strategy during a 4-month field experiment. The results show that, as compared with the social service alone, adding personalized service improved rapport, cooperation, and engagement with the robot during service encounters.
Robots that operate in the real world will make mistakes, and those who design and build systems will need to understand how best to provide ways for robots to mitigate those mistakes. Building on diverse research literatures, we consider how to mitigate breakdowns in services provided by robots. Expectancy-setting strategies forewarn people of a robot’s limitations so people will expect mistakes. Recovery strategies, including apologies, compensation, and options for the user, aim to reduce the negative consequence of breakdowns. We tested these strategies in an online scenario study with 317 participants. A breakdown in robotic service had severe impact on evaluations of the service and the robot, but forewarning and recovery strategies reduced the negative impact of the breakdown. People’s orientation toward services influenced which recovery strategy worked best. Those with a relational orientation responded best to the apology; those with a utilitarian orientation responded best to the compensation. We discuss robotic service design to mitigate service problems.
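The strategy pairing reported above — apology for relationally oriented users, compensation for utilitarian ones — can be sketched as a simple dispatch. The function names, message strings, and the fallback case are hypothetical illustrations of the finding, not the study's materials.

```python
# Hypothetical sketch: match a service-recovery strategy to a user's
# orientation toward services, per the study's findings.

def forewarn():
    """Expectancy-setting: state the robot's limitations before service."""
    return "Note: I occasionally make mistakes with orders."

def pick_recovery(orientation):
    """Choose the recovery strategy most likely to mitigate a breakdown."""
    if orientation == "relational":
        return "apology"       # relational users responded best to apologies
    if orientation == "utilitarian":
        return "compensation"  # utilitarian users responded best to compensation
    return "options"           # otherwise, offer the user choices
```

A deployed service would combine forewarning with recovery rather than choosing between them, since the study found both reduced the breakdown's negative impact.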
We present the design of the Snackbot, a robot that will deliver snacks in our university buildings. The robot is intended to provide a useful, continuing service and to serve as a research platform for long-term Human-Robot Interaction. Our design process, which occurred over 24 months, is documented as a contribution for others in HRI who may be developing social robots that offer services. We describe the phases of the design project, and the design decisions and tradeoffs that led to the current version of the robot.
While the user-centered design methods we bring from human-computer interaction to ubicomp help sketch ideas and refine prototypes, few tools or techniques help explore divergent design concepts, reflect on their merits, and come to a new understanding of design opportunities and ways to address them. We present Speed Dating, a design method for rapidly exploring application concepts and their interactions and contextual dimensions without requiring any technology implementation. Situated between sketching and prototyping, Speed Dating structures comparison of concepts, helping identify and understand contextual risk factors and develop approaches to address them. We illustrate how to use Speed Dating by applying it to our research on the smart home and dual-income families, and highlight our findings from using this method.
Our research group is designing robotic products and services that will adapt to people’s changing behavior through repeated interactions. Through the process of creating these products and services, we have found that there are many dynamic issues to account for in the design: the behavior of customers, the capabilities of the robot, and what the robot knows about people and their preferences. Current service design blueprints are not sufficient for capturing these dynamic relationships among elements of the service. Therefore, we explored the activity of designing a dynamic robot-assisted snacking service. Based on the results of a contextual inquiry, we created a service design blueprint which incorporated three main components: a delivery robot, a website, and human assistants. Using this blueprint, we mapped out how the snacking service will evolve over time. Our contribution is an addition to the service blueprint process, which can represent services that change over time.
The mental structures that people apply towards other people have been shown to influence the way people cooperate with others. These mental structures or schemas evoke behavioral scripts. In this paper, we explore two different scripts, receptionist and information kiosk, that we propose channeled visitors’ interactions with an interactive robot. We analyzed visitors’ typed verbal responses to a receptionist robot in a university building. Half of the visitors greeted the robot (e.g., “hello”) prior to interacting with it. Greeting the robot significantly predicted a more social script: more relational conversational strategies such as sociable interaction and politeness, attention to the robot’s narrated stories, self-disclosure, and less negative/rude behaviors. The findings suggest people’s first words in interaction can predict their schematic orientation to an agent, making it possible to design agents that adapt to individuals during interaction. We propose designs for interactive computational agents that can elicit people’s cooperation.
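The finding that a visitor's first words predict their schematic orientation suggests a simple adaptive mechanism: detect a greeting in the opening utterance and select the corresponding interaction script. The greeting word list and function names below are illustrative assumptions, not the study's classifier.

```python
# Hypothetical sketch: use the visitor's first utterance to predict
# their schema and pick an interaction script for the agent.

GREETINGS = {"hello", "hi", "hey", "good morning", "good afternoon"}

def greeted(first_utterance):
    """True if the utterance opens with a greeting."""
    text = first_utterance.lower().strip()
    return any(text.startswith(g) for g in GREETINGS)

def choose_script(first_utterance):
    """Greeters tend toward a social 'receptionist' script; others
    toward a transactional 'information kiosk' script."""
    return "receptionist" if greeted(first_utterance) else "kiosk"
```

An agent using this signal could then adapt its relational strategies — sociable interaction, politeness, storytelling — to the predicted script during the rest of the interaction.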
A handover is a complex collaboration, where actors coordinate in time and space to transfer control of an object. This coordination comprises two processes: the physical process of moving to get close enough to transfer the object, and the cognitive process of exchanging information to guide the transfer. Despite this complexity, we humans are capable of performing handovers seamlessly in a wide variety of situations, even when unexpected. This suggests a common procedure that guides all handover interactions. Our goal is to codify that procedure.
To that end, we first study how people hand objects to each other in order to understand their coordination process and the signals and cues that they use and observe with their partners. Based on these studies, we propose a coordination structure for human-robot handovers that considers the physical and social-cognitive aspects of the interaction separately. This handover structure describes how people approach, reach out their hands, and transfer objects while simultaneously coordinating the what, when, and where of handovers: to agree that the handover will happen (and with what object), to establish the timing of the handover, and to decide the configuration at which the handover will occur. We experimentally evaluate human-robot handover behaviors that exploit this structure, and offer design implications for seamless human-robot handover interactions.
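One way to picture the coordination structure above is as a small state machine in which each phase advances only on a coordination signal from the partner. The phase names, signal strings, and transition table are an illustrative reading of the what/when/where decomposition, not the paper's formal model.

```python
# Hypothetical sketch: the handover coordination structure as a
# signal-driven state machine.

from enum import Enum, auto

class Phase(Enum):
    APPROACH = auto()  # agree the handover will happen, and with what object
    REACH = auto()     # establish timing by reaching out
    TRANSFER = auto()  # agree on the configuration and pass control
    DONE = auto()

def step(phase, signal):
    """Advance the handover when the partner gives the expected cue;
    otherwise hold the current phase."""
    transitions = {
        (Phase.APPROACH, "intent_acknowledged"): Phase.REACH,
        (Phase.REACH, "hand_extended"): Phase.TRANSFER,
        (Phase.TRANSFER, "grip_confirmed"): Phase.DONE,
    }
    return transitions.get((phase, signal), phase)
```

Keeping the physical process (approach, reach) and the cognitive agreement (acknowledgment, grip confirmation) in separate transition conditions mirrors the paper's separation of the two aspects.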
Designing radically new technology systems that people will want to use is complex. Design teams must draw on knowledge related to people’s current values and desires to envision a preferred yet plausible future. However, the introduction of new technology can shape people’s values and practices, and what-we-know-now about them does not always translate to an effective guess of what the future could, or should, be. New products and systems typically exist outside of current understandings of technology and use paradigms; they often have few interaction and social conventions to guide the design process, making efforts to pursue them complex and risky. User Enactments (UEs) have been developed as a design approach that helps design teams more successfully investigate radical alterations to technologies’ roles, forms, and behaviors in uncharted design spaces. In this paper, we reflect on our repeated use of UE over the past five years to unpack lessons learned and further specify how and when to use it. We conclude with a reflection on how UE can function as a boundary object and implications for future work.
When performing physical collaboration tasks, like packing a picnic basket together, humans communicate strongly and often subtly via multiple channels like gaze, speech, gestures, movement and posture. Understanding and participating in this communication enables us to predict a physical action rather than react to it, producing seamless collaboration. In this paper, we automatically learn key discriminative features that predict the intent to hand over an object using machine learning techniques. We train and test our algorithm on multi-channel vision and pose data collected from an extensive user study in an instrumented kitchen. Our algorithm outputs a tree of possibilities, automatically encoding various types of pre-handover communication. A surprising outcome is that mutual gaze and inter-personal distance, often cited as being key for interaction, were not key discriminative features. Finally, we discuss the immediate and future impact of this work for human-robot interaction.
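The core idea — learning which multi-channel feature best discriminates pre-handover frames — can be illustrated with a single-split decision stump, a degenerate case of the tree learning described. The feature names and toy data are fabricated for illustration; the actual study used recorded vision and pose channels.

```python
# Hypothetical sketch: find the single threshold feature that best
# separates "handover intended" from "no handover" samples.

def best_stump(samples):
    """samples: list of (features: dict, label: bool).
    Returns the (feature, threshold) with highest direction-agnostic
    accuracy over the training samples."""
    best = None
    for feat in samples[0][0]:
        values = sorted(f[feat] for f, _ in samples)
        for thr in values:
            correct = sum((f[feat] >= thr) == y for f, y in samples)
            acc = max(correct, len(samples) - correct) / len(samples)
            if best is None or acc > best[0]:
                best = (acc, feat, thr)
    return best[1], best[2]

# Toy data echoing the paper's surprise: mutual gaze is uninformative
# here, while arm extension cleanly separates the classes.
data = [
    ({"arm_extension": 0.9, "gaze_mutual": 1.0}, True),
    ({"arm_extension": 0.8, "gaze_mutual": 0.0}, True),
    ({"arm_extension": 0.2, "gaze_mutual": 1.0}, False),
    ({"arm_extension": 0.1, "gaze_mutual": 0.0}, False),
]
```

A full tree learner would recurse on each side of the chosen split; the stump suffices to show what "key discriminative feature" means operationally.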
User models can be useful for improving dialogue management. In this paper we analyze human-robot dialogues that occur during uncontrolled interactions and estimate relations between the initial dialogue turns and patterns of discourse that are indicative of such user traits as persistence and politeness. The significant effects shown in this preliminary study suggest that initial dialogue turns may be useful in modeling a user’s interaction style.
Handing over objects to humans is an essential capability for assistive robots. While there are infinite ways to hand over an object, robots should be able to choose the one that is best for the human. In this paper we focus on choosing the robot and object configuration at which the transfer of the object occurs, i.e., the hand-over configuration. We advocate the incorporation of user preferences in choosing hand-over configurations. We present a user study in which we collect data on human preferences, and a human-robot interaction experiment in which we compare hand-over configurations learned from human examples against configurations planned using a kinematic model of the human. We find that the learned configurations are preferred in terms of several criteria; however, planned configurations provide better reachability. Additionally, we find that humans prefer hand-overs with default orientations of objects, and we identify several latent variables about the robot’s arm that capture significant human preferences. These findings point toward planners that can generate not only optimal but also preferable hand-over configurations for novel objects.
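The trade-off reported above — learned configurations are preferred, planned ones are more reachable — suggests a planner that scores candidates on both criteria. The weighted-sum scoring below is a hypothetical sketch; the scoring functions, weight, and configuration names stand in for the paper's learned preference model and kinematic reachability model.

```python
# Hypothetical sketch: blend a learned preference score with a
# kinematic reachability score when picking a hand-over configuration.

def choose_configuration(candidates, preference, reachability, w=0.7):
    """Pick the candidate maximizing w*preference + (1-w)*reachability."""
    return max(candidates,
               key=lambda c: w * preference(c) + (1 - w) * reachability(c))

# Toy scores: the default object orientation is strongly preferred,
# consistent with the study's finding, though slightly less reachable.
prefs = {"default_upright": 0.9, "tilted": 0.4}
reach = {"default_upright": 0.6, "tilted": 0.8}
pick = choose_configuration(["default_upright", "tilted"],
                            prefs.get, reach.get)
```

Raising `w` biases the planner toward what people like; lowering it biases toward what the kinematic model says they can comfortably reach.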
For robots to get integrated in daily tasks assisting humans, robot-human interactions will need to reach a level of fluency close to that of human-human interactions. In this paper we address the fluency of robot-human hand-overs. From an observational study with our robot HERB, we identify the key problems with a baseline hand-over action. We find that the failure to convey the intention of handing over causes delays in the transfer, while the lack of an intuitive signal to indicate timing of the hand-over causes early, unsuccessful attempts to take the object. We propose to address these problems with the use of spatial contrast, in the form of distinct hand-over poses, and temporal contrast, in the form of unambiguous transitions to the hand-over pose. We conduct a survey to identify distinct hand-over poses, and determine variables of the pose that have most communicative potential for the intent of handing over. We present an experiment that analyzes the effect of the two types of contrast on the fluency of hand-overs. We find that temporal contrast is particularly useful in improving fluency by eliminating early attempts of the human.
Believability of characters has been an objective in literature, theater, film, and animation. We argue that believable robot characters are important in human-robot interaction, as well. In particular, we contend that believable characters evoke users’ social responses that, for some tasks, lead to more natural interactions and are associated with improved task performance. In a dialogue-capable robot, a key to such believability is the integration of a consistent story line, verbal and nonverbal behaviors, and sociocultural context. We describe our work in this area and present empirical results from three robot receptionist test beds that operate “in the wild.”
For many years technology researchers have promised a smart home that, through an awareness of people’s activities and intents, will provide the appropriate assistance to improve human experience. However, before people will accept intelligent technology into their homes and their lives, they must feel they have control over it (Norman 1994). To address this issue, social researchers have been conducting ethnographic research on families, looking for opportunities where technology can best provide assistance. At the same time, technology researchers studying “end user programming” have focused on how people can control devices in their homes. We observe an interesting disconnect between the two approaches: the ethnographic work reveals that families desire to “feel in control of their lives” more than in control of their devices. Our work attempts to bridge the divide between these two research communities by exploring the role a smart home can play in the life of a dual-income family. If we first understand the roles a smart home can play, we can then more appropriately choose how to provide families with the control they desire, extending the control of devices to incorporate the control over their lives that families say they need.
The rate of donations made by individuals is relatively low in Korea when compared to other developed countries. To address this problem, we propose DONA, an urban donation-motivating robot prototype. The robot roams around a public space and solicits donations from passers-by by engaging them through pet-like interaction. In this paper, we present the prototype of the robot and our design process.
One goal of assistive robotics is to design interactive robots that can help disabled people with tasks such as fetching objects. When people do this task, they coordinate their movements closely with receivers. We investigated how a robot should fetch and give household objects to a person. To develop a model for the robot, we first studied trained dogs and person-to-person handoffs. Our findings suggest two models of handoff that differ in their predictability and adaptivity.
Previous research has shown that design features that support privacy are essential for new technologies looking to gain widespread adoption. As such, privacy-sensitive design will be important for the adoption of social robots, as they could introduce new types of privacy risks to users. In this paper, we report findings from our preliminary study on users’ perceptions and attitudes toward privacy in human-robot interaction, based on interviews that we conducted about a workplace social robot.
Seeking to be sensitive to users, smart home researchers have focused on the concept of control. They attempt to allow users to gain control over their lives by framing the problem as one of end-user programming. But families are not users as we typically conceive them, and a large body of ethnographic research shows how their activities and routines do not map well to programming tasks. End-user programming ultimately provides control of devices. But families want more control of their lives. In this paper, we explore this disconnect. Using grounded contextual fieldwork with dual-income families, we describe the control that families want, and suggest seven design principles that will help end-user programming systems deliver that control.