The increasing adoption of AI-enabled hiring software raises questions about how Human Resource (HR) professionals use the software in practice and with what consequences. We interviewed 15 recruiters and HR professionals who used AI-enabled hiring software for two decision-making processes in hiring: sourcing and assessment. For both, AI-enabled software allowed the efficient processing of candidate data, thus providing the ability to introduce or advance candidates from broader and more diverse pools. For sourcing, it can serve as a useful learning resource to find candidates. However, a lack of trust in data accuracy and an inadequate level of control over algorithmic candidate matches can create reluctance to embrace it. For assessment, its implementation varied across companies depending on the industry and the hiring scenario. Its inclusion may redefine HR professionals' job content as it automates or augments pieces of the existing hiring process. Our research highlights the importance of understanding the contextual factors that shape how algorithmic hiring is practiced in organizations. CCS CONCEPTS • Social and professional topics → Employment issues; Sociotechnical systems.
In this opinion paper, we argue that global health crises are also information crises. Using as an example the coronavirus disease 2019 (COVID-19) epidemic, we (a) examine challenges associated with what we term "global information crises"; (b) recommend changes needed for the field of information science to play a leading role in such crises; and (c) propose actionable items for short- and long-term research, education, and practice in information science.
Vulnerable populations (e.g., older adults) can be hard to reach online. During a pandemic like COVID-19 when much research data collection must be conducted online only, these populations risk being further underrepresented. This paper explores methodological strategies for rigorous, efficient survey research with a large number of older adults online, focusing on (1) the design of a survey instrument both comprehensible and usable by older adults, (2) rapid collection (within hours) of data from a large number of older adults, and (3) validation of data using attention checks, independent validation of age, and detection of careless responses to ensure data quality. These methodological strategies have important implications for the inclusion of older adults in online research.
Artificial intelligence is increasingly being used to manage the workforce. Algorithmic management promises organizational efficiency, but often undermines worker well-being. How can we computationally model worker well-being so that algorithmic management can be optimized for and assessed in terms of worker well-being? Toward this goal, we propose a participatory approach for worker well-being models. We first define worker well-being models: work preference models (preferences about work and working conditions) and managerial fairness models (beliefs about fair resource allocation among multiple workers). We then propose elicitation methods that enable workers to build their own well-being models by leveraging pairwise comparisons and ranking. As a case study, we evaluate our methods in the context of algorithmic work scheduling with 25 shift workers and 3 managers. The findings show that workers expressed idiosyncratic work preference models and more uniform managerial fairness models, and the elicitation methods helped workers discover their preferences and gave them a sense of empowerment. Our work provides a method and initial evidence for enabling participatory algorithmic management for worker well-being. CCS CONCEPTS • Human-centered computing → HCI theory, concepts and models; • Social and professional topics → Employment issues.
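One way to turn pairwise comparisons into a preference model, as the elicitation methods above do, is to rank items by how often a worker chose them over alternatives. The following is a minimal illustrative sketch, not the paper's actual method; the function name and the shift attributes are invented for the example.

```python
from collections import defaultdict

def ranking_from_pairwise(comparisons):
    """Derive a preference ranking from pairwise comparisons.

    comparisons: list of (winner, loser) tuples, each recording that
    the worker preferred `winner` over `loser` (e.g., one scheduling
    attribute over another). Items are ranked by number of wins,
    with ties broken alphabetically.
    """
    wins = defaultdict(int)
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        items.update((winner, loser))
    return sorted(items, key=lambda item: (-wins[item], item))

# A worker answers three pairwise questions about shift attributes
# (hypothetical attributes, for illustration only):
answers = [
    ("weekends off", "consistent hours"),
    ("weekends off", "longer shifts"),
    ("consistent hours", "longer shifts"),
]
print(ranking_from_pairwise(answers))
# ['weekends off', 'consistent hours', 'longer shifts']
```

Counting wins like this is the simplest aggregation; richer models (e.g., Bradley-Terry) fit strength scores from the same comparison data.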
Algorithms increasingly automate or support managerial functions in organizations, with implications for the employee-employer relationship. We explored how algorithmic management affects this relationship with a focus on psychological contracts, or employees' perceptions of their own and their employers' obligations. Through five online experiments, we investigated how organizational agent type (algorithmic versus human) influenced one's psychological contract depending on the organizational inducement type (transactional versus relational). We explored psychological contracts in two stages of employment: during early phases, such as recruiting (Studies 1 and 2) and onboarding (Studies 4 and 5), when the agent explains the inducements to the employee; and during employment, when the agent under-delivers the inducements to varying degrees (Studies 3-5). Our results suggest that agent type did not affect psychological contracts around transactional inducements but did so for relational inducements in the cases of recruiting and low inducement delivery (Studies 1-5). Algorithmic agents signaled reduced employer commitments to relational inducements during recruiting (Study 1). Using human agents resulted in greater perceived breach when delivery of relational inducements was low (Study 5). Regardless of inducement type, turnover intentions were higher when the human agent under-delivered compared to the algorithmic agent (Study 5). Our studies show how algorithmic management may influence one's psychological contract.
Emerging research suggests that people trust algorithmic decisions less than human decisions. However, different populations, particularly in marginalized communities, may have different levels of trust in human decision-makers. Do people who mistrust human decision-makers perceive human decisions to be more trustworthy and fairer than algorithmic decisions? Or do they trust algorithmic decisions as much as or more than human decisions? We examine the role of mistrust in human systems in people's perceptions of algorithmic decisions. We focus on healthcare Artificial Intelligence (AI), group-based medical mistrust, and Black people in the United States. We conducted a between-subjects online experiment to examine people's perceptions of skin cancer screening decisions made by an AI versus a human physician depending on their medical mistrust, and we conducted interviews to understand how to cultivate trust in healthcare AI. Our findings highlight that research around human experiences of AI should consider critical differences in social groups. CCS CONCEPTS • Human-centered computing → Human computer interaction (HCI).
The majority of smart home research has focused on novel technical artifacts, but has overlooked the issues surrounding social relationships in the home. We argue in favor of research that is sensitive to and functions within the social constraints of dual-income family homes. This paper describes our grounded contextual fieldwork with real families in their homes, and identifies socially-aware concepts smart home systems will need to address.
As algorithms increasingly take managerial and governance roles, it is ever more important to build them to be perceived as fair and adopted by people. With this goal, we propose a procedural justice framework in algorithmic decision-making drawing from procedural justice theory, which lays out elements that promote a sense of fairness among users. As a case study, we built an interface that leveraged two key elements of the framework, transparency and outcome control, and evaluated it in the context of goods division. Our interface explained the algorithm's allocative fairness properties (standards clarity) and outcomes through an input-output matrix (outcome explanation), then allowed people to interactively adjust the algorithmic allocations as a group (outcome control). The findings from our within-subjects laboratory study suggest that standards clarity alone did not increase perceived fairness; outcome explanation had mixed effects, increasing or decreasing perceived fairness and reducing algorithmic accountability; and outcome control universally improved perceived fairness by allowing people to realize the inherent limitations of decisions and redistribute the goods to better fit their contexts, and by bringing human elements into final decision-making.
Algorithms increasingly govern societal functions, impacting multiple stakeholders and social groups. How can we design these algorithms to balance varying interests in a moral, legitimate way? As one answer to this question, we present WeBuildAI, a collective participatory framework that enables people to build algorithmic policy for their communities. The key idea of the framework is to enable stakeholders to construct a computational model that represents their views and to have those models vote on their behalf to create algorithmic policy. As a case study, we applied this framework to a matching algorithm that operates an on-demand food donation transportation service in order to adjudicate equity and efficiency trade-offs. The service's stakeholders (donors, volunteers, recipient organizations, and nonprofit employees) used the framework to design the algorithm through a series of studies in which we researched their experiences. Our findings suggest that the framework successfully enabled participants to build models that they felt confident represented their own beliefs. Participatory algorithm design also improved both procedural fairness and the distributive outcomes of the algorithm, raised participants' algorithmic awareness, and helped identify inconsistencies in human decision-making in the governing organization. Our work demonstrates the feasibility, potential, and challenges of community involvement in algorithm design.
Virtual democracy is an approach to automating decisions by learning models of the preferences of individual people and, at runtime, aggregating the predicted preferences of those people on the dilemma at hand. One of the key questions is which aggregation method, or voting rule, to use; we offer a novel statistical viewpoint that provides guidance. Specifically, we seek voting rules that are robust to prediction errors, in that their output on people's true preferences is likely to coincide with their output on noisy estimates thereof. We prove that the classic Borda count rule is robust in this sense, whereas any voting rule belonging to the wide family of pairwise-majority consistent rules is not. Our empirical results further support, and more precisely measure, the robustness of Borda count.
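The Borda count rule analyzed above can be sketched in a few lines: each ballot ranks m alternatives, and a candidate at position i (0-indexed) earns m - 1 - i points. This is a generic textbook implementation for illustration, not code from the paper.

```python
from collections import defaultdict

def borda_count(rankings):
    """Aggregate ranked ballots with the Borda count rule.

    rankings: list of ballots; each ballot lists candidates from
    most to least preferred. Among m candidates, the candidate at
    position i scores (m - 1 - i) points on that ballot.
    Returns candidates sorted by total score, highest first,
    with ties broken alphabetically.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for i, candidate in enumerate(ranking):
            scores[candidate] += m - 1 - i
    return sorted(scores, key=lambda c: (-scores[c], c))

# Three (possibly noisy) predicted preference rankings:
ballots = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(borda_count(ballots))  # ['a', 'b', 'c'] (scores: a=5, b=3, c=1)
```

Because each ballot contributes graded scores rather than a single winner-versus-loser judgment, small prediction errors in individual rankings tend to wash out in the totals, which is intuitively why Borda is more robust to noise than pairwise-majority rules.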
Algorithms exert great power in curating online information, yet are often opaque in their operation, and even existence. Since opaque algorithms sometimes make biased or deceptive decisions, many have called for increased transparency. However, little is known about how users perceive and interact with potentially biased and deceptive opaque algorithms. What factors are associated with these perceptions, and how does adding transparency into algorithmic systems change user attitudes? To address these questions, we conducted two studies: 1) an analysis of 242 users' online discussions about the Yelp review filtering algorithm and 2) an interview study with 15 Yelp users disclosing the algorithm's existence via a tool. We found that users question or defend this algorithm and its opacity depending on their engagement with and personal gain from the algorithm. We also found adding transparency into the algorithm changed users' attitudes towards the algorithm: users reported their intention to either write for the algorithm in future reviews or leave the platform.
People in work-separated families rely heavily on cutting-edge face-to-face communication services. Despite the ease of use and ubiquitous availability of these services, remote face-to-face communication still falls far short of the experience of actually living together. We envision that enabling a remote person to be spatially superposed in one's living space would be a breakthrough that catalyzes pseudo living-together interactivity. We propose HomeMeld, a zero-hassle, self-mobile robotic system serving as a co-present avatar to create a persistent illusion of living together for those who are involuntarily living apart. The key challenges are 1) continuous spatial mapping between two heterogeneous floor plans and 2) navigating the robotic avatar to reflect the other's presence in real time under the limited maneuverability of the robot. We devise a notion of functionally equivalent location and orientation to translate a person's presence in one floor plan into another, heterogeneous floor plan. We also develop predictive path warping to seamlessly synchronize the presence of the other. We conducted extensive experiments and deployment studies with real participants.
Algorithms increasingly make managerial decisions that people used to make. Perceptions of algorithms, regardless of the algorithms' actual performance, can significantly influence their adoption, yet we do not fully understand how people perceive decisions made by algorithms as compared with decisions made by humans. To explore perceptions of algorithmic management, we conducted an online experiment using four managerial decisions that required either mechanical or human skills. We manipulated the decision-maker (algorithmic or human), and measured perceived fairness, trust, and emotional response. With the mechanical tasks, algorithmic and human-made decisions were perceived as equally fair and trustworthy and evoked similar emotions; however, human managers' fairness and trustworthiness were attributed to the manager's authority, whereas algorithms' fairness and trustworthiness were attributed to their perceived efficiency and objectivity. Human decisions evoked some positive emotion due to the possibility of social recognition, whereas algorithmic decisions generated a more mixed response: algorithms were seen as helpful tools but also possible tracking mechanisms. With the human tasks, algorithmic decisions were perceived as less fair and trustworthy and evoked more negative emotion than human decisions. Algorithms' perceived lack of intuition and subjective judgment capabilities contributed to the lower fairness and trustworthiness judgments. Positive emotion from human decisions was attributed to social recognition, while negative emotion from algorithmic decisions was attributed to the dehumanizing experience of being evaluated by machines.
This work reveals people's lay concepts of algorithmic versus human decisions in a management context and suggests that task characteristics matter in understanding people's experiences with algorithmic technologies. This article is part of a special theme on Algorithms in Culture; the full list of articles in the theme is available at http://journals.sagepub.com/page/bds/collections/algorithms-in-culture.
We already know algorithms can make our lives and our work more efficient, but how can we go beyond that to create trustworthy, fair, and enjoyable workplaces in which workers can find meaning and continuously learn?
Contributing to growing attention to algorithms and algorithmic interaction in the CHI and CSCW communities, this workshop deals centrally with the topic of human "participation" and its changing role in data-driven, algorithmic ecosystems. Such a focus includes projects that involve users in the design of algorithms and "human-in-the-loop" systems, broader investigations into the ways in which "participation" is situated in data-driven activities, and conceptual concerns about participation's changing contours in contemporary social computing landscapes. This one-day workshop will be led by academic and industry researchers and sets out to achieve three goals: identify cases and ongoing projects on the topic of participation in algorithmic ecosystems; create a tactical toolkit of key challenges and strategies in this space; and set a forward-facing agenda to provoke further attention to the changing role of participation in contemporary sociotechnical systems.
As a city becomes smarter, the integrated networks of engineered cyber and physical elements provide the capability to greatly improve the quality of life of its citizens. In order to leverage these capabilities to benefit all classes of society, we propose a framework that balances the supply and demand of available resources while maximizing the social welfare of people in need by utilizing cyber-physical infrastructure in smart cities. We show through numerical simulations that our proposed framework can reduce the amount of resources wasted by 25% through intelligently assigning the location of services and dynamically pairing resources to different homeless populations.
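The core idea of pairing resource supplies with population demands to reduce waste can be illustrated with a simple greedy assignment. This sketch is only a toy stand-in for the paper's optimization framework; the function name, location names, and greedy heuristic are all assumptions made for the example.

```python
def assign_resources(supplies, demands):
    """Greedily pair resource supplies with population demands.

    supplies: dict mapping supply location -> units available
    demands:  dict mapping population group -> units needed
    Returns (assignments, wasted), where assignments maps
    (location, group) -> units transferred, and wasted is the
    total supply left unassigned after all demand is served.
    """
    assignments = {}
    remaining = dict(supplies)
    # Serve the largest demands first, drawing from the largest
    # remaining supplies, so big needs are not fragmented.
    for group, need in sorted(demands.items(), key=lambda kv: -kv[1]):
        for location in sorted(remaining, key=lambda loc: -remaining[loc]):
            if need == 0:
                break
            take = min(remaining[location], need)
            if take:
                assignments[(location, group)] = take
                remaining[location] -= take
                need -= take
    return assignments, sum(remaining.values())

# Hypothetical supplies and demands:
alloc, wasted = assign_resources(
    {"shelter_A": 10, "shelter_B": 5},
    {"group_1": 8, "group_2": 4},
)
print(alloc, wasted)  # {('shelter_A', 'group_1'): 8, ('shelter_B', 'group_2'): 4} 3
```

A production system would replace this greedy pass with a proper optimization (e.g., linear programming over a welfare objective), which is closer in spirit to what the abstract describes.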
How do individuals perceive algorithmic versus group-made decisions? We investigated people's perceptions of mathematically-proven fair division algorithms making social division decisions. In our first qualitative study, about one third of the participants perceived algorithmic decisions as less than fair (30% for self, 36% for group), often because algorithmic assumptions about users did not account for multiple concepts of fairness or social behaviors, and the process of quantifying preferences through interfaces was prone to error. In our second experiment, algorithmic decisions were perceived to be less fair than discussion-based decisions, depending on participants' interpersonal power and computer programming knowledge. Our work suggests that for algorithmic mediation to be fair, algorithms and their interfaces should account for social and altruistic behaviors that may be difficult to define in mathematical terms.
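Fair division algorithms like those studied above typically target formal properties such as envy-freeness: no person values another's bundle more than their own, by their own valuation. A minimal check of that property can be written as follows; the function and the example goods and valuations are illustrative, not taken from the study.

```python
def is_envy_free(allocation, valuations):
    """Check whether an allocation of indivisible goods is envy-free.

    allocation: dict mapping person -> set of goods they received
    valuations: dict mapping person -> {good: that person's value}
    Envy-free means every person values their own bundle at least
    as much as any other person's bundle, by their own valuation.
    """
    def value(person, bundle):
        return sum(valuations[person].get(good, 0) for good in bundle)

    people = list(allocation)
    return all(
        value(p, allocation[p]) >= value(p, allocation[q])
        for p in people for q in people
    )

# Two people dividing three goods with different valuations:
vals = {
    "ann": {"car": 6, "bike": 3, "tv": 1},
    "bob": {"car": 4, "bike": 4, "tv": 2},
}
alloc = {"ann": {"car"}, "bob": {"bike", "tv"}}
print(is_envy_free(alloc, vals))  # True: each values their own bundle at 6
                                  # and the other's at 4
```

The study's finding is precisely that satisfying such a mathematical property does not guarantee the outcome *feels* fair, since social and altruistic considerations fall outside the valuation model.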
The increasing adoption of AI-enabled hiring software raises questions about the practice of Huma... more The increasing adoption of AI-enabled hiring software raises questions about the practice of Human Resource (HR) professionals' use of the software and its consequences. We interviewed 15 recruiters and HR professionals who used AI-enabled hiring software for two decision-making processes in hiring: sourcing and assessment. For both, AI-enabled software allowed the efficient processing of candidate data, thus providing the ability to introduce or advance candidates from broader and more diverse pools. For sourcing, it can serve as a useful learning resource to find candidates. Though, a lack of trust in data accuracy and an inadequate level of control over algorithmic candidate matches can create reluctance to embrace it. For assessment, its implementation varied across companies depending on the industry and the hiring scenario. Its inclusion may redefine HR professionals' job content as it automates or augments pieces of the existing hiring process. Our research highlights the importance of understanding the contextual factors that shape how algorithmic hiring is practiced in organizations. CCS CONCEPTS • Social and professional topics → Employment issues; Sociotechnical systems.
In this opinion paper, we argue that global health crises are also information crises. Using as a... more In this opinion paper, we argue that global health crises are also information crises. Using as an example the coronavirus disease 2019 (COVID-19) epidemic, we (a) examine challenges associated with what we term "global information crises"; (b) recommend changes needed for the field of information science to play a leading role in such crises; and (c) propose actionable items for short-and long-term research, education, and practice in information science.
Vulnerable populations (e.g., older adults) can be hard to reach online. During a pandemic like C... more Vulnerable populations (e.g., older adults) can be hard to reach online. During a pandemic like COVID-19 when much research data collection must be conducted online only, these populations risk being further underrepresented. This paper explores methodological strategies for rigorous, efficient survey research with a large number of older adults online, focusing on (1) the design of a survey instrument both comprehensible and usable by older adults, (2) rapid collection (within hours) of data from a large number of older adults, and (3) validation of data using attention checks, independent validation of age, and detection of careless responses to ensure data quality. These methodological strategies have important implications for the inclusion of older adults in online research.
Artificial intelligence is increasingly being used to manage the workforce. Algorithmic managemen... more Artificial intelligence is increasingly being used to manage the workforce. Algorithmic management promises organizational efficiency, but often undermines worker well-being. How can we computationally model worker well-being so that algorithmic management can be optimized for and assessed in terms of worker well-being? Toward this goal, we propose a participatory approach for worker well-being models. We first define worker well-being models: Work preference models-preferences about work and working conditions, and managerial fairness models-beliefs about fair resource allocation among multiple workers. We then propose elicitation methods to enable workers to build their own well-being models leveraging pairwise comparisons and ranking. As a case study, we evaluate our methods in the context of algorithmic work scheduling with 25 shift workers and 3 managers. The findings show that workers expressed idiosyncratic work preference models and more uniform managerial fairness models, and the elicitation methods helped workers discover their preferences and gave them a sense of empowerment. Our work provides a method and initial evidence for enabling participatory algorithmic management for worker well-being. CCS CONCEPTS • Human-centered computing → HCI theory, concepts and models; • Social and professional topics → Employment issues.
Algorithms increasingly automate or support managerial functions in organizations, with implicati... more Algorithms increasingly automate or support managerial functions in organizations, with implications for the employee-employer relationship. We explored how algorithmic management affects this relationship with a focus on psychological contracts, or employees' perceptions of their own and their employers' obligations. Through five online experiments, we investigated how organizational agent type-algorithmic versus human-influenced one's psychological contract depending on the organizational inducement type-transactional versus relational. We explored psychological contracts in two stages of employment: during early phases, such as recruiting (Studies 1 and 2) and onboarding (Studies 4 and 5), when the agent explains the inducements to the employee; and during employment, when the agent under-delivers the inducements to varying degrees (Studies 3-5). Our results suggest that agent type did not affect psychological contracts around transactional inducements but did so for relational inducements in the cases of recruiting and low inducement delivery (Studies 1-5). Algorithmic agents signaled reduced employer commitments to relational inducements during recruiting (Study 1). Using human agents resulted in greater perceived breach when delivery of relational inducements was low (Study 5). Regardless of inducement type, turnover intentions were higher when the human agent underdelivered compared to the algorithmic agent (Study 5). Our studies show how algorithmic management may influence one's psychological contract.
Emerging research suggests that people trust algorithmic decisions less than human decisions. How... more Emerging research suggests that people trust algorithmic decisions less than human decisions. However, different populations, particularly in marginalized communities, may have different levels of trust in human decision-makers. Do people who mistrust human decision-makers perceive human decisions to be more trustworthy and fairer than algorithmic decisions? Or do they trust algorithmic decisions as much as or more than human decisions? We examine the role of mistrust in human systems in people's perceptions of algorithmic decisions. We focus on healthcare Artificial Intelligence (AI), group-based medical mistrust, and Black people in the United States. We conducted a between-subjects online experiment to examine people's perceptions of skin cancer screening decisions made by an AI versus a human physician depending on their medical mistrust, and we conducted interviews to understand how to cultivate trust in healthcare AI. Our findings highlight that research around human experiences of AI should consider critical differences in social groups. CCS CONCEPTS • Human-centered computing → Human computer interaction (HCI).
The majority of smart home research has focused on novel technical artifacts, but has overlooked ... more The majority of smart home research has focused on novel technical artifacts, but has overlooked the issues surrounding social relationships in the home. We argue in favor of research that is sensitive to and functions within the social constraints of dual income family homes. This paper describes our grounded contextual fieldwork with real families in their homes, and identifies socially-aware concepts smart home systems will need to address.
As algorithms increasingly take managerial and governance roles, it is ever more important to bui... more As algorithms increasingly take managerial and governance roles, it is ever more important to build them to be perceived as fair and adopted by people. With this goal, we propose a procedural justice framework in algorithmic decision-making drawing from procedural justice theory, which lays out elements that promote a sense of fairness among users. As a case study, we built an interface that leveraged two key elements of the framework-transparency and outcome control-and evaluated it in the context of goods division. Our interface explained the algorithm's allocative fairness properties (standards clarity) and outcomes through an input-output matrix (outcome explanation), then allowed people to interactively adjust the algorithmic allocations as a group (outcome control). The findings from our within-subjects laboratory study suggest that standards clarity alone did not increase perceived fairness; outcome explanation had mixed effects, increasing or decreasing perceived fairness and reducing algorithmic accountability; and outcome control universally improved perceived fairness by allowing people to realize the inherent limitations of decisions and redistribute the goods to better fit their contexts, and by bringing human elements into final decision-making.
Algorithms increasingly govern societal functions, impacting multiple stakeholders and social gro... more Algorithms increasingly govern societal functions, impacting multiple stakeholders and social groups. How can we design these algorithms to balance varying interests in a moral, legitimate way? As one answer to this question, we present WeBuildAI, a collective participatory framework that enables people to build algorithmic policy for their communities. The key idea of the framework is to enable stakeholders to construct a computational model that represents their views and to have those models vote on their behalf to create algorithmic policy. As a case study, we applied this framework to a matching algorithm that operates an on-demand food donation transportation service in order to adjudicate equity and efficiency trade-offs. The service's stakeholders-donors, volunteers, recipient organizations, and nonprofit employees-used the framework to design the algorithm through a series of studies in which we researched their experiences. Our findings suggest that the framework successfully enabled participants to build models that they felt confident represented their own beliefs. Participatory algorithm design also improved both procedural fairness and the distributive outcomes of the algorithm, raised participants' algorithmic awareness, and helped identify inconsistencies in human decision-making in the governing organization. Our work demonstrates the feasibility, potential and challenges of community involvement in algorithm design.
Virtual democracy is an approach to automating decisions, by learning models of the preferences o... more Virtual democracy is an approach to automating decisions, by learning models of the preferences of individual people, and, at runtime, aggregat-ing the predicted preferences of those people on the dilemma at hand. One of the key questions is which aggregation method-or voting rule-to use; we offer a novel statistical viewpoint that provides guidance. Specifically, we seek voting rules that are robust to prediction errors, in that their output on people's true preferences is likely to coincide with their output on noisy estimates thereof. We prove that the classic Borda count rule is robust in this sense, whereas any voting rule belonging to the wide family of pairwise-majority consistent rules is not. Our empirical results further support, and more precisely measure , the robustness of Borda count.
Algorithms exert great power in curating online information, yet are often opaque in their operat... more Algorithms exert great power in curating online information, yet are often opaque in their operation, and even existence. Since opaque algorithms sometimes make biased or deceptive decisions, many have called for increased transparency. However, little is known about how users perceive and interact with potentially biased and deceptive opaque algorithms. What factors are associated with these perceptions, and how does adding transparency into algorithmic systems change user attitudes? To address these questions, we conducted two studies: 1) an analysis of 242 users' online discussions about the Yelp review filtering algorithm and 2) an interview study with 15 Yelp users disclosing the algorithm's existence via a tool. We found that users question or defend this algorithm and its opacity depending on their engagement with and personal gain from the algorithm. We also found adding transparency into the algorithm changed users' attitudes towards the algorithm: users reported their intention to either write for the algorithm in future reviews or leave the platform.
People in work-separated families have come to rely heavily on cutting-edge face-to-face communication services. Despite their ease of use and ubiquitous availability, experiences through remote face-to-face communication still fall far short of actually living together. We envision that enabling a remote person to be spatially superposed in one's living space would be a breakthrough that catalyzes pseudo living-together interactivity. We propose HomeMeld, a zero-hassle self-mobile robotic system serving as a co-present avatar to create a persistent illusion of living together for those who are involuntarily living apart. The key challenges are 1) continuous spatial mapping between two heterogeneous floor plans and 2) navigating the robotic avatar to reflect the other's presence in real time under the limited maneuverability of the robot. We devise a notion of functionally equivalent location and orientation to translate a person's presence in one floor plan into another, heterogeneous one. We also develop predictive path warping to seamlessly synchronize the presence of the other. We conducted extensive experiments and deployment studies with real participants.
Algorithms increasingly make managerial decisions that people used to make. Perceptions of algorithms, regardless of the algorithms' actual performance, can significantly influence their adoption, yet we do not fully understand how people perceive decisions made by algorithms as compared with decisions made by humans. To explore perceptions of algorithmic management, we conducted an online experiment using four managerial decisions that required either mechanical or human skills. We manipulated the decision-maker (algorithmic or human) and measured perceived fairness, trust, and emotional response. With the mechanical tasks, algorithmic and human-made decisions were perceived as equally fair and trustworthy and evoked similar emotions; however, human managers' fairness and trustworthiness were attributed to the manager's authority, whereas algorithms' fairness and trustworthiness were attributed to their perceived efficiency and objectivity. Human decisions evoked some positive emotion due to the possibility of social recognition, whereas algorithmic decisions generated a more mixed response: algorithms were seen as helpful tools but also as possible tracking mechanisms. With the human tasks, algorithmic decisions were perceived as less fair and trustworthy and evoked more negative emotion than human decisions. Algorithms' perceived lack of intuition and subjective judgment capabilities contributed to the lower fairness and trustworthiness judgments. Positive emotion from human decisions was attributed to social recognition, while negative emotion from algorithmic decisions was attributed to the dehumanizing experience of being evaluated by machines. This work reveals people's lay concepts of algorithmic versus human decisions in a management context and suggests that task characteristics matter in understanding people's experiences with algorithmic technologies. This article is part of a special theme on Algorithms in Culture; the full list of articles in the theme is available at http://journals.sagepub.com/page/bds/collections/algorithms-in-culture.
We already know algorithms can make our lives and our work more efficient, but how can we go beyond that to create trustworthy, fair, and enjoyable workplaces in which workers can find meaning and continuously learn?
Contributing to the growing attention to algorithms and algorithmic interaction in the CHI and CSCW communities, this workshop aims to deal centrally with the topic of human "participation" and its changing role in data-driven, algorithmic ecosystems. Such a focus includes projects that involve users in the design of algorithms and "human-in-the-loop" systems, broader investigations into the ways in which "participation" is situated in data-driven activities, as well as conceptual concerns about participation's changing contours in contemporary social computing landscapes. This one-day workshop will be led by academic and industry researchers and sets out to achieve three goals: identify cases and ongoing projects on the topic of participation in algorithmic ecosystems; create a tactical toolkit of key challenges and strategies in this space; and set a forward-facing agenda to provoke further attention to the changing role of participation in contemporary sociotechnical systems.
As a city becomes smarter, the integrated networks of engineered cyber and physical elements provide the capability to greatly improve the quality of life of its citizens. In order to leverage these capabilities to benefit all classes of society, we propose a framework that balances the supply and demand of available resources while maximizing the social welfare of people in need by utilizing cyber-physical infrastructure in smart cities. We show through numerical simulations that our proposed framework can reduce the amount of resources wasted by 25% through intelligently assigning the location of services and dynamically pairing resources to different homeless populations.
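As a rough illustration of what "pairing resources to populations while minimizing waste" can mean, here is a minimal greedy sketch. The function name, data shapes, and greedy policy are assumptions for illustration only; the paper's actual optimization framework is not specified in this abstract.

```python
def assign_resources(supplies, demands):
    """Greedily pair supplies with populations to limit waste.

    supplies: list of (source, units) tuples of available resources.
    demands: dict mapping population/location -> units needed.
    Each supply is sent to the population with the largest unmet need;
    units beyond that need are counted as wasted.
    Returns (assignments, wasted) where assignments is a list of
    (source, target, units_delivered) tuples.
    """
    remaining = dict(demands)   # unmet need per population
    assignments = []
    wasted = 0
    for source, units in supplies:
        # Serve whichever population currently has the greatest unmet need.
        target = max(remaining, key=remaining.get)
        delivered = min(units, remaining[target])
        remaining[target] -= delivered
        wasted += units - delivered
        if delivered:
            assignments.append((source, target, delivered))
    return assignments, wasted
```

A real framework would also weigh travel distance, timing, and fairness across populations; this sketch only captures the supply-demand balancing idea.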
How do individuals perceive algorithmic vs. group-made decisions? We investigated people's perceptions of mathematically proven fair division algorithms making social division decisions. In our first, qualitative study, about one third of the participants perceived algorithmic decisions as less than fair (30% for self, 36% for group), often because the algorithms' assumptions about users did not account for multiple concepts of fairness or for social behaviors, and because the process of quantifying preferences through interfaces was prone to error. In our second study, an experiment, algorithmic decisions were perceived to be less fair than discussion-based decisions, depending on participants' interpersonal power and computer programming knowledge. Our work suggests that for algorithmic mediation to be fair, algorithms and their interfaces should account for social and altruistic behaviors that may be difficult to define in mathematical terms.
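As one concrete example of the kind of mathematically proven fair division procedure the study examined, here is Knaster's classic sealed-bid procedure for a single indivisible item. This is a sketch of a standard textbook procedure, not necessarily the algorithm used in the study: each agent bids their monetary valuation, the highest bidder takes the item, and side payments guarantee everyone at least their proportional fair share (their own bid divided by n) plus an equal cut of the surplus.

```python
def knaster_one_item(bids):
    """Knaster's sealed-bid procedure for one indivisible item.

    bids: list where bids[i] is agent i's monetary valuation of the item.
    Returns (winner, payments): the winning agent's index and, for each
    agent, the net cash received (negative means the agent pays). The
    winner keeps the item; payments sum to zero (budget balance), and
    every agent ends with at least bids[i] / n in value.
    """
    n = len(bids)
    winner = max(range(n), key=lambda i: bids[i])
    # Winner's value exceeds their fair share by bids[winner] * (n-1)/n;
    # that excess goes into a pot that compensates the losers.
    pot = bids[winner] * (n - 1) / n
    base = [bids[i] / n if i != winner else 0.0 for i in range(n)]
    surplus = (pot - sum(base)) / n          # leftover pot, split equally
    payments = [base[i] + surplus for i in range(n)]
    payments[winner] = -pot + surplus        # winner's net cash outflow
    return winner, payments
```

With bids of 60 and 40, the first agent wins the item and pays 25, which the second agent receives: both end up strictly above their proportional share (30 and 20 respectively), illustrating why such procedures are provably fair even when people perceive them otherwise.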
Papers by Min Kyung Lee