Human–Computer Interaction
Introduction to this special issue on unifying
human computer interaction and artificial
intelligence
Munmun De Choudhury, Min Kyung Lee, Haiyi Zhu & David A. Shamma
To cite this article: Munmun De Choudhury, Min Kyung Lee, Haiyi Zhu & David A. Shamma (2020) Introduction to this special issue on unifying human computer interaction and artificial intelligence, Human–Computer Interaction, 35:5-6, 355-361, DOI: 10.1080/07370024.2020.1744146
Published online: 31 May 2020.
Introduction to this special issue on unifying human computer
interaction and artificial intelligence
Munmun De Choudhury (a), Min Kyung Lee (b), Haiyi Zhu (c), and David A. Shamma (d)
(a) Georgia Institute of Technology, Atlanta, USA; (b) University of Texas at Austin, USA; (c) Carnegie Mellon University, Pittsburgh, USA; (d) FX Palo Alto Laboratory, USA
KEYWORDS Intelligent UI; HCI; AI; UI before Intelligent U
ARTICLE HISTORY Received 14 March 2020; Accepted 15 March 2020
McCarthy (1998) defined Artificial Intelligence (AI) as both “the science and engineering of intelligent machines, especially computer programs” and the “computational part of the ability to
achieve goals in the world.” Today, AI is increasingly deployed across many domains of direct
societal relevance, such as transportation, retail, criminal justice, finance, and health. But these very
domains that AI is aiming to revolutionize may also be where human implications are the most
momentous. The potential negative effects of AI on society, whether amplifying human biases or the
perils of automation, cannot be ignored, and as a result, such topics are increasingly discussed in
scholarly and popular press contexts. As the New York Times notes: “… if we want [AI] to play
a positive role in tomorrow’s world, it must be guided by human concerns” (Li, 2018).
The relationship between technology and humans is the direct focus of human–computer
interaction (HCI) research. However, conversations about the relationship between HCI and AI
are not new. For the past 20 years, the HCI community has proposed principles, guidelines, and
strategies for designing and interacting with user interfaces that employ or are powered by AI in
a general sense (Norman, 1994; Höök, 2000). For example, an early discussion by Shneiderman
and Maes (1997) challenged whether AI should be a primary metaphor in the human interface to
computers: Should interactions between a human and a computer mimic human–human interaction? Or are there practical or even philosophical objections to assigning human attributes and
abilities to computers? Putting aside these fundamental questions about what human-AI interactions
might look like, Norman (2014) and Höök (2000) adopt a more practical approach to designing AI
systems. They recommend building in safeguards like verification steps or regulating users’ agency so
as to prevent unwanted behaviors or undesirable consequences arising from these systems. More
broadly, other HCI researchers have contrasted the differences in approaches and philosophies
adopted by HCI and AI researchers, particularly around how we understand people and create
technologies for their benefit (Winograd, 2006). Grudin (2009) also described alternating cycles in
which one approach flourished, while the other suffered a “winter,” characterized by a period of
reduced funding, accompanied by low academic and popular interest. In a similar vein,
Winograd (2006) contrasted the strengths and limitations of each, as well as the relevance of
rationalistic versus design approaches offered by AI and HCI, respectively, when applied to
“messy” human problems. Winograd’s overall conclusion was rather surprising: he conjectured
that the two fields are not so distinct. He concluded that their philosophies are both rooted in
common attempts to push the computer metaphor onto all of reality, as evidenced in most
twentieth-century science and technology research. Formative and notable work by Horvitz (1999)
also attempted to reconcile many of the seeming differences between HCI and AI by highlighting key
challenges and opportunities for building “mixed-initiative user interfaces.” These are interfaces that
enable users and AI to collaborate efficiently. Horvitz states principles for balancing autonomous
actions with direct manipulation constructs, gauging ideal actions to pursue in light of costs,
benefits, and uncertainties.

CONTACT David A. Shamma, aymans@acm.org, FX Palo Alto Laboratory
© 2020 Taylor & Francis Group, LLC
Despite early efforts to bridge this divide, we have yet to witness a convincing marriage between
HCI and AI. Efforts like Stanford’s Human-Centered Artificial Intelligence (HAI) and MIT’s College
of Computing that explicitly focus on human-centric AI research are commendable. But we, like
other scholars (Ernala et al., 2019; Fox et al., 2017; Green, 2018; Inkpen et al., 2019; Tufekci, 2015),
posit that simply introducing human guidance or human sensitivity into AI is not sufficient to
realize AI’s full potential and prevent its unintended consequences. While most AI-based approaches
offer promising methods for tackling real-world problems including those of concern to HCI
researchers, many of the technologies they enable have been developed in isolation, without appropriate involvement of the human stakeholders who use these systems and who are the most affected
by them (Lee et al., 2019; Woodruff et al., 2018). Furthermore, many AI systems generate automated
inferences and function under uncertainty (Chancellor et al., 2019) – scenarios where false positives
or negatives can have severe implications for humans using them, leading to unpredictable, disruptive, hostile, and even dangerous system behaviors.
This renewed interest is also warranted because the landscape of both AI and HCI research has
changed a great deal since those discussions 20 years ago. The field of AI has developed very rapidly
in recent years, with exponential gains in data sizes (e.g., ImageNet (Deng et al., 2009)) and compute
power (e.g., Graphical Processing Units or GPUs (Cui et al., 2016)). And algorithms and machine
learning methods have also evolved significantly (Abadi et al., 2015; Paszke et al., 2019). This has
caused a wide diversity in the design, functioning, and complexities of AI systems. Naturally,
a higher propensity for failures has resulted. According to media reports, these range from embarrassing (e.g., autocompletion errors (Amershi et al., 2019)), through outright offensive (e.g.,
Microsoft’s “racist” chatbot Tay (Wolf et al., 2017)) to fatal (e.g., self-driving cars failing to “see”
pedestrians who are people of color (Hawkins, 2019)). To summarize, the vision of engaging,
adaptive, and useful mixed-initiative interfaces (Horvitz, 1999) that work for the benefit of their
users (Winograd, 2006) is yet to be realized.
Existing AI systems already present multiple challenges and opportunities for the HCI
community. These include understanding sources of bias in AI systems (Scheuerman et al., 2019),
conceptualizing how to adequately represent different users’ perspectives in building new AI systems
(Lee et al., 2019; Woodruff et al., 2018), developing methods for incorporating and balancing
stakeholder values in algorithm design (Zhu et al., 2018), designing transparent interfaces that
communicate how AI systems work to stakeholders (Cheng et al., 2019), or finding ways to address
people’s fleeting trust in opaque online recommendation and content curation algorithms such as
Facebook’s News Feed (Eslami et al., 2015). At the same time, AI researchers have recently made
huge investments in the topics of fairness, accountability, and transparency of AI systems (e.g., via the ACM FAccT conference, https://facctconference.org), while
acknowledging the value of identifying human-centered principles that can guide how AI systems
are built, evaluated, and deployed.
Complementing these emergent efforts, this special issue’s central thesis is that human involvement in AI system design, development, and evaluation is critical to ensure that AI-based systems
are practical, with their outputs being meaningful and relevant to those who use them. Moreover,
human activities and behaviors are deeply contextual, complex, nuanced, and laden with subjectivity.
These characteristics may cause current AI-based approaches to fail, as they cannot adequately be
addressed by simply adding more data. As a result, to ensure the success of future AI approaches, we
must incorporate new complementary human-centered insights. These include stakeholders’
demands, beliefs, values, expectations, and preferences – attributes that constitute a focal point of
HCI research – and which need to be a part of the development of these new AI-based technologies.
The same issues also give rise to pressing new questions. For instance, how can existing HCI
methodology incorporate AI methods and data to develop intelligent systems that improve the
human condition? What are the best ways to bridge the gap between machines and humans when
designing technologies? How can AI enhance the human experience in interactive technologies; and
further, could it help define new styles of interaction? How will conventional evaluation techniques
in HCI need to be modified in contexts where AI is a core technology component? What existing
research methods might be most compatible with AI approaches? And, what will be involved in
training the next generation of HCI researchers who want to work at the intersection with AI? Of
course, the concepts of “design,” “interaction,” and “evaluation” continue to be interpreted by
different HCI researchers and practitioners in multiple related but non-identical ways.
Nonetheless, how unifying AI and HCI research will influence these interpretations remains an
open but pertinent question.
1. Articles in this special issue
This special issue is motivated by the premise that the currently disjunct philosophies and research
styles of the HCI and AI fields, along with the present academic and societal context, demand
renewed attention to unifying HCI and AI. We hope the articles featured in this issue extend prior
attempts to bridge the two fields, adding to the recent traction AI has achieved in tackling
challenging human problems. In doing so, we seek to engage both HCI and AI researchers working
on theoretical, empirical, systems, or design research that draws upon both perspectives. We hope
the original research presented in this special issue can initiate a dialog that bridges the gap and helps integrate this emerging space.
The HCI community recognizes that designers struggle to work with AI and Machine Learning
(ML) techniques (Dove et al., 2017). Despite attempts to integrate HCI and AI, these designers
experience challenges in incorporating ML into common user experience (UX) design paradigms.
These challenges can arise because ML models are constantly evolving, and UX designers may be
unaware of, or uncomfortable with, the restrictions of the environment, laws, and regulations
guiding ML operations in real-world scenarios (Amershi et al., 2019). Consequently, Lingyun Sun,
Zhibin Zhou, Yuyang Zhang, Xuanhui Liu, and Qing Gong (Sun, Zhou, Zhang, Liu & Gong, this
issue) argue that conceptual design methods for empowering UX with ML need to be tailored to
these unique attributes of ML. Accordingly, they adapt concepts from design thinking (e.g., service
design, material-driven design) to integrate existing research and guidelines for ML-human interaction. Their goal is to help designers understand the changing and complex nature of ML and to
combine information about ML, users, and specific real-world contexts in a holistic way. Targeting
novice designers, the authors develop ML Lifecycle Canvas, a conceptual design tool derived from
Material Lifecycle Thinking (MLT) that regards ML as a constantly growing material with an
iterative “lifecycle.” The tool creates a visual schematic incorporating the perspectives of ML,
users, and the scenario, detailing the ML lifecycle from data annotation to ML model update. On
comparing this tool with a typical conceptual design tool in workshops involving 32 participants, the
authors find that exposure to the ML Lifecycle Canvas both enhances designers' understanding of ML and helps them tackle ML-related UX issues.
The challenges faced by designers in building AI systems also lead us to the following question:
How can we develop new methods, tools, and processes to help designers better innovate with AI?
Jing Liao, Preben Hansen, and Chunlei Chai (Liao, Hansen & Chai, this issue) develop a framework
describing explicit roles of AI in design ideation as (i) representation creation, (ii) empathy triggers,
and (iii) engagement. They evaluate their framework in an empirical study of 30 designers with
concurrent Think-Aloud protocols and behavior analysis. The study reveals opportunities for AI to
support human creativity and decision-making in the early design stages.
Complementarily, AI systems have rarely considered the underlying principles of end-user
involvement that are core to HCI system design. To address this, the central premise of the work of
Sachin Grover, Sailik Sengupta, Tathagata Chakraborti, Aditya Mishra, and Subbarao Kambhampati
(Grover, Sengupta, Chakraborti, Mishra and Kambhampati, this issue) is that existing AI planning
systems rarely provide decision-support to end user stakeholders, and instead have largely focused
on end-to-end plan generation, with little human involvement, beyond making such systems merely
“human-aware.” In their article, the authors investigate whether an automated planner can support
the human’s decision-making process, despite not having access to the complete domain and
preference models, while the humans control the process. The work makes important contributions
to naturalistic decision-making scenarios such as disaster response where the cognitive overload of
the human can negatively affect the quality of decision-making.
A different perspective is that the specification and development of AI systems are held back by
an inability to view the problem in a human-centered manner. Eric Baumer, Drew Siedel, McLean
Donnell, Jiayun Zhong, Patricia Sittikul, and Micki McGee (Baumer, Siedel, Donnell, Zhong, Sittikul,
& McGee, this issue) develop a design process to support topic modeling and visualization for
interpretive text analysis. The tool’s primary aim is not to offer a definitive corpus-topic analysis, but
rather to support the iterative processes used by social scientific and humanist researchers to develop
interpretations. The paper also suggests a novel application of machine learning techniques to
support interaction design through visualization.
A different way to involve humans in AI is suggested in Gonzalo Ramos, Christopher Meek,
Patrice Simard, Jina Suh, and Soroush Ghorashi (Ramos, Meek, Simard, Suh, & Ghorashi, this issue).
This paper investigates how interactive machine teaching (IMT) can leverage unique human
capabilities, allowing non-ML experts to build ML models. The authors built an IMT system and
used it as a design probe to highlight further opportunities and challenges with such systems. This
article provides a clear example of how we can synergistically combine AI and human capabilities, so
that everyday end users can build intelligent systems for their own contexts.
This special issue also covers research involving specific user populations for whom both the HCI
and AI communities have identified direct opportunities for research as well as impact. AI-powered
systems, such as intelligent virtual agents (IVAs), are increasingly used commercially in essential
domains such as health care. Older adults are an exemplar of a potential user group to benefit from
these technologies. However, there is a limited understanding of how to address the socio-technical
challenges older adults might face in the development of such AI-powered systems. Jaisie Sin and
Cosmin Munteanu (Sin & Munteanu, this issue) studied how older adults use and perceive an IVA.
Specifically, they uncovered socio-technical issues relating to each of the six stages of the information
search process, which helps to better contextualize older users’ interaction with IVA interfaces. We also
see AI entering the field of employee management as addressed in the article by Lionel Robert, Casey
Pierce, Liz Marquis, Sangmi Kim, and Rasha Alahmad (Robert, Pierce, Marquis, Kim, & Alahmad, this
issue). Optimizing AI for work organizations can lead to unfairness to workers, resulting in burnout and worker turnover. The article addresses distributive, procedural, and interactional fairness and proposes a design agenda for organizational scenarios. These designs must work effectively inside workplace compliance structures (laws, regulations, and policies) with audit-friendly AI, laying design groundwork for future AI approaches that support fairness in the workplace.
2. Issues for future research
We believe this special issue is a first but formative step in revisiting the complex, evolving relationship
between HCI and AI. Although we have identified and begun to address some emerging issues, the
articles published here do not constitute an exhaustive representation of work happening at the
intersection of these fields and there are future challenges on the roadmap that the communities
should be investigating. Here, we identify open issues in the hope of inspiring future research.
2.1. A socio-technical, instead of a purely technical mindset in AI system design
Many AI systems adopt a technical approach to solving real-world problems. Given the ethical
challenges that have been rampant in recent media coverage, we instead advocate a “socio-technical”
approach, where both the social context and technical aspects of the problem are combined. This
includes designs that carefully consider factors like culture, people, and socio-economic situations
and their effect on society. As a result, in order to successfully bridge HCI and AI, we must go beyond the technological-determinism perspective, which makes its claims based exclusively on technological power and hype, and instead enhance our understanding of why the same AI technology may have such different results in similar contexts.
2.2. Participatory AI system design – Stakeholders are key
From its inception in the mid-1950s, AI has sought to simulate capacities of human intelligence and to ensure that people can effectively use machines and their applications. However, recent efforts in employing AI to solve real-world problems have not leveraged participatory research designs. Participatory techniques, a method core to the HCI field, can be a critical way
to overcome the weaknesses of traditional AI approaches. Such techniques can enable developers and
users of AI systems to co-create technologies and tools that are relevant to end users’ personal and
social contexts, while being motivating and useful (Lee et al., 2019; Zhu et al., 2018).
2.3. Lowering disciplinary barriers to collaboration
The ultimate success of AI depends on how it actually addresses real-world issues, after factoring in
the scenarios’ complexities, nuances, and implications. This needs deep, substantive collaborations
across disciplines that include not only researchers who identify with the AI field but also domain
experts, and even those who are critics of AI. Perhaps AI and HCI researchers might benefit from
adopting team science approaches (Börner et al., 2010). Central to this mission is also evaluating how cooperation among sciences, technologies, and domains might either promote or hinder progress. From there, we can devise better approaches to team management by identifying the most efficient methodologies in research, training, and communication at a larger scale, allowing teams to bridge the long-standing disciplinary barriers between HCI and AI. This will enable us to improve team dynamics, allowing collaborative groups to reach the level of progress and innovation achieved by individual researchers, whether in the HCI or AI field. This goes beyond simple
checklists for AI system or design guidelines, instead requiring committed collaboration to achieve
the authentic unification of research across the two fields.
Notes on contributors
Munmun De Choudhury (munmund@gatech.edu, http://www.munmund.net/) is an Assistant Professor of Interactive
Computing at Georgia Tech where she directs the Social Dynamics and Wellbeing Lab. Dr. De Choudhury is best
known for defining a new line of research focusing on assessing and improving mental health from online social
interactions.
Min Kyung Lee (minkyung.lee@austin.utexas.edu, http://minlee.net) is an Assistant Professor in the School of
Information at the University of Texas at Austin. Dr. Lee has done some of the first studies that examine AI’s impact
on workers and propose participatory methods for stakeholders to build fair AI for their own communities.
Haiyi Zhu (haiyiz@cs.cmu.edu, https://www.haiyizhu.com) is the Daniel P. Siewiorek Assistant Professor of Human-Computer Interaction at Carnegie Mellon University. Dr. Zhu is an HCI researcher with an interest in building
trustworthy human-centered AI systems to support critical decision-making tasks.
David A. Shamma (aymans@acm.org, https://shamur.ai) is a Senior Research Scientist at FX Palo Alto Laboratory and
Distinguished Member of the Association for Computing Machinery with a research interest in AI systems for aiding
human editorial tasks, enriching collaboration, and enhancing creativity using computer vision and social computing.
ORCID
Munmun De Choudhury: http://orcid.org/0000-0002-8939-264X
Min Kyung Lee: http://orcid.org/0000-0002-2696-6546
Haiyi Zhu: http://orcid.org/0000-0001-7271-9100
David A. Shamma: http://orcid.org/0000-0003-2399-9374
References
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., … Zheng, X. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. https://www.tensorflow.org/ (software available from tensorflow.org)
Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., & Teevan, J. (2019). Guidelines for human-AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland (pp. 1–13).
Börner, K., Contractor, N., Falk-Krzesinski, H.J., Fiore, S.M., Hall, K.L., Keyton, J., Spring, B., Stokols, D., Trochim, W.
and Uzzi, B. (2010). A multi-level systems perspective for the science of team science. Science Translational
Medicine, 2 (49), 49cm24. https://doi.org/10.1126/scitranslmed.3001399
Chancellor, S., Baumer, E. P., & De Choudhury, M. (2019). Who is the “human” in human-centered machine learning: The case of predicting mental health from social media. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–32.
Cheng, H.-F., Wang, R., Zhang, Z., O’Connell, F., Gray, T., Harper, F. M., & Zhu, H. (2019). Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland (pp. 1–12).
Cui, H., Zhang, H., Ganger, G. R., Gibbons, P. B., & Xing, E. P. (2016). GeePS: Scalable deep learning on distributed GPUs with a GPU-specialized parameter server. In Proceedings of the Eleventh European Conference on Computer Systems, London, England (pp. 1–16).
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (pp. 248–255).
Dove, G., Halskov, K., Forlizzi, J., & Zimmerman, J. (2017). UX design innovation: Challenges for working with machine learning as a design material. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, Colorado, USA (pp. 278–288).
Ernala, S. K., Birnbaum, M. L., Candan, K. A., Rizvi, A. F., Sterling, W. A., Kane, J. M., & De Choudhury, M. (2019). Methodological gaps in predicting mental health states from social media: Triangulating diagnostic signals. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland (pp. 1–16).
Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K., & Sandvig, C. (2015). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in news feeds. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, South Korea (pp. 153–162).
Fox, S., Dimond, J., Irani, L., Hirsch, T., Muller, M., & Bardzell, S. (2017). Social justice and design: Power and oppression in collaborative systems. In Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, Portland, Oregon (pp. 117–122).
Green, B. (2018). Data science as political action: Grounding data science in a politics of justice. arXiv Preprint,
arXiv:1811.03435. https://arxiv.org/abs/1811.03435
Grudin, J. (2009). AI and HCI: Two fields divided by a common focus. AI Magazine, 30(4), 48. https://doi.org/10.1609/aimag.v30i4.2271
Hawkins, A. (2019). Serious safety lapses led to Uber’s fatal self-driving crash, new documents suggest. The Verge. https://www.theverge.com/2019/11/6/20951385/uber-self-driving-crash-death-reason-ntsb-dcouments
Höök, K. (2000). Steps to take before intelligent user interfaces become real. Interacting with Computers, 12(4), 409–426. https://doi.org/10.1016/S0953-5438(99)00006-5
Horvitz, E. (1999). Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Pittsburgh, Pennsylvania (pp. 159–166).
Inkpen, K., Chancellor, S., De Choudhury, M., Veale, M., & Baumer, E. P. (2019). Where is the human? Bridging the gap between AI and HCI. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland (pp. 1–9).
Lee, M. K., Kusbit, D., Kahng, A., Kim, J. T., Yuan, X., Chan, A., See, D., Noothigattu, R., Lee, S., Psomas, A., & Procaccia, A. D. (2019). WeBuildAI: Participatory framework for algorithmic governance. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–35.
Li, F.-F. (2018). How to make A.I. that’s good for people. The New York Times.
McCarthy, J. (1998). What is artificial intelligence? (Tech. Rep.). Stanford University.
Norman, D. (2014). Things that make us smart: Defending human attributes in the age of the machine. Diversion
Books.
Norman, D. A. (1994). How might people interact with agents. Communications of the ACM, 37(7), 68–71. https://doi.
org/10.1145/176789.176796
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., & Desmaison, A. (2019). PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 32 (pp. 8024–8035). Curran Associates, Inc. http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf
Scheuerman, M. K., Paul, J. M., & Brubaker, J. R. (2019). How computers see gender: An evaluation of gender classification in commercial facial analysis services. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–33.
Shneiderman, B., & Maes, P. (1997). Direct manipulation vs. interface agents. Interactions, 4(6), 42–61. https://doi.org/10.1145/267505.267514
Tufekci, Z. (2015). Algorithms in our midst: Information, power and choice when software is everywhere. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, Canada (p. 1918).
Winograd, T. (2006). Shifting viewpoints: Artificial intelligence and human–computer interaction. Artificial
Intelligence, 170(18), 1256–1258. https://doi.org/10.1016/j.artint.2006.10.011
Wolf, M. J., Miller, K., & Grodzinsky, F. S. (2017). Why we should have seen that coming: Comments on Microsoft’s Tay “experiment,” and wider implications. ACM SIGCAS Computers and Society, 47(3), 54–64. https://doi.org/10.1145/3144592
Woodruff, A., Fox, S. E., Rousso-Schindler, S., & Warshaw, J. (2018). A qualitative exploration of perceptions of algorithmic fairness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montréal, Canada (pp. 1–14).
Zhu, H., Yu, B., Halfaker, A., & Terveen, L. (2018). Value-sensitive algorithm design: Method, case study, and lessons. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–23.