Abstract
Predicting compliance with AI recommendations and knowing when to intervene are critical facets of human-AI teaming. AIs are typically deployed in settings where their ability to evaluate decision variables far exceeds that of their human counterparts. However, even though AIs excel at weighing multiple issues and computing near-optimal solutions with speed and accuracy beyond that of any human, they still make mistakes, so perfect compliance may be undesirable. Just as individuals must know when to follow the advice of other people, they must know when to adopt the recommendations of their AI. Well-calibrated trust is thought to be a fundamental component of this knowledge. We compare the ability of a common trust inventory and of a behavioral measure of trust to predict compliance and success in a reconnaissance mission. The experimental results suggest that the behavioral measure is a better predictor of overall mission compliance and success. We discuss how this measure might be used in compliance interventions, along with related open questions.
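The paper itself presents no code, but the comparison the abstract describes can be made concrete. As a purely illustrative sketch, one might fit a separate predictive model for each trust measure and compare how well each predicts trial-level compliance on held-out data. All variable names and data below are hypothetical and simulated; they are not the study's actual variables or results.

```python
# Hypothetical sketch: comparing a self-report trust inventory score and a
# behavioral trust measure as predictors of compliance with AI advice.
# All data are simulated; the actual study's variables and analysis differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # simulated participant-trials

# Simulated predictors: a survey-based trust score and a behavioral
# measure (e.g., the rate of following past recommendations).
survey_trust = rng.normal(0.0, 1.0, n)
behavioral_trust = rng.normal(0.0, 1.0, n)

# Simulated outcome: compliance driven more strongly (by construction)
# by the behavioral measure than by the survey score.
logits = 0.3 * survey_trust + 1.2 * behavioral_trust
complied = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

for name, x in [("survey", survey_trust), ("behavioral", behavioral_trust)]:
    x_tr, x_te, y_tr, y_te = train_test_split(
        x.reshape(-1, 1), complied, random_state=0
    )
    model = LogisticRegression().fit(x_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(x_te)[:, 1])
    print(f"{name} trust measure: held-out AUC = {auc:.2f}")
```

In this toy setup the behavioral measure yields the higher AUC by design; the point is only to illustrate the shape of a measure-versus-measure predictive comparison, not the paper's actual method.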
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Gurney, N., Pynadath, D.V., Wang, N. (2022). Measuring and Predicting Human Trust in Recommendations from an AI Teammate. In: Degen, H., Ntoa, S. (eds.) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science, vol. 13336. Springer, Cham. https://doi.org/10.1007/978-3-031-05643-7_2
DOI: https://doi.org/10.1007/978-3-031-05643-7_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-05642-0
Online ISBN: 978-3-031-05643-7
eBook Packages: Computer Science; Computer Science (R0)