Should My Agent Lie for Me? Public Moral Perspectives on Deceptive AI

  • Conference paper
Autonomous Agents and Multiagent Systems. Best and Visionary Papers (AAMAS 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14456)

Included in the following conference series: International Conference on Autonomous Agents and Multiagent Systems (AAMAS)

Abstract

Artificial Intelligence (AI) advancements might deliver autonomous agents capable of human-like deception. Such capabilities have mostly been negatively perceived in HCI design, as they can have serious ethical implications. However, AI deception might be beneficial in some situations. Previous research has shown that machines designed with some level of dishonesty can elicit increased cooperation with humans. This raises several questions: Are there future-of-work situations where deception by machines can be an acceptable behaviour? Is this different from human deceptive behaviour? How does AI deception influence human trust and the adoption of deceptive machines? In this paper, we describe the results of a user study published in the proceedings of AAMAS 2023. The study answered these questions by considering different contexts and job roles. Here, we contextualise the results of the study by proposing ways forward to achieve a framework for developing Deceptive AI responsibly. We provide insights and lessons that will be crucial in understanding what factors shape the social attitudes and adoption of AI systems that may be required to exhibit dishonest behaviour as part of their jobs.



Notes

  1. https://osf.io/pyjgb/?view_only=33fb5965b0e94b0da70c05cfce4ac8ab.

  2. A detailed account of governing common goods can be found in the works of [56].

  3. For example, cyberspace. However, the Infosphere is not limited to online environments [30, 31].

  4. Additionally, [73] present a taxonomy of deceptive robot behaviour considering who is deceived (humans or machines), who benefits, and whether the deceiver intended to deceive. While the discussion is restricted to human-robot interaction (embodied agents), it still emphasises the benefit of smoother human-AI interactions.

  5. One example of a co-performance would be to use AI to perform magic tricks [75].

  6. Philip Staines edited Hamblin’s manuscript following his death; the result, published three decades later, is the book Linguistics and the Parts of the Mind: Or how to Build a Machine Worth Talking to [78].

References

  1. Adar, E., Tan, D.S., Teevan, J.: Benevolent deception in human computer interaction. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1863–1872 (2013)


  2. Awad, E., et al.: The moral machine experiment. Nature 563(7729), 59–64 (2018)


  3. Awad, E., Dsouza, S., Shariff, A., Rahwan, I., Bonnefon, J.F.: Universals and variations in moral decisions made in 42 countries by 70,000 participants. Proc. Natl. Acad. Sci. 117(5), 2332–2337 (2020)


  4. Awad, E., et al.: Computational ethics. Trends Cogn. Sci. 26(5), 388–405 (2022)


  5. Berndt, T.J., Berndt, E.G.: Children’s use of motives and intentionality in person perception and moral judgment. Child Dev. 904–912 (1975)


  6. Borenstein, J., Arkin, R.: Robotic nudges: the ethics of engineering a more socially just human being. Sci. Eng. Ethics 22(1), 31–46 (2016)


  7. Brammer, S., Williams, G., Zinkin, J.: Religion and attitudes to corporate social responsibility in a large cross-country sample. J. Bus. Ethics 71(3), 229–243 (2007)


  8. Bryan, C.J., Tipton, E., Yeager, D.S.: Behavioural science is unlikely to change the world without a heterogeneity revolution. Nat. Hum. Behav. 5(8), 980–989 (2021)


  9. Camden, C., Motley, M.T., Wilson, A.: White lies in interpersonal communication: a taxonomy and preliminary investigation of social motivations. West. J. Speech Commun. 48(4), 309–325 (1984)


  10. Castelfranchi, C.: Artificial liars: why computers will (necessarily) deceive us and each other. Ethics Inf. Technol. 2(2), 113–119 (2000)


  11. Castelfranchi, C., Tan, Y.H.: Trust and Deception in Virtual Societies. Springer, Dordrecht (2001). https://doi.org/10.1007/978-94-017-3614-5


  12. Castelfranchi, C., Tan, Y.H.: The role of trust and deception in virtual societies. Int. J. Electron. Commer. 6(3), 55–70 (2002)


  13. Chakraborti, T., Kambhampati, S.: (When) can AI bots lie? In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 53–59 (2019)


  14. Chatila, R., Havens, J.C.: The IEEE global initiative on ethics of autonomous and intelligent systems. In: Aldinhas Ferreira, M., Silva Sequeira, J., Singh Virk, G., Tokhi, M., Kadar, E. (eds.) Robotics and Well-Being, pp. 11–16. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-12524-0_2


  15. Clark, M.H.: Cognitive illusions and the lying machine: a blueprint for sophistic mendacity. Ph.D. thesis, Rensselaer Polytechnic Institute (2010)


  16. Coeckelbergh, M.: Are emotional robots deceptive? IEEE Trans. Affect. Comput. 3(4), 388–393 (2011)


  17. Coeckelbergh, M.: How to describe and evaluate “deception” phenomena: recasting the metaphysics, ethics, and politics of ICTs in terms of magic and performance and taking a relational and narrative turn. Ethics Inf. Technol. 20(2), 71–85 (2018)


  18. Cohen, P.R., Levesque, H.J.: Speech acts and rationality. In: 23rd Annual Meeting of the Association for Computational Linguistics, pp. 49–60 (1985)


  19. Cohen, P.R., Perrault, C.R.: Elements of a plan-based theory of speech acts. In: Readings in Artificial Intelligence, pp. 478–495. Elsevier (1981)


  20. Conger, J.A.: The necessary art of persuasion. Harv. Bus. Rev. 76, 84–97 (1998)


  21. Cushman, F.: Crime and punishment: distinguishing the roles of causal and intentional analyses in moral judgment. Cognition 108(2), 353–380 (2008)


  22. Danaher, J.: Robot betrayal: a guide to the ethics of robotic deception. Ethics Inf. Technol. 22(2), 117–128 (2020)


  23. De Rosis, F., Carofiglio, V., Grassano, G., Castelfranchi, C.: Can computers deliberately deceive? A simulation tool and its application to Turing’s imitation game. Comput. Intell. 19(3), 235–263 (2003)


  24. Dignum, V.: Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-30371-6


  25. Dragan, A., Holladay, R., Srinivasa, S.: Deceptive robot motion: synthesis, analysis and experiments. Auton. Robot. 39(3), 331–345 (2015)


  26. Dunbar, N.E., Gangi, K., Coveleski, S., Adams, A., Bernhold, Q., Giles, H.: When is it acceptable to lie? Interpersonal and intergroup perspectives on deception. Commun. Stud. 67(2), 129–146 (2016)


  27. Evans, O., et al.: Truthful AI: developing and governing AI that does not lie. arXiv preprint arXiv:2110.06674 (2021)

  28. Falcone, R., Castelfranchi, C.: Social trust: a cognitive approach. In: Castelfranchi, C., Tan, Y.H. (eds.) Trust and Deception in Virtual Societies, pp. 55–90. Springer, Dordrecht (2001). https://doi.org/10.1007/978-94-017-3614-5_3


  29. Falcone, R., Singh, M., Tan, Y.H.: Trust in Cyber-Societies: Integrating the Human and Artificial Perspectives, vol. 2246. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-45547-7


  30. Floridi, L.: Philosophy and Computing: An Introduction. Psychology Press (1999)


  31. Floridi, L.: Ethics in the infosphere. Philosophers’ Mag. 16, 18–19 (2001)


  32. Fogg, B.J.: Persuasive computers: perspectives and research directions. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 225–232 (1998)


  33. Franklin, M., Ashton, H., Gorman, R., Armstrong, S.: Missing mechanisms of manipulation in the EU AI act. In: The International FLAIRS Conference Proceedings, vol. 35 (2022)


  34. Greco, G.M., Floridi, L.: The tragedy of the digital commons. Ethics Inf. Technol. 6(2), 73–81 (2004)


  35. Habermas, J.: The Theory of Communicative Action: Lifeworld and Systems, a Critique of Functionalist Reason, vol. 2. Wiley, Hoboken (2015)


  36. Häggström, O.: Strategies for an unfriendly oracle AI with reset button. In: Artificial Intelligence Safety and Security, pp. 207–215. Chapman and Hall/CRC (2018)


  37. Hamblin, C.L.: Mathematical models of dialogue 1. Theoria 37(2), 130–155 (1971)


  38. Han, L., Siau, K.: Impact of socioeconomic status on trust in artificial intelligence (2020). AMCIS 2020 TREOs. 90. https://aisel.aisnet.org/treos_amcis2020/90

  39. Hardin, G.: The tragedy of the commons. Science 162(3859), 1243–1248 (1968)


  40. High-Level Expert Group on Artificial Intelligence (AI HLEG): Ethics guidelines for trustworthy AI. European Commission, Brussels (2019)


  41. Isaac, A., Bridewell, W.: White Lies on Silver Tongues: Why Robots Need to Deceive (and How). Oxford University Press, Oxford (2017)


  42. Ishowo-Oloko, F., Bonnefon, J.F., Soroye, Z., Crandall, J., Rahwan, I., Rahwan, T.: Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nat. Mach. Intell. 1–5 (2019)


  43. Kampik, T., Nieves, J.C., Lindgren, H.: Coercion and deception in persuasive technologies. In: 20th International Trust Workshop (co-located with AAMAS/IJCAI/ECAI/ICML 2018), Stockholm, Sweden, 14 July 2018, pp. 38–49. CEUR-WS (2018)


  44. Kant, I.: On a supposed right to lie from philanthropy (1797). In: Practical Philosophy (trans. Gregor, M.J.). Cambridge University Press, Cambridge (1996)


  45. Leslie, A.M., Knobe, J., Cohen, A.: Acting intentionally and the side-effect effect: theory of mind and moral judgment. Psychol. Sci. 17(5), 421–427 (2006)


  46. Levine, E.E., Schweitzer, M.E.: Prosocial lies: when deception breeds trust. Organ. Behav. Hum. Decis. Process. 126, 88–106 (2015)


  47. Levine, T.R.: Encyclopedia of Deception. Sage Publications, Thousand Oaks (2014)


  48. Levine, T.R.: Duped: Truth-Default Theory and the Social Science of Lying and Deception. University Alabama Press, Tuscaloosa (2019)


  49. Lewis, P.R., Marsh, S.: What is it like to trust a rock? A functionalist perspective on trust and trustworthiness in artificial intelligence. Cogn. Syst. Res. 72, 33–49 (2021)


  50. Lippard, P.V.: “Ask me no questions, I’ll tell you no lies”: situational exigencies for interpersonal deception. West. J. Commun. (includes Commun. Rep.) 52(1), 91–103 (1988)


  51. Masters, P., Smith, W., Sonenberg, L., Kirley, M.: Characterising deception in AI: a survey. In: Sarkadi, S., Wright, B., Masters, P., McBurney, P. (eds.) Deceptive AI. CCIS, vol. 1296, pp. 3–16. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-91779-1_1


  52. Mauldin, M.L.: Chatterbots, TinyMUDs, and the Turing test: entering the Loebner Prize competition. In: AAAI, vol. 94, pp. 16–21 (1994)


  53. Mell, J., Lucas, G., Mozgai, S., Gratch, J.: The effects of experience on deception in human-agent negotiation. J. Artif. Intell. Res. 68, 633–660 (2020)


  54. Miller, M.D., Levine, T.R.: Persuasion. In: An Integrated Approach to Communication Theory and Research, pp. 261–276. Routledge (2019)


  55. Natale, S., et al.: Deceitful Media: Artificial Intelligence and Social Life After the Turing Test. Oxford University Press, Oxford (2021)


  56. Ostrom, E.: Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, Cambridge (1990)


  57. Panisson, A.R., Sarkadi, S., McBurney, P., Parsons, S., Bordini, R.H.: Lies, bullshit, and deception in agent-oriented programming languages. In: Proceedings of the 20th International TRUST Workshop @ IJCAI/AAMAS/ECAI/ICML, pp. 50–61. CEUR Workshop Proceedings, Stockholm, Sweden (2018)


  58. Panisson, A.R., Sarkadi, S., McBurney, P., Parsons, S., Bordini, R.H.: On the formal semantics of theory of mind in agent communication. In: Lujak, M. (ed.) AT 2018. LNCS, vol. 11327, pp. 18–32. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-17294-7_2


  59. Rawls, J.: A Theory of Justice. Harvard University Press, Cambridge (2009)


  60. Resnick, D.: The Ethics of Science. Routledge, London (1998)


  61. Sætra, H.S.: Social robot deception and the culture of trust. Paladyn J. Behav. Robot. 12(1), 276–286 (2021)


  62. Sarkadi, S.: Deception. Ph.D. thesis, King’s College London (2021)


  63. Sarkadi, S.: An arms race in theory-of-mind: Deception drives the emergence of higher-level theory-of-mind in agent societies. In: 4th IEEE International Conference on Autonomic Computing and Self-Organizing Systems ACSOS 2023. IEEE Computer Society (2023)


  64. Sarkadi, S., McBurney, P., Parsons, S.: Deceptive storytelling in artificial dialogue games. In: Proceedings of the AAAI 2019 Spring Symposium Series on Story-Enabled Intelligence (2019)


  65. Sarkadi, S., Mei, P., Awad, E.: Should my agent lie for me? A study on attitudes of US-based participants towards deceptive AI in selected future-of-work scenarios. In: Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023). IFAAMAS (2023)


  66. Sarkadi, S., Panisson, A.R., Bordini, R.H., McBurney, P., Parsons, S.: Towards an approach for modelling uncertain theory of mind in multi-agent systems. In: Lujak, M. (ed.) AT 2018. LNCS, vol. 11327, pp. 3–17. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-17294-7_1


  67. Sarkadi, S., Panisson, A.R., Bordini, R.H., McBurney, P., Parsons, S., Chapman, M.D.: Modelling deception using theory of mind in multi-agent systems. AI Commun. 32(4), 287–302 (2019)


  68. Sarkadi, Ş., Rutherford, A., McBurney, P., Parsons, S., Rahwan, I.: The evolution of deception. R. Soc. Open Sci. 8(9), 201032 (2021)


  69. Sarkadi, S., Wright, B., Masters, P., McBurney, P. (eds.): Deceptive AI. CCIS, vol. 1296. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-91779-1


  70. Searle, J.R.: The Chinese room revisited. Behav. Brain Sci. 5(2), 345–348 (1982)


  71. Seiter, J.S., Bruschke, J., Bai, C.: The acceptability of deception as a function of perceivers’ culture, deceiver’s intention, and deceiver-deceived relationship. West. J. Commun. (includes Commun. Rep.) 66(2), 158–180 (2002)


  72. Sharkey, A., Sharkey, N.: We need to talk about deception in social robotics! Ethics Inf. Technol. 23(3), 309–316 (2021)


  73. Shim, J., Arkin, R.C.: A taxonomy of robot deception and its benefits in HRI. In: 2013 IEEE International Conference on Systems, Man, and Cybernetics, pp. 2328–2335. IEEE (2013)


  74. Sklar, E., Parsons, S., Davies, M.: When is it okay to lie? A simple model of contradiction in agent-based dialogues. In: Rahwan, I., Moraïtis, P., Reed, C. (eds.) ArgMAS 2004. LNCS, vol. 3366, pp. 251–261. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-32261-0_17


  75. Smith, W., Dignum, F., Sonenberg, L.: The construction of impossibility: a logic-based analysis of conjuring tricks. Front. Psychol. 7, 748 (2016)


  76. Sorensen, R.: Kant tell an a priori lie. In: From Lying to Perjury: Linguistic and Legal Perspectives on Lies and Other Falsehoods, vol. 3, p. 65 (2022)


  77. Sorensen, R.A.: A Cabinet of Philosophical Curiosities: A Collection of Puzzles, Oddities, Riddles and Dilemmas. Oxford University Press, Oxford (2016)


  78. Staines, P.: Linguistics and the Parts of the Mind: Or how to Build a Machine Worth Talking to. Cambridge Scholars Publishing, Cambridge (2018)


  79. Turing, A.: Computing machinery and intelligence. Mind 59(236), 433–460 (1950). www.jstor.org/stable/2251299

  80. Van Maris, A., Zook, N., Caleb-Solly, P., Studley, M., Winfield, A., Dogramadzi, S.: Designing ethical social robots: a longitudinal field study with older adults. Front. Robot. AI 7, 1 (2020)


  81. Wagner, A.R., Arkin, R.C.: Robot deception: recognizing when a robot should deceive. In: 2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation-(CIRA), pp. 46–54. IEEE (2009)


  82. Wagner, A.R., Arkin, R.C.: Acting deceptively: providing robots with the capacity for deception. Int. J. Soc. Robot. 3(1), 5–26 (2011)


  83. Wang, D., Maes, P., Ren, X., Shneiderman, B., Shi, Y., Wang, Q.: Designing AI to work with or for people? In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–5 (2021)


  84. Weitz, K., Schiller, D., Schlagowski, R., Huber, T., André, E.: “Do you trust me?” Increasing user-trust by integrating virtual agents in explainable AI interaction design. In: Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, pp. 7–9 (2019)


  85. Westlund, J.K., Breazeal, C.: Deception, secrets, children, and robots: what’s acceptable. In: Workshop on The Emerging Policy and Ethics of Human-Robot Interaction, held in conjunction with the 10th ACM/IEEE International Conference on Human-Robot Interaction (2015)


  86. Yudkowsky, E.: The AI-box experiment. Singularity Institute (2002)


  87. Zhan, X., Xu, Y., Sarkadi, S.: Deceptive AI ecosystems: the case of chatgpt. In: Conversational User Interfaces, CUI 2023, 19–21 July 2023, Eindhoven, Netherlands (2023)



Acknowledgments

This project was supported by the Royal Academy of Engineering and the Office of the Chief Science Adviser for National Security under the UK Intelligence Community Postdoctoral Research Fellowship programme.

Author information


Corresponding author

Correspondence to Stefan Sarkadi.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sarkadi, S., Mei, P., Awad, E. (2024). Should My Agent Lie for Me? Public Moral Perspectives on Deceptive AI. In: Amigoni, F., Sinha, A. (eds) Autonomous Agents and Multiagent Systems. Best and Visionary Papers. AAMAS 2023. Lecture Notes in Computer Science, vol 14456. Springer, Cham. https://doi.org/10.1007/978-3-031-56255-6_9


  • DOI: https://doi.org/10.1007/978-3-031-56255-6_9


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-56254-9

  • Online ISBN: 978-3-031-56255-6

  • eBook Packages: Computer Science, Computer Science (R0)
