
The Other in the machine: diplomacy and the AI conundrum

Eugenio V. Garcia [1]
ORCID: https://orcid.org/0000-0002-7207-4653

Place Branding and Public Diplomacy, Palgrave Macmillan, 2024
Forum: The End of Diplomacy? ChatGPT, Generative AI and the Future of Digital Diplomacy
DOI: https://doi.org/10.1057/s41254-024-00329-6
Also available as a pre-print at SSRN: http://ssrn.com/abstract=4629685

Abstract

The ancient origins of diplomacy can be traced back to the first encounters between bands of nomadic hunter-gatherers and the emergence of the Other in prehistory. The character of diplomacy (how it is made) should not be confused with its nature: person-to-person interactions and how to conduct peaceful relations among foreign and/or separate political communities. Language was critical for behavioral modernity in Homo sapiens evolution. ChatGPT seemingly mastering language was a tipping point. As AI systems increasingly generate alien, non-human outputs and encroach upon cognitive tasks that were once a monopoly of our biological brains, seeing the Other in the machine will become more common. Human intellectual supremacy will likely be challenged in several narrow domains, raising long-term anthropological questions as well. This is why the AI conundrum is better understood by making a distinction between foreignness (Us-Them) and alienness (Us-It). Although technological disruption has been changing the landscape where diplomats work, the very nature of their profession remains the same. However, considering the risks involved, caution is advised when deploying new AI tools, particularly in sensitive diplomatic decision-making. Human-machine collaboration will be key to successfully dealing with the inherent alienness of AI.

Keywords: Diplomacy - Artificial intelligence - ChatGPT - Large language models - Foreignness - Alienness

[1] Consulate General of Brazil in San Francisco, California, USA. Email: egarcia.virtual@gmail.com

The Otherness condition: foreignness vs. alienness

For millennia, many different types of interaction have come into play among separate political communities, even if only in a rudimentary form.[2] In a broad sense, foreign relations among political units can be traced back to the age of hominids in prehistory. The ancient origins of diplomacy point to the pre-state era and the first encounters between distinct bands of nomadic Homo sapiens hunter-gatherers in the Paleolithic period.[3]

[2] I thank Peter Cihon, Roger Spitz, Edson Prestes, and the anonymous reviewers for their comments on the first draft. The views expressed here are the sole responsibility of the author.
[3] Garcia (2018, p. 473).

Incidentally, the character of diplomacy should not be confused with its nature. Its character is related to how diplomacy is made: the skills, expertise, and resources usually deployed to do the job. By contrast, the nature of diplomacy is the essence of the craftsmanship applied in organizing peaceful interactions between groups, societies, or states. As technology advances, the character of diplomacy changes accordingly. Yet the deeply ingrained nature of diplomacy remains associated with person-to-person relations among polities and across borders. Human nature, after all, has not changed dramatically since antiquity.

The relationship with the Other (a foreigner, a stranger) is central to international relations, as in the case of intergroup conflict and the Us-Them divide. Whenever interactions occur in or among a plurality of polities, including non-sedentary ones, the question of foreignness comes to the fore.

Pijl made "the foreign" one of the cornerstones of his proposal to broaden the scope of international relations beyond the nation-state, redefined as "relations between communities occupying separate spaces and dealing with each other as outsiders".[4] The international is indeed a historical contingency encompassing different configurations over time. Sharp, for instance, argued that diplomacy exists wherever people live in "conditions of separateness from one another".[5] The existence of the Other can be regarded as a sine qua non for any international-like relationship.

[4] Pijl (2007, p. vi).
[5] Sharp (2009, p. 10).

What does artificial intelligence (AI) have to do with the Otherness condition in diplomacy? AI is an umbrella term for computational processes that use different techniques to process data and perform tasks normally associated with human intelligence. In a way, machine intelligence produces non-biological, non-human knowledge. AI systems can come up with discoveries, solutions, or ideas humans never conceived before. This phenomenon makes them profoundly alien in many respects and may have unprecedented anthropological consequences for our very human Self.[6] Diplomats are used to dealing with the notion of Us vs. Them brought about by foreignness. The alienness of AI might give rise to an Us vs. It dichotomy in the future.

[6] Romero (2022, p. 1-2).

For the purposes of this article, "the Other in the machine" must be understood as the hypothesis of an emerging intellectual entity to be reckoned with or treated as such by the human species. Obviously, it will not be perceived as a foreign agent or self-conscious being in a way that would create an "international" interaction (Table 1). Instead, and this is the conundrum we shall face, it is the inherently alien nature of its intelligence that could trigger the Otherness feeling of dealing with something else, with creature-like qualities, and not merely a technological tool to serve human needs.

Table 1. Otherness and the AI conundrum: foreign vs. alien

The Otherness condition | Foreignness | Alienness
Key element | Foreign, separate, stranger, outsider | Alien, different species, non-human
Who is the Other? | Humans, same species (Homo sapiens) | Non-biological AI systems (machine)
Primary dichotomy | Us-Them | Us-It
Historical timeframe | Since the dawn of humanity (ongoing) | Inceptive, evolving (undetermined future)
Type of interaction | Intergroup or interpolity, transnational, international | Human-machine interaction

Is AI set to become a new intellectual species, uniquely different from us? Actually, for the Otherness condition to emerge, achieving artificial general intelligence (AGI) is not a prerequisite. Narrow AI deals with very specific tasks within predictable parameters, such as playing chess or Go, recognizing faces, making recommendations, analyzing images, and so on. Current deep learning systems, based on pattern recognition and statistical predictions, belong to the narrow category, as they do not possess any understanding of the real world.[7] AGI does not exist for the moment, and experts disagree if and when it will be achieved.[8]

[7] Roitblat's assessment (2023) gets straight to the point: "Language models do not represent a breakthrough in artificial general intelligence; they are just as focused on a narrow task as is any other machine learning model. It is humans who attribute cognitive properties to that singular task in a kind of collective delusion".
[8] Critical views claim that building mathematical models to emulate the human neurocognitive system is "impossible"; therefore, without language understanding, there would be no AGI. Landgrebe and Smith (2023, p. xi).

The AI conundrum is exacerbated by three additional problems:

1) Humans are wired to anthropomorphize almost everything. We are social creatures prone to making emotional connections and attaching feelings to inanimate objects, particularly those designed to simulate human behavior or display humanoid traits.

2) Humans suffer from automation bias. We have a tendency to trust machines when they carry out certain tasks better than we do (the pocket calculator being a case in point). As it turns out, imperfect as they are, AI systems make mistakes and can generate misleading or false outputs.

3) Humans attribute meaning to their own existence and the universe. Machines, on the contrary, do not perceive the world the way a human does, as a biological animal with an inextricably intertwined combination of organic body and mind.

These compounded problems create a mismatch that can open the door to numerous dilemmas. Our human experience is usually the yardstick for anything remotely appearing to be intelligent, regardless of where we are searching, whether in nature or deep in outer space. We gauge other species according to our own perception of what is intelligent or not. This is why our first reaction is to look upon alien intelligence through our own lens and then instinctively anthropomorphize it. Automation bias would follow suit. Our embodied minds build world models, after millions of years of evolution, while operating in the natural environment within a given culture or civilization.

Today, in a world dominated by predictive algorithms, as Nowotny put it, AI challenges our identities in "a co-evolutionary trajectory on which humankind has embarked together with the digital machines it has invented and deployed".[9] Intelligence is indeed what made Homo sapiens what we are. Applying a theory of mind to grasp what others might be thinking would be exceedingly hard for computing devices, notably probabilistic deep learning models relying heavily on statistical learning but lacking an understanding of context. We must concede, however, that if technological breakthroughs do turn AI into the would-be Other sometime in the future, a Copernican revolution could take place and redefine intelligence as we know it. Intellectual anthropocentrism would be challenged, breaking in practice the prevailing monopoly of our biological brains.

[9] She adds that, in this co-evolution, "digital beings or entities like the robots created by us are mutating into our significant Others". Nowotny (2021, p. 17).

Enter ChatGPT: mastering language?

We have been learning to live under AI supremacy in domain-specific activities where humans have already been outperformed. Computers do not need to mimic human behavior to accomplish certain goals faster and more efficiently. DeepMind's AlphaZero developers were not chess grandmasters. They used a Monte Carlo tree search algorithm and a mix of reinforcement learning techniques to generate moves without supervision, allowing the program to win games with superhuman performance. It was AlphaZero that found the best moves, not its developers.[10] Again, these non-human outputs are fundamentally alien. A subtle feeling of estrangement or disconnection may emerge as a result.

[10] Garcia (2021, p. 5).

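For readers curious about the mechanics, the fragment below is a toy sketch of the selection rule at the core of Monte Carlo tree search: statistics accumulated from many simulated games decide which move to explore next, with no human chess heuristics involved. It is illustrative only; AlphaZero's actual system also relies on a deep neural network for move priors and position evaluation, and the class names and exploration constant here are assumptions made for this example.

    import math

    # Toy sketch of the selection step in Monte Carlo tree search (MCTS).
    # Illustrative only: names and the exploration constant are assumptions,
    # and AlphaZero additionally uses a neural network to guide the search.

    class Node:
        def __init__(self, move=None):
            self.move = move          # move leading to this node
            self.visits = 0           # simulations that passed through it
            self.total_value = 0.0    # accumulated simulation outcomes
            self.children = []        # candidate continuations

        def ucb_score(self, parent_visits, c=1.4):
            if self.visits == 0:
                return float("inf")   # explore unvisited moves first
            exploitation = self.total_value / self.visits
            exploration = c * math.sqrt(math.log(parent_visits) / self.visits)
            return exploitation + exploration

    def select_child(parent):
        # Balance moves that have scored well against moves tried rarely.
        return max(parent.children, key=lambda n: n.ucb_score(parent.visits))

The point is not the formula itself but the fact that the resulting moves emerge from search and self-play statistics rather than from any human player's knowledge, which is precisely what makes the output feel alien.
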
Generative AI is set to have far-reaching implications. Large language models (LLMs)[11] will probably converge and integrate with image and video generation to enable multimodal chatbots capable of searching, chatting, and creating content. Newly developed interactive AI agents would have the capacity to perform a myriad of tasks at the same time. More sophisticated applications and advanced use cases are expected in the coming years. Soon it may be nearly impossible to distinguish between human-made and AI-generated text, images, audio, video, or code.

[11] A large language model is a type of deep learning algorithm pre-trained with massive amounts of data to understand existing content and generate original text. Cf. https://machinelearningmastery.com/what-are-large-language-models.

OpenAI's ChatGPT seemingly mastering language was a tipping point. As a rule-bound symbolic system, articulate language is unique to humans. Language acquisition, along with abstract thinking, social norms, and other cognitive and cultural foundations, made possible the behavioral modernity that separated Homo sapiens from other anatomically analogous hominins.[12] ChatGPT appears to know the language's syntax by consistently applying formal grammar rules. It can write better, faster, and more fluently than many humans on a wide variety of issues. Still, semantics remains elusive. The actual meaning of words is unknown to existing LLMs, inasmuch as they lack symbolic knowledge representation, common sense, and abstract causal reasoning, features still missing from deep neural networks.[13] Explainability is also a challenge: even the developers themselves are not yet able to convincingly interpret the inner workings of these models.[14]

[12] For a full account of how language began, cf. Johansson (2021).
[13] Garcez and Lamb (2020, p. 1-3).
[14] To avoid confusion, language should be separated from thought: LLMs can arguably master the formal structure of language, but they are not really "thinking" the way humans do. Dickson (2023, p. 1-2).

Certainly, language for prehistoric individuals meant oral communication. The age-old art of conversation to verbally communicate information and ideas preceded writing and shaped the cultural foundation of society. Fast forward to the present day, and conversational AI will take interactive spoken dialogue to the next level by using natural language understanding for a more intuitive experience. Talking to high-quality virtual assistants or companions will make interactions with It much more convincing. Looming manipulation risks will become more prevalent in cases of "real-time voice and photorealistic digital personas that look, move, and express like real people", as Rosenberg pointed out.[15] Similarly, further convergence between AI and robotics will call for new philosophical examinations of the ontological nuances between natural persons, things, and robots.[16]

[15] Rosenberg (2023, p. 1).
[16] Gunkel (2023).

Some might speculate that GPT-4 is an emerging Other with "sparks of AGI".[17] It is not. LLMs are based on text prediction: identifying, across billions of parameters, which token is the best choice in a given sequence. The results may trick your brain and make it believe that there was some real intelligence involved in the process.

[17] Bubeck et al. (2023).

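To make the idea of text prediction concrete, the toy sketch below picks the next token greedily from a tiny hand-written probability table. This is a drastic simplification offered under stated assumptions: a real LLM computes such distributions over tens of thousands of tokens from billions of learned parameters and a long context, whereas the vocabulary and probabilities here are invented purely for illustration.

    # Toy illustration of next-token prediction. The probability table is
    # invented for this example; a real LLM derives its distribution from
    # billions of learned parameters rather than a lookup of two-word contexts.

    toy_model = {
        ("the", "ambassador"): {"presented": 0.55, "said": 0.30, "sang": 0.15},
        ("ambassador", "presented"): {"credentials": 0.7, "a": 0.2, "the": 0.1},
    }

    def predict_next(tokens, model):
        """Return the most probable continuation given the last two tokens."""
        distribution = model.get(tuple(tokens[-2:]))
        if distribution is None:
            return "<unknown>"
        return max(distribution, key=distribution.get)

    sentence = ["the", "ambassador"]
    for _ in range(2):
        sentence.append(predict_next(sentence, toy_model))
    print(" ".join(sentence))  # -> "the ambassador presented credentials"

The output can read fluently even though nothing in the procedure involves understanding what an ambassador or a credential is; scale changes the fluency, not the underlying logic.
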
LLMs available today are not reliable and fail to differentiate fact from fiction. Impaired by prompt brittleness and without a plausible world model, their output looks convincing to the novice learner, but it can be deeply flawed. They provide, time and again, factually inaccurate or plainly wrong answers, and frequently "hallucinate".[18] Their long-term effects on human ingenuity and creativity are still unclear. What should we (diplomats in particular) do about it?

[18] Hallucinations occur when the LLM generates text that is erroneous, nonsensical, or detached from reality. For this and other unsolved problems of LLMs, see Kaddour et al. (2023).

The Other of our making: overcoming the Us-It dichotomy

From the very beginning of their careers, diplomats are trained to interact with foreigners (Them). One of the unspoken skills in their job description is being at ease with the concept of foreignness, something they experience almost daily when living abroad. Alienness is different. As suggested above, it concerns how humans interact, communicate, or relate to non-biological AI systems or machines (It) exhibiting cognitive capabilities that at times may put our Self into question. Diplomats (and all of Us, actually) need to find out how to make the most of It and what to avoid. Human-machine collaboration will be key to successfully dealing with the inherent alienness of AI.

The Otherness condition that seems to be part of It may lead diplomats to misuse AI tools or to use them in domains where they add little benefit or can even do harm. AI must be handled with care. Using ChatGPT or any other LLM as an oracle would not be advisable, as they cannot be trusted. Expert human judgment is required for fact-checking and for scrutinizing their connection with reality.

Ministries of Foreign Affairs are notoriously hierarchical and risk-averse organizations, usually slow to change and cautious about welcoming disruptive technologies. Even so, the timely adoption of AI technologies may prove useful in many ways. Their diplomatic archives are a goldmine of data waiting to be fully harvested. As diplomats use written language every day (in cables and reports sent and received between headquarters and missions overseas), taking advantage of natural language processing tools can be a low-hanging fruit. Digitizing information would allow relevant data to be gathered for AI models, mindful of local needs and priorities.[19]

[19] There might be a need for in-house, customized solutions to ensure confidentiality, cybersecurity, and data protection, given the risks involved in relying on third-party private systems to provide, generate, or access critical content.

When it comes to operational resources to boost productivity and save time, knowledge workers could benefit from writing assistants, using application programming interfaces (APIs) based on commercial chatbots to summarize lengthy documents, expedite reporting, write diplomatic communications (invitations, condolences, congratulations, thank-you letters), or prepare the zero draft of a speech on a given topic.

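As an illustration of this kind of low-stakes assistance, the sketch below shows how a simple writing aid might call a chat-style completion endpoint to produce the zero draft of a congratulatory note. It is a hypothetical example: the endpoint URL, model name, and response format are placeholders rather than any specific vendor's API, and an in-house or vetted deployment would be needed for the confidentiality reasons flagged in note [19].

    import os
    import requests

    # Hypothetical writing-assistant sketch. The endpoint, model name, and
    # response shape below are placeholders, not a real vendor's API; adapt
    # them to whatever service (ideally an in-house one) is actually used.

    API_URL = "https://llm.example.org/v1/chat"   # placeholder endpoint
    API_KEY = os.environ.get("LLM_API_KEY", "")

    def zero_draft(instruction: str, details: str) -> str:
        payload = {
            "model": "placeholder-model",
            "messages": [
                {"role": "system",
                 "content": "You draft concise, formal diplomatic correspondence."},
                {"role": "user",
                 "content": f"{instruction}\n\n{details}"},
            ],
        }
        response = requests.post(
            API_URL,
            json=payload,
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        response.raise_for_status()
        # Assumed response shape; adjust to the chosen service.
        return response.json()["choices"][0]["message"]["content"]

    draft = zero_draft(
        "Write a short congratulatory note on a national day.",
        "Recipient: the Ambassador of Country X. Tone: warm and formal.",
    )
    print(draft)  # a human must still review, edit, and fact-check the draft

Keeping the assistant behind a single helper function also makes it easier to swap a commercial endpoint for an in-house model later without changing the surrounding workflow.
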
Hastily deploying these AI tools can be a double-edged sword, though. Diplomats should be aware of the clichés generated by LLMs and avoid becoming overdependent on any flashy technological apparatus.

Diplomacy is a tricky skill for machines to learn. Some soft skills are enduring assets that will continue to be in high demand, including good cross-cultural communication, emotional intelligence, building interpersonal connections, and developing non-computable empathy. As explained above, concrete technical limitations also prevent AI from playing a more active role in diplomatic settings. No AI system can "read the room" like an experienced diplomat would. Real-life input collected with a personal touch through private contacts, embedded in the local reality, is difficult to replicate.

Applying AI to diplomatic policymaking is even more challenging. Here lies the ultimate risk of alienness leading to bad advice. As a foreign policy decision-support tool, AI applications can, inter alia, be adopted in strategic planning, intelligence gathering, data analytics, predictive modeling, simulations, or negotiations.[20] Collecting and processing data at scale, machine intelligence could provide algorithm-augmented prescriptive insights and offer decision-makers a preferred course of action in a complex, non-linear, and very often unpredictable world.[21] Yet, when it comes to strategic advice for diplomatic decision-making, we are still mostly in uncharted territory. There are very few cases that could serve as an inspiration.[22]

[20] The Cooperative AI Foundation has an interesting research program, cf. www.cooperativeai.com.
[21] Spitz (2020).
[22] Lee (2023) explores some uses of AI in negotiations.

A thin line exists between utilizing AI in strategic thinking and delegating critical decisions to a machine. It can be argued that certain decisions should never be left to AI systems alone, such as making life-and-death choices or launching nuclear weapons. It will become imperative that we draw the line somewhere to ensure safety and avert nightmare scenarios.

It must be stressed that more familiarity over time, or a better understanding of how these systems operate, will not change the fact that knowledge generated by AI is intrinsically non-human. But as Kelly noted, the alienness of AI, thinking differently from humans, is its chief asset rather than an anomaly or something to be avoided.[23] Interaction with the Other changes the perception of Self and one's place in the world. We need to move beyond the AI conundrum and strive to see machines much more as a tool than as a creature. In this context, human-machine collaboration is desirable not only to augment (and not replace) diplomats, but also to make any emerging Us-It interplay less binary and more pragmatic.

[23] "An intelligence running on a very different body (in dry silicon instead of wet carbon) would think differently. I don't see that as a bug but rather as a feature", Kelly (2017, p. 9).

Ideally, we had better move away from the obsession with creating intelligent agents that act autonomously to replace humans, and embrace instead, in Shneiderman's words, "supertools that amplify, augment, empower, and enhance human performance". He adds that this aspiration is complemented by the goals of building human self-efficacy, creativity, responsibility, social connectedness, and collaboration.[24] By using AI tools wisely, diplomats would be able to escape from dualistic mindsets that may keep them too attached to atavistic Us-Them antagonisms when interacting with It.

[24] Shneiderman (2022, Chapter 1).

Conclusion

A defining feature of the Anthropocene is Homo sapiens' sense of superiority and control over nature and other sentient animals. Thanks to technology, humans can fly without flapping wings or swim underwater without fish fins. The breakneck pace of AI development has been introducing signs of a coming anthropological uneasiness that may challenge the notion of human exceptionalism and force us to redefine what we mean by cognitive supremacy on this planet.

Technological progress may change how diplomacy is made (its character) over the centuries, but humans, and with them the nature of diplomacy, remain mostly the same. AI should not be expected to take over the role of diplomats anytime soon. Diplomacy is basically about people and human-to-human interactions. Technology is constantly changing, but the ultimate goals of diplomacy remain very similar to those of ancient times: how to conduct international (intergroup or interpolity) relations by cooperation, negotiation, or any other peaceful means, in order to promote friendly exchanges among the actors involved.

In the name of efficiency, however, we are increasingly delegating cognitive functions to computers while voluntarily renouncing our ability to understand how powerful AI systems make decisions. Ultimately, automation bias may become a major problem if this technology is employed to extract meaning from raw data while giving no insight into how its conclusions were reached. Diplomats should be wary of always seeking to optimize everything, especially as far as foreign policy is concerned, as if the datafication of politics could be a panacea for all problems. There must be a balance between the valuable improvements leveraged by AI systems and their encroachment on sensitive diplomatic decision-making, in view of the serious ethical, political, and safety concerns it can raise.

Seeing the Other in the machine (and anthropomorphizing It) may not lead to an "international" dynamic with a foreign entity such as diplomats are used to. It can, nonetheless, be conducive to encounters with an alien intelligence that may not always be predictable, controlled, or aligned with our preferences. Overcoming the AI conundrum calls for deconstructing the creature-like Other we conceive in our own minds and exercising caution while using AI-powered tools to enhance our skills and help us achieve our goals.

* * *

References

Bubeck, Sébastien et al. 2023. Sparks of artificial general intelligence: early experiments with GPT-4. Arxiv. https://arxiv.org/abs/2303.12712. Accessed 6 October 2023.
Dickson, Ben. 2023. To understand language models, we must separate "language" from "thought". TechTalks. https://bdtechtalks.com/2023/02/20/llm-dissociating-language-and-thought. Accessed 28 April 2023.
Garcez, Artur d'Avila, and Luis C. Lamb. 2020. Neurosymbolic AI: the 3rd wave. Arxiv. https://arxiv.org/abs/2012.05876. Accessed 3 February 2023.
Garcia, Eugenio V. 2021. Living under AI supremacy: five lessons learned from chess. LinkedIn. https://www.linkedin.com/pulse/living-under-ai-supremacy-five-lessons-learned-from-chess-garcia. Accessed 12 December 2022.
Garcia, Eugenio V. 2018. Back to prehistory: the quest for an alternative IR founding myth. Cambridge Review of International Affairs 31 (6): 473-493. https://www.tandfonline.com/doi/full/10.1080/09557571.2018.1539948.
Gunkel, David J. 2023. Person, thing, robot: a moral and legal ontology for the 21st century and beyond. Cambridge, Massachusetts: The MIT Press.
Johansson, Sverker. 2021. The dawn of language: how we came to talk. London: MacLehose Press.
Kaddour, Jean et al. 2023. Challenges and applications of large language models. Arxiv. https://arxiv.org/abs/2307.10169. Accessed 10 August 2023.
Kelly, Kevin. 2017. The myth of a superhuman AI. Wired. https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai. Accessed 15 November 2023.
Landgrebe, Jobst, and Barry Smith. 2023. Why machines will never rule the world: artificial intelligence without fear. New York: Routledge.
Lee, Daniel D. 2023. AI negotiator: artificial intelligence-powered negotiations in diplomacy and deal-making. Independently published.
Nowotny, Helga. 2021. In AI we trust: power, illusion and control of predictive algorithms. Cambridge: Polity Press.
Pijl, Kees van der. 2007. Nomads, empires, states: modes of foreign relations and political economy. London: Pluto Press.
Roitblat, Herbert. 2023. Does artificial intelligence threaten human extinction? TechTalks. https://bdtechtalks.com/2023/06/15/artificial-intelligence-human-extinction. Accessed 27 November 2023.
Romero, Alberto. 2022. The alienness of AI is a bigger problem than its imperfection. The Algorithmic Bridge, Substack. https://thealgorithmicbridge.substack.com/p/the-alienness-of-ai-is-a-bigger-problem. Accessed 15 July 2023.
Rosenberg, Louis. 2023. The manipulation problem: conversational AI as a threat to epistemic agency. Arxiv. https://arxiv.org/abs/2306.11748. Accessed 20 October 2023.
Sharp, Paul. 2009. Diplomatic theory of international relations. New York: Cambridge University Press.
Shneiderman, Ben. 2022. Human-centered AI. Oxford: Oxford University Press.
Spitz, Roger. 2020. The future of strategic decision-making. Journal of Futures Studies. https://jfsdigital.org/2020/07/26/the-future-of-strategic-decision-making. Accessed 4 October 2021.

Eugenio V. Garcia is Deputy Consul General of Brazil in San Francisco, Head of science, technology, and innovation, and focal point for Silicon Valley. A career diplomat with 30 years of professional experience in foreign policy and diplomacy, including assignments in London, Mexico City, Asuncion, New York, and Conakry, he was a senior adviser to the President of the United Nations General Assembly in 2018-2020. He holds a Ph.D. in History of International Relations from the University of Brasilia and has published seven books. His current areas of academic research include artificial intelligence and global governance, the impact of new technologies on peace and security, and the role of multilateral organizations. Website: https://eugeniovargasgarcia.academia.edu