When I first read Bill Joy's 2000 WIRED article "Why the Future Doesn't Need Us," I thought to myself: hasn't this guy ever read any science fiction at all? On reading Ray Kurzweil's The Age of Spiritual Machines, Joy immediately proclaimed it "utopian" and stated that there were only two possibilities for humans in a future where we gain "near immortality" by becoming one with robotic technology: one in which machines think for themselves and run everything, and one in which humans control the thinking of machines. But vast numbers of science fiction writers have postulated many different versions of just such a future, ranging from dystopian to utopian and encompassing the good, the bad, and the ugly in between. This fictional working out of technological progress is what science fiction, as a genre, was developed for, and these writers approach it with insight and, in many respects, glee, showing how society could develop: for good, for evil, and for the mediocrity of everyday life. This essay discusses what Bill Joy missed in his vision.
In considering how best to deploy robotic systems in the public and private sectors, we must consider what individuals will expect from the robots with which they interact. Public awareness of robotics, as both military machines and domestic helpers, emerges out of a braided stream composed of science fiction and popular science. These two genres influence news media, government and corporate spending, and public expectations. In the Euro-American West, both science fiction and popular science are ambivalent about the military applications of robotics, and thus we can expect their readers to fear the dangers posed by advanced robotics while still eagerly anticipating the benefits to be accrued through them. The chief pop science authors in robotics and artificial intelligence have a decidedly apocalyptic bent and have thus been described as leaders in a social movement called "Apocalyptic AI." In one form or another, such authors look forward to a transcendent future in which machine life succeeds human life, thanks to the march of evolutionary progress. The apocalyptic promises of popular robotics presume that the currently exponential growth in computing will continue indefinitely, producing a "Singularity." During the Singularity, technological progress will be so rapid that undreamt-of changes will take place on earth, the most important of which will be the evolutionary succession of human beings by massively intelligent robots and the "uploading" of human consciousness into computer bodies. This supposedly inevitable transition into post-biological life looms across the entire scope of pop robotics and artificial intelligence (AI), and it is from beneath that shadow that all popular books engage the military and the ethics of warfare.
Creating a just future will require that we transcend the apocalyptic discourse of pop science and establish an ethical approach to researching and deploying robots, one that emphasizes human rather than robot welfare; doing so will require the collaboration of social scientists, humanists, and scientists.
Slaves and robots have in common that they are intended to obey orders; I therefore suggest taking a close look at some of Isaac Asimov's robot stories. Executing a program while detecting and overcoming obstacles, all in service of fulfilling given instructions: this is what makes a robot a perfect slave. Just as the slave laws of the British colonies in America were intended to keep slavery effective by confining slaves to their place, so Asimov's Three Laws of Robotics are the formal condition for the workability of a robot-holding society. Asimov's androids reveal the implicit impossibility of both robots and slaves: his stories establish the command structure that would be needed to keep the system working and then disassemble that structure. The Three Laws, meant to guarantee protection, command, and operation all at once, cannot possibly work with separate master and slave subjects; they are a paradoxical juxtaposition, and consequently such slavery is logically impossible.
2004
Abstract: This paper reflects on the culture of human-robot interaction. A review of common concepts in movies and literature is presented, and their relation to scientific work is discussed. Two new research directions, on the synthesis of behavior models and on the perception of social robots, are presented.
2012
Abstract: Machine ethics and robot rights are quickly becoming hot topics in the artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing what humans value.
AI and Society
Evil and Roboethics in Management Studies (2017)
I address the issue of evil and roboethics in the context of management studies and suggest that management scholars should locate evil in the realm of the human rather than of the artificial. After discussing the possibility of addressing the reality of evil machines in ontological terms, I explore users' reactions to robots in a social context. I conclude that the issue of evil machines in management is more precisely a case of technology anthropomorphization.
In this essay, I will explore the complex ways Isaac Asimov's fiction both engages with and disavows race. Looking mainly at his robot stories, especially those collected in I, Robot (1950) and the most popular of the Dr. Susan Calvin stories, I will show how Asimov's robots engage with the legacy of slavery as well as with contemporary racial issues. On the one hand, Asimov's robot stories and novels explicitly draw on tropes of black servitude, such as the way robots are routinely addressed as "boy" and older models call humans "master," or the parallels between the logic of slave codes, Jim Crow laws, and the Three Laws of Robotics. On the other hand, Asimov was a committed social liberal, skeptical of Golden Age editor John W. Campbell's tendency to, as Asimov put it, "take for granted, somehow, the stereotype of the Nordic white as the true representative of Man the Explorer, Man the Darer, Man the Victor." His stories often questioned the logic of all prejudice, and he even wrote a non-fiction book, Races and People, popularizing the work of his friend William C. Boyd, which the cover claims will convince the reader "that there is no such thing as a 'superior race.'" On yet a third hand (this is science fiction, after all), Asimov resisted the notion that his works were allegories for race. His characters, while not the Aryan engineers favored by more conservative Golden Age authors, were nonetheless privileged users of technology, working out the proper ethics for using their power and grappling with their unfortunate prejudices; they were, in other words, characters who shared a science-fictionalized version of the white liberal perspective. Ultimately, this chapter will explicate the racial context into which Asimov's stories were written, offer examples of how to unpack the racial politics of the Robot stories, and situate Asimov's engagement with race in a larger discussion of the racial politics of science fiction and technoculture.
Through this research I attempt to answer questions about what it means to be human and how the human-robot relationship will alter how we look at humanity and human nature. This research is an attempt to explore and define the future of the human-robot relationship, and the study analyzes the nature of that relationship. Although the concept is a difficult one to encapsulate as a whole, let alone in a brief essay, it is important for humans to think about what they will become and the nature of their being. The line that divides the human being and the robot is becoming increasingly blurred: when a robot looks and acts like a human, it is suddenly difficult to distinguish between the two. This also raises the issue of whether humans with prosthetic organs or limbs may be called robots, and whether they ought to be called man, machine, or posthuman. With the intention of finding out who we are, research must be done to determine in which direction the so-called "posthuman" is going and what it means for humanity. In this study, the central focus is personhood and humanity. Technology is a broad topic, and although it is important, the main focus is not on technology as a whole; this essay focuses on the robot or humanoid specifically rather than on other possible technologies. The purpose of this research is to attempt to find out what it is to be a human and to discover the nature of the man-machine relationship. This essay attempts to analyze the human condition and human nature. The questions I mean to address are the following: (1) what will the human become in the future, and (2) what do robots tell us about ourselves as human beings? The main question contemplates who we are as human beings. Along with print sources, I use films like Her (2013), A.I. Artificial Intelligence (2001), and Ex Machina (2015) in an attempt to reconcile the issues surrounding the human-robot relationship.
These science fiction films can tell us a lot about who we are, or who we think we are, and they assist in bringing important questions to the forefront. Is technology more than just a mediator? Are we, and should we be, becoming more technologically dependent? And what does that mean for us as a group of beings? Is society adapting or slowly assimilating? I also make use of thought experiments to bring important issues into the conversation that can be explained or worked out more comprehensively through such experiments. Again, in order to narrow the wider scope of issues, the study centers on the human-robot relationship rather than the more general human-technology relationship. Importantly, I argue that what is dangerous is not the human relationship with machines but the loss of relationship and connection with one another. The sort of danger that usually comes to mind is the potential existential threat of robot revolution. But this implies that the nature of the man-machine relationship is equivalent to the historical relationship of humans and their war-waging ancestors (not to mention present war-wagers). Humans tend to forget who they are in relation to one another as they worry about the nature of man and machine. The nature of the human-human relationship may not be applicable to the man-machine relationship, and as time moves on into this new era of technology, the human-human connection must not be left by the wayside either. Further, the research does not include certain issues that arise from human enhancement, such as the ethics of the use of robotics in general. These lie outside the central theme of this study, as the concentration is not on the ethical implications posed by robots and technology but rather on what robots tell us about the human condition and the direction in which (post)humanity is headed. This is an attempt to expand our knowledge of what we think it means to be human.
Here, I refine the definitions of robot and human, raise questions about the human condition, and challenge traditional negative views of the man-machine relationship to reshape how all humans— philosophers, scientists, and others alike—think about those relations and the posthuman future.