w1v2 Transcript
This week we're getting into the topic of artificial intelligence, and considering it
from a number of different angles. In this video in particular I’ll talk about the
goals of AI, what we see as its purpose, and how it shows up in fiction. In other
videos we’ll talk about how it shows up in your life, and some of the maths
behind how it works. We’ll do one video just to focus on natural language
processing and tools like ChatGPT, and one video about self-driving cars and all
of the systems that have to integrate for that to work well. And then we'll talk
about some of the ethical questions in the development of AI in the way that
we’re currently using it in the world.
The first people to really use the term “AI” for computer software that people
could interact with were computer scientists in the 1950s, and their goal was
basically to make an artificial person.
They wanted to come up with mathematical ways to represent the thoughts that
a person could have, and to develop a machine that would have the same
reasoning processes that people do.
Their goal was to make a general intelligence that would know things and
remember things and be able to apply logic to answer questions.
They kept trying to build robots that could play chess. Chess, in the culture of
the time, must have stood for real thoughtfulness and reasoning,
and so that was seen as a big achievement: if we can program something that
plays chess, then maybe it's as capable of thinking through possibilities and
choosing the right course of action, applying reason, and solving problems as a
person is.
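To make that “thinking through possibilities” idea concrete, here is a minimal
sketch of minimax search, the classic technique behind early chess programs.
Everything in it is a toy assumption: the “game tree” is a hand-made nest of
lists with pre-scored leaf positions, not real chess.

```python
# A minimal sketch of minimax search, the idea behind early chess programs.
# The "game tree" below is a toy: leaves are scores for the maximising
# player, internal nodes are lists of possible continuations.

def minimax(node, maximising):
    """Return the best score reachable from this position with best play."""
    if isinstance(node, (int, float)):   # leaf: position already scored
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# Three candidate moves, each answered by two replies. The program "thinks
# through possibilities": it picks the move whose worst-case reply is best.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximising=True))   # -> 3
```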
The AI technologies that we’re most likely to interact with in our daily lives are
more like single-purpose tools and enhancers than they are general
intelligences with individuality.
There are a lot of different reasons why that vision about the purpose of AI has
shifted so much in the last 70 years.
First of all, in the 1950s computers were huge and wildly expensive. They were
difficult to program and difficult to interface with. And the computer scientists
could see that they had enormous potential, and they were very optimistic about
what could be accomplished if they tried hard enough.
Looking at it now, we can see that they tried. For a long time they would make
very public statements, “in 5 years, in 10 years, we’ll have a real artificial
consciousness”, and we’ve seen that they weren’t able to deliver on that.
Also, though, in the 1950s they had no way of knowing that microchips would
get so much smaller, that silicon fabrication would get so perfectly
precise. They couldn’t have predicted Moore's law and how computers would be
getting smaller and more affordable and really ubiquitous, that so many people
would have one with them most of the time. That creates a possibility for
functions and tools that 1950s computer scientists wouldn’t have thought about
making, because they had no idea that there would be a commercial market or
who the consumers for artificial intelligence tools might be.
Other changes from then to now include things like economics and business
culture. Currently in business it feels necessary to be as efficient as possible, to
move as fast as possible, and so any task that can be automated, any difficult
thing that could be made simpler, is desirable.
I don't think either of those is fully accurate, but I do think it's a good idea to
look at all of the different things that AI tools are currently doing, to be aware of
things that could be used to make our lives easier, smoother, more efficient, and
so on, and also to be aware of things that are unacceptable, things that need
regulation, limits, or simply the option not to engage. And of course everyone
will put those limits up in different places for themselves.
So let's talk about the current way of looking at AI. There are so many different
types of tools that we currently have, that we want to exist, that we're trying to
develop and we're trying to find uses for. And there are also uses that are not
appropriate for the tools, but we'll talk about that more next week when we talk
about ethics.
From my perspective these things exist on a continuum, from things that are
being done for you, to things that are being done to you.
Convenience things like virtual assistants and smart homes, robots and self-
driving cars, you could put under the umbrella of things made for the purpose
of making your life easier. Healthcare applications of AI, in terms of
interpreting complex constellations of symptoms, or analysing medical imaging,
are intended to be helpful to people, and to assist doctors in understanding what
their patients need. Those tools are very susceptible to problems: biases in
their training set, and the fact that they are asked to answer one particular
question rather than being a general intelligence that can reason out answers.
Because they have to do with health and safety, it’s really important to be sure
that they are working correctly.
And a lot of the purposes that AI tools are being used for have to do with
commerce. There are recommendation algorithms based on things you've done
in the past or based on the characteristics of other people who have done the
same things you have, liked the same things, watched the same shows,
interacted with the same media sources and things like that. There are business
AI tools and law enforcement AI tools that claim that they can predict things
about you based on your voice, your appearance, your demographics. Those
things are not here to help you or to make your life easier. And I would contend
they're not really here to benefit society.
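To give a feel for the “people who liked the same things” part of that, here is a
toy sketch of the collaborative-filtering idea behind recommendation
algorithms. The users, shows, and ratings are entirely invented for illustration.

```python
# Toy "people who liked what you liked" recommendations (invented data).
import math

ratings = {
    "alice": {"show_a": 5, "show_b": 4, "show_c": 1},
    "bob":   {"show_a": 5, "show_b": 5, "show_d": 4},
    "carol": {"show_c": 5, "show_d": 2},
}

def similarity(u, v):
    """Cosine similarity between two users' ratings; unrated shows count as 0."""
    shows = set(u) | set(v)
    dot = sum(u.get(s, 0) * v.get(s, 0) for s in shows)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm

def recommend(user):
    """Suggest shows the most similar other user rated but this user hasn't."""
    others = [(similarity(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    return [s for s in ratings[nearest] if s not in ratings[user]]

print(recommend("alice"))   # bob's tastes are closest -> ['show_d']
```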
I think we all have to consider what level of this automated classification and
management we are comfortable with. That will be very dependent on who we
are and what our experiences have been. But in order to do that, we all need to
be aware of what those capabilities are.
So like I was saying, back in the day when the idea of artificial intelligence as a
computer capability was just getting started, the goals the researchers had read
a lot like European Enlightenment philosophy: this idea that reason and
rationality are the most important thing, that everything about the world and
everything about people can be expressed as logical statements or as
mathematics.
And so if we can find some kind of mathematical or symbolic way to talk about
logic and about reasoning, then we can express any human thoughts or any
question in those terms, and then it can be solved by a computer. And, further,
if you had the structure of all of the ways that people think written down in this
way, if you put it all together as the resource for your computer program, then a
good program will be able to apply those equations of human thought to any
question it’s presented with, and it will be able to come up with the same answer
as a person would.
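As a minimal sketch of what that looks like in practice, here is a toy symbolic
reasoner: thoughts written down as facts and if-then rules, with the machine
applying the rules mechanically. The facts and rules are invented for
illustration.

```python
# A toy of the symbolic-AI idea: knowledge as facts plus if-then rules,
# applied mechanically until nothing new follows (forward chaining).
# The facts and rules are invented for illustration.

facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),   # if premises, then conclusion
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # the machine has "reasoned" its way to both conclusions
```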
This viewpoint is on the one hand lowering human thought and consciousness to
almost a level of being mechanistic or deterministic, you know, put in the
numbers, go through the motions, and get the answer, but it’s also an elevation
of the way that people work into some kind of beautiful universal logic. It's a
very interesting philosophical standpoint for the people developing artificial
intelligence to be coming from.
There was a very important conference in the fifties, the 1956 Dartmouth
workshop, that served as sort of a kick-off point for a lot of the AI
development that happened from then on, where this group of around 10 people
who were very active in developing these concepts
got together to do some practical work on how to actually make these AI ideas
real, and to also talk to each other about the underlying ideas and the
underlying goals.
And as they then went out and continued to work on ideas in artificial
intelligence, they started building things. I mentioned that they made robots
that could play chess. There was actually a self-driving van in 1986, but it was self-
driving in the sense that if it was on a closed, straight, clear stretch of highway,
with no other vehicles around, it could go from point A to point B and then stop.
But it was not self-driving in the sense that it was sensing its surroundings and
making decisions about how to manoeuvre through its environment.
Artists also got in on this and started making machines that could be controlled
by people who came to look at or participate in the art. You could switch some
knobs and dials and set the settings on the painting robot and it would make you
some unique piece of art based on your input. You could reduce or increase the
light going onto some sensors and cause different physical reactions in a
machine. And these were meant as an exploration of those same philosophical
questions about people and intelligence and whether what we do can be reduced
to an algorithm.
In terms of important software and algorithms that were developed early on,
one was the concept of semantic nets, which are a logical way to write out the
rules of grammar: if you have a series of words, how do you recognise which
one is the verb? How do you know if an object is a singular or a plural? What are
the different semantic pieces that do different jobs in the sentence, how do they
connect to each other, and how do they affect each other? You need to have all
of this worked out in order to have automatic translation between different
languages; you need to be able to map the purposes of the different parts of speech.
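Here is a toy sketch of the semantic-net idea: words and concepts as nodes,
with labelled links a program can follow to answer simple questions. The tiny
vocabulary is invented; real systems were far richer.

```python
# A toy semantic net: concepts as nodes, relationships as labelled edges.
# The vocabulary is invented for illustration.

edges = [
    ("dog", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
    ("dog", "subject_of", "chases"),
    ("chases", "object", "ball"),
]

def related(node, relation):
    """Follow edges with the given label out of a node."""
    return [b for (a, r, b) in edges if a == node and r == relation]

def is_a(node, category):
    """Walk 'is_a' links transitively: is a dog an animal?"""
    for parent in related(node, "is_a"):
        if parent == category or is_a(parent, category):
            return True
    return False

print(is_a("dog", "animal"))        # True, via dog -> mammal -> animal
print(related("chases", "object"))  # ['ball']: which piece does which job
```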
Another really important tool that gets used all the time in this kind of work is
automatic classification, which is finding mathematical ways to identify groups
that are similar to each other, and split items apart from things that they're
different from.
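As a hedged illustration, here is a toy version of the classic k-means grouping
idea: repeatedly assign each point to its nearest centre, then move each centre
to the middle of its group. The two-dimensional data points are invented.

```python
# Toy automatic classification: split invented 2D points into two groups
# by refining two group centres (the k-means idea).

points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centres = [(0, 0), (10, 10)]        # rough initial guesses

def closest(p, centres):
    """Index of the centre nearest to point p (squared distance)."""
    return min(range(len(centres)),
               key=lambda i: (p[0] - centres[i][0]) ** 2
                           + (p[1] - centres[i][1]) ** 2)

for _ in range(5):                  # a few refinement rounds
    groups = [[], []]
    for p in points:
        groups[closest(p, centres)].append(p)
    # move each centre to the mean of its group (never empty with this data)
    centres = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
               for g in groups]

print(centres)   # one centre near (1.3, 1.3), the other near (8.3, 8.3)
```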
Fast forward to today, and the ways that we’re picturing the purpose of AI, I
would contend, belong in these categories. We have tools for prediction and
classification. We want recommendations. We want image recognition. We want
to know what people are going to want to do.
We've got tools for decision making, in business, perhaps, where there's a lot of
information on what customers did last year, and we want to predict how this
year’s situation (economically, politically, culturally) is going to affect what
they're going to do this year. We want to know what would be an effective
strategy, and we can set up algorithms to estimate the impacts of those factors
and make predictions.
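As a minimal sketch of that kind of prediction, here is a one-factor version: fit
a straight line to some invented monthly figures and extrapolate to next month.
Real business models juggle many factors at once, but the shape of the idea is
the same.

```python
# Toy prediction: ordinary least squares fit of a line y = a + b*x
# to invented monthly sales figures, then extrapolate one month ahead.

months = [1, 2, 3, 4, 5, 6]
sales = [10, 12, 13, 15, 16, 18]    # invented numbers

n = len(months)
mean_x = sum(months) / n
mean_y = sum(sales) / n

b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, sales))
     / sum((x - mean_x) ** 2 for x in months))
a = mean_y - b * mean_x

print(a + b * 7)   # predicted sales for month 7 (about 19.4)
```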
There are a lot of things I think are just fun toys, like art generators and text
generators, and robots that do just one thing, like robot pets or little vacuum
robots. They’re not a robot sidekick or friend, or even a robot maid from the
Jetsons; they do one thing. And those are fun, and because their purpose is
simple they make good illustrations of how AI works, but I don't know if
they’re deeper than that.
And then, like I was saying at the beginning, there are automation tools, which
cover a range from helpful to harmful. Automated monitoring and recording can
be very helpful, but there are times where societally we might see automation as
a bad thing. If there are tasks that no longer require a person because an AI
system can do them reliably, then what are the people meant to do? And this is
where science fiction comes in, because if there are highly capable AI systems,
then, what people do in that world, well, there are a lot of options. Sometimes
you get a post-scarcity luxury world, sometimes you get an economic underclass
who resent the AIs, sometimes you get all-out war – it depends on what the
author is trying to say.
There are a lot of different roles in a story that an AI can play. One of them I’d
call indifferent gods: they are immensely more capable than people, have a
sense of self, and aren’t particularly worried about things on the human scale.
Iain M. Banks has a series of space opera novels with these wildly advanced AI
ships, and they make for very interesting characters, because they are distinctly
not people, they are not worried about people-scale things.
One kind of artificial intelligence that we actually don't use much in this course,
but that is interesting to think about, is people who have moved into a virtual world.
They've stopped being physical people, they’ve uploaded their consciousness
into the cloud. Are they different to characters who started out as artificial
intelligences? What are the strengths and weaknesses of uploaded people versus
physical people? There are great stories to be told there.
And then I would contend that the negative AI characters in fiction usually fall
into the category of “clearly not people”. Perhaps they are drones rather than
individuals, the Cybermen from Doctor Who or the Borg from Star Trek. Are they
AIs? They’re not people, not really, and people don’t want to be assimilated
because they don’t want to lose something important about themselves.
Or you’ve got the systems that are purely rational, with no emotions, no ability
to be persuaded, and they've gone too far. Examples are things like Skynet in
the Terminator movies, Cylons in Battlestar Galactica, or the computer system in
I, Robot. They have come to the logical conclusion that people are dangerous
and must be stopped, and because they are perfectly rational and highly
powerful, then all of the drama is in figuring out what people are going to do,
how they are going to use some unique people-ness to overcome the all-
powerful, all-seeing computer system.
Modern AI applications range from virtual assistants and smart home devices to
healthcare diagnostics and recommendation algorithms. While some applications
aim to enhance convenience and efficiency, others raise concerns about privacy,
bias, and societal impact.
Ultimately, the study of AI encompasses both its practical applications in the real
world and its rich portrayal in storytelling, offering insights into human behaviour,
values, and societal dynamics.