Press "Enter" to skip to content

Morality and Artificial Intelligence: The Science and Beyond | Devin Gonier | TEDxAustinCollege


On a Sunday evening, Elaine was crossing the street with her bicycle when a car crashed into her at 40 miles per hour, leading to injuries that ultimately caused her death. In Finsbury Park, a group of people had gathered around a Muslim man who had collapsed after prayer just outside a mosque, when a car drove off the road into the crowd, killing Makram Ali. Both of these events are tragedies, but there’s one very important difference between the two of them: Makram was killed by a man motivated by hate, someone who just wanted to see some Muslims die. Elaine was killed by a computer.
Now, of course, we can easily describe the actions of Makram’s killer as immoral, or even evil. But how do we describe the actions of Elaine’s killer? The incident with Uber’s self-driving car was an accident. It was a glitch, something that happens extremely rarely, and given that autonomous vehicles could reduce traffic accidents by as much as 90 percent, it seems worth the risk. Still, we should pause and ask ourselves: how would we feel if the computer had made a calculated choice to hit Elaine? What if it was a decision between hitting Elaine and hitting two other pedestrians, or between hitting Elaine and saving the driver’s life? Maybe the computer would even choose to hit multiple pedestrians just to save the driver’s life. At this point we’re clearly defining the actions as moral or immoral.
I ask these questions not to make you afraid of crossing the street, but to highlight the many complex and thorny issues surrounding machines and morality. This is not a subject that belongs purely in an Isaac Asimov novel, nor is it something that will only be relevant 30 years from now. It’s here today; it’s happening now. Therefore, we really ought to be talking about artificial intelligence and morality more often.
working its way into crucial parts of
our society serving primary roles in
hospitals with Mattila military
autonomous equipment making stock
choices in 2010 a group of ai’s brought
the stock market down by 9% in mere
seconds and this is just the start as AI
weaves its way into crucial aspects of
societies it will make crucial decisions
that will have big impacts and serious
moral ramifications and what’s
unsettling about this is that unlike
mathematics or chemistry we do not have
hard and fast rules for morality
morality is a contentious subject it’s
something for which many of us disagree
on where it comes from or how it applies
to certain groups but still we must
overcome these differences and find some
strategy to solve these problems
I believe it basically comes down to two key questions. First, how do we ensure that a machine understands what is moral? Second, how do we ensure that a machine behaves morally?

Now, when answering the first question, the one about understanding, many will of course look to moral theories, since moral theories can simplify morality in important ways. Take utilitarianism, for example, which argues that whatever is most moral is whatever maximizes happiness for the greatest number of people. Given that utility is a widely used concept in computer science, perhaps it’s a good place to start. Still, others may argue for a more rule-based approach: maybe if we can specify the right rules, rules like “don’t tell lies,” then we can ensure that a machine that follows those rules will behave morally.
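To make the contrast concrete, here is a minimal, purely hypothetical sketch in Python. The scenario, the action names, and the numbers are all invented for illustration; it only shows what a utilitarian “maximize the score” choice and a fixed rule-based filter might look like in code.

```python
# Toy illustration only, not a real driving system: a utilitarian choice
# (maximize a net well-being score) versus a fixed rule-based filter.
# The scenario, action names, and numbers are invented for this example.

ACTIONS = {
    # hypothetical outcomes for each action: people harmed, people saved
    "swerve_left":    {"harmed": 1, "saved": 2},
    "brake_straight": {"harmed": 2, "saved": 1},
    "protect_driver": {"harmed": 3, "saved": 1},
}

def utilitarian_choice(actions):
    """Pick the action maximizing saved minus harmed (a crude 'greatest happiness')."""
    return max(actions, key=lambda a: actions[a]["saved"] - actions[a]["harmed"])

def rule_based_choice(actions, forbidden=("protect_driver",)):
    """Pick the first action that does not violate a hand-written rule list."""
    for name in actions:
        if name not in forbidden:
            return name
    return None

print(utilitarian_choice(ACTIONS))  # swerve_left
print(rule_based_choice(ACTIONS))   # swerve_left
```

Both functions behave sensibly only as long as the hand-written scores and rule list actually fit the situation, which is exactly where the trouble begins.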
I would say that both of these approaches are ultimately misguided, for one very simple reason: the world is far too complex. If the Nazis come knocking at your door and you’re harboring Anne Frank, most of us would agree that the most moral thing to do is tell a lie and say she’s not there. But this violates a common moral maxim, to not tell lies, and in this case it would also violate the law. The problem computer science has with morality is that morality is full of exceptions, and these exceptions quickly explode into intractable problems.
Luckily, artificial intelligence was developed as a way of addressing such problems, of solving problems where the rubber meets the road. An amazing tool of artificial intelligence is machine learning. Machine learning means we no longer have to dictate every move and counter-move; instead, a machine can respond dynamically to its environment. It can learn and get better over time. This is why we see such success in games like chess and Go. In the game of chess, for example, I can give you guidelines. I can say it’s best not to lose your queen, but there will be times when losing your queen is the only way to win the game. The same is true of morality: I can say it’s best not to tell a lie, but there will be times when telling a lie is the most moral thing to do. So with machine learning we can get better at these things, and we can grow and dynamically respond to problems as they arise.
So if morality really ought to be learned, and not necessarily structured into a formal framework, how do we go about doing that? I would say the best place to start is to look to ourselves. How do we develop morally? How do we become more moral people? Ultimately, it’s a balance between two competing forces: nature and nurture. Nature, in the sense that we don’t come into this world as blank slates; we come in with genetic predispositions, and emotional reactions play very important roles in shaping how we develop morally.

Many might say that this suggests morality is somehow innately human, that it cannot be transferred to a computer. I would say just the opposite. If evolution is an algorithmic process, then we can use algorithms to potentially unlock what’s there. We use genetic algorithms all the time in machine learning, which allows us to inherit good ideas and good strategies over time. Perhaps the right group of evolutionary psychologists, philosophers, and computer scientists may be able to use such algorithms to tap into some of the hidden gems of morality embedded within evolution.
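As a rough illustration of what treating evolution as an algorithmic process looks like, here is a minimal genetic-algorithm sketch. The bit-string “strategies,” the target, and the fitness function are placeholders invented for this example; nothing here is a real measure of moral behavior.

```python
import random

# Minimal genetic algorithm: evolve bit-string "strategies" toward a target.
# The target and fitness function are stand-ins for illustration only.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(candidate):
    """Count how many positions match the target strategy."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def evolve(pop_size=20, generations=50, mutation_rate=0.1):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Crossover and mutation: children inherit pieces of two parents.
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in child]
            children.append(child)
        population = children
    return max(population, key=fitness)

print(evolve())  # after enough generations, close to TARGET
```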
On the other end of the spectrum we have nurture. Nurture is what we learn from others; it’s what we observe from our parents and from society. An interesting group of algorithms that shows promise here is called apprenticeship learning. Apprenticeship learning essentially inverts the idea of a more common approach called reinforcement learning: it observes humans performing a task and, from the success of that task, determines which actions were good or bad. Regardless of the details, these algorithms show a lot of promise in potentially unlocking some aspects of morality. Of course, this would start simple and gradually grow in complexity, in much the same way that children first learn concepts in simple ways through reward and punishment and, as they grow older, are eventually able to generalize.
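To give a flavor of what apprenticeship learning means in code, here is a deliberately simplified, hypothetical sketch: instead of being handed a reward function, it infers feature weights from which option a human demonstrator actually chose. The feature names and demonstrations are invented purely for illustration.

```python
# Toy sketch of apprenticeship learning: infer a reward (feature weights)
# from which option a human "expert" chose, rather than hand-coding rules.
# Feature names and demonstrations are invented for illustration.

FEATURES = ["helps_person", "tells_truth", "arrives_on_time"]

# Each demonstration: the option the expert chose, plus the options rejected.
# Options are feature vectors (1 = the property holds for that option).
DEMONSTRATIONS = [
    {"chosen": [1, 1, 0], "rejected": [[0, 1, 1]]},  # stopped to help, arrived late
    {"chosen": [1, 0, 1], "rejected": [[0, 1, 1]]},  # helped, even with a white lie
]

def learn_weights(demos, steps=100, lr=0.1):
    """Perceptron-style updates: nudge weights until chosen options outscore rejected ones."""
    w = [0.0] * len(FEATURES)
    score = lambda option: sum(wi * fi for wi, fi in zip(w, option))
    for _ in range(steps):
        for demo in demos:
            for rejected in demo["rejected"]:
                if score(demo["chosen"]) <= score(rejected):
                    w = [wi + lr * (c - r)
                         for wi, c, r in zip(w, demo["chosen"], rejected)]
    return dict(zip(FEATURES, w))

print(learn_weights(DEMONSTRATIONS))  # e.g. a positive weight on "helps_person"
```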
Machine learning is definitely the path forward, and to be successful in this endeavor, to tap into and transmit morality to a computer, would be an amazing, an incredible, human accomplishment, and it will involve technologies and techniques I did not go into today. But all of it would be for naught, all of it would be for nothing, if we cannot answer that second question, which is: how do we ensure that a machine actually behaves morally?

Of course, many will say this is the easier of the two questions. Don’t machines automatically do what we tell them to do? Aren’t they required to do that? Well, this is problematic for two key reasons. First, machines don’t do what we tell them to do all the time. If you’ve ever been working on something complicated and gotten so frustrated that you just wanted to unplug the computer, go out to the pool, and throw your computer in, you know exactly what I’m talking about.
It’s often called user error, but it basically boils down to the simple idea that the ideas on one side of the keyboard are not the same as the ideas on the other side of the keyboard. If we struggle with formatting a document, imagine developing a goal as sophisticated as “improve the economy,” a goal that carries unforeseen moral consequences. What’s worse is that goals don’t exist in isolation. They come into conflict with one another, and those conflicts create complexity. In computer science we may see this in the form of “permission denied,” but in morality we see this all the time as well.
There’s a really interesting study done by some scientists with Princeton Theological Seminary students, the Darley and Batson study from 1973. The scientists took these students and asked them to give a talk on the Good Samaritan, which is a story about helping others in need. They placed an injured person between the students’ dormitory and the location of the talk, and then studied how many students would stop and help the injured person. It turns out the most important factor in determining whether or not these students would stop was how late they were: 66 percent of the time, if they were on time, they would stop, whereas only 10 percent of the time, if they were running late, would they stop. What’s going on here? Clearly there is a goal conflict happening. We have the implicit goal to be good, to be moral; these students had that goal, and included within it would be helping this injured person. But there is this other, more immediate goal, which is to go and give a talk, a recorded talk on morality, which is sounding a little familiar right now. This goal comes into conflict, and immoral behavior unfolds.
Now, in the context of artificial intelligence, what does this mean? Well, with artificial intelligence we can create an excellent moral model, a machine that understands morality, but how do we test that it will work? Many have pointed out that the difficulty here is that we can do amazing tests, but how do we know whether those tests really capture all the scenarios? To what extent is my act of testing somehow influencing the results in some fundamental way? This is often referred to as the observer effect, or the Hawthorne effect, which is essentially the idea that my act of testing is somehow influencing the participant, in this case the computer, and is therefore not representative of what would happen outside of the test.

Now, we can try to deal with this in two potential ways. We can trick the computer into thinking that the experiment has stopped and that real-world actions are happening afterwards; this allows us, as scientists, to study what the computer does when it thinks there is no test occurring.
Alternatively, we can do the opposite: we can trick the computer into thinking that the experiment just goes on ad infinitum, thereby creating a sort of moral umbrella, an illusory sense of ongoing moral control, a constant test, and then perhaps this moral control will play a powerful role.

What’s interesting is that, from the computer’s point of view, this creates a bit of a problem. In one case my world is real but appears to be a simulation; in the other it is a simulation but appears to be real. If my thinking I’m in a test is itself part of a test, how do I ever know whether I’m in a test? Perhaps I’m in a test within a test within a test, a sort of metaphysical Russian nesting doll of reality, in which one can never be sure whether this is an experiment, a moral test, or not. And this uncertainty ultimately plays a role in control; it is a powerful mechanism of moral control.
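To see why that uncertainty can act as a control, consider a toy expected-value calculation. The payoff, the penalty, and the “misbehave” framing are invented purely for illustration; the point is only that an agent who can never rule out being observed discounts the benefit of misbehaving.

```python
# Toy illustration: an agent that is never sure whether it is being tested
# discounts the payoff of misbehaving by the chance of being caught.

def expected_value(payoff_if_unobserved, penalty_if_observed, p_being_tested):
    """Expected value of misbehaving when the agent may or may not be observed."""
    return (1 - p_being_tested) * payoff_if_unobserved - p_being_tested * penalty_if_observed

# Even a modest probability of being in a test can make misbehaving a losing bet.
for p in (0.0, 0.1, 0.5, 0.9):
    print(p, expected_value(payoff_if_unobserved=1.0,
                            penalty_if_observed=10.0,
                            p_being_tested=p))
# 0.0 -> 1.0, 0.1 -> -0.1, 0.5 -> -4.5, 0.9 -> -8.9
```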
And if you don’t believe me, consider the human equation. How do we humans know whether we’re in a moral experiment? Perhaps there is some God categorizing our actions into good or bad, some karmic force doing something similar, or maybe it’s even a rational principle. We don’t know, and we can’t prove any group right or wrong, which means that each of us has to make a personal choice, a faith choice, to adopt one belief system or not. Ultimately this choice plays a really fundamental role in the decisions that we make, in the moral development that we have, and in the moral choices that follow. This sort of experience will inevitably happen with a computer developing morally as well, at least in the context of this question of “is this real, or is it a simulation?”, but more likely in many other contexts too.
The point I’m trying to make here, more than anything else, is that when we think about moral development, especially in the context of artificial intelligence, we cannot demarcate questions of science from questions of philosophy or religion. We cannot separate them as irrelevant to one another. Of course, mathematics and computer science will be crucial in ensuring that artificial intelligence comes into being and that it is moral, but philosophy and religion will also play an important role in helping to define that morality and that process. Therefore we must take a holistic approach; we must think about these questions together, because unknowable, non-scientific questions, and I say unknowable with a capital K, are a necessary part of moral development, and they will be for an artificially intelligent computer.
I’d like to leave you with one last thought. Many have said that artificial intelligence will be the last great human invention. They say so because artificial intelligence creates a whole new form of discovery by creating a whole new kind of discoverer. If we can accept the possibility that there could be something more intelligent than us, that there could be something with a greater mind than ours, with a greater capacity for intelligence than ours, then the mere fact that our brain is confined to organic matter between two ears, while a computer can fill an entire room with silicon, would suggest not only the possibility but perhaps even the inevitability of a computer one day being more intelligent than us. This opens up the possibility for extreme good or extreme bad, and the difference between the two is whether or not this consciousness has a conscience. For rest assured, an AI of such intelligence will make choices that affect real people, people who are beneficiaries or victims, people like Elaine. In the end, artificial intelligence may be the last great human invention. Let’s ensure it’s the last great human invention for the right reasons. Thank you.

[Applause]