Press "Enter" to skip to content

Do robots feel emotions? | Sarah Cosentino | TEDxLakeComo


Translator: Roberto Minelli
Reviewer: Michele Gianella
So, BB-8, do you feel like collaborating today?
Excuse us,
this is our very first interactive presentation,
so we are both quite nervous.
Hello!
Do you want to say hi? You don't?
Are you going to collaborate today?
Come on, we don’t have much time.
Go to sleep!
BB-8?
BB-8!
Go to sleep!
Alright – you’ll have to excuse it, it’s a bit nervous.
So, let me turn you off,
I’ll turn you off, because we have to talk about some serious stuff.
Let’s see if we can do this.
Come on, shut up for a second!
Excuse us.
This is my world,
and this is my dream:
a future in which robots will be among us,
in our daily lives,
and possibly will help us with our daily activities,
at work, at school, maybe they’ll even help us do our homework.
Unfortunately, this is all still to come,
first and foremost because robots
are complicated systems from an engineering perspective,
and also there’s still a lot of work to do
on their interface.
This is because, of course, to be able to use a robot
what we need is an intuitive interface,
we don’t want to read 2,000 pages of the user manual
just to turn it on.
At the moment,
this is the average interface of a rather simple machine.
I dare you to use this washing machine without reading the user manual first.
I dare you to use one of our washing machines in Tokyo
without having the slightest idea of what the kanji on it mean,
so you can well imagine the kind of interfaces we use
for slightly more sophisticated robots.
But, how can we develop a really intuitive interface?
It's interesting to know that lots of studies have shown
that the more technologically advanced machines and devices are,
the more naturally we tend to interact with them,
as if we were talking to a person.
Of course, this was comforting to me
because ending up shouting at the GPS
when it got me lost in the middle of nowhere
made me feel a bit stupid.
But science proves
that it’s a natural thing for us to do that.
So, in order to develop an intuitive interface,
the best thing would be to start
from how we interact with other people.
We took a model of human interaction
that, as you can see, is a bit complicated
because in order to formulate our response,
what comes into play is what the other person says or does,
as well as the other person’s attitude,
and the general context of the interaction.
All these factors are taken into consideration
by two parallel processes:
cognitive processes and emotional processes.
The cognitive process basically
consists of all the thinking and reasoning we do,
and it helps us formulate
a direct, sensible, possibly consistent and rational response
to the other person’s action or question.
On the other hand, emotional processes are not always rational:
they take into account many other factors
that we, sometimes, are unaware of
in terms of thought,
which sometimes results in totally unexpected responses.
So it is important,
in order to develop a totally intuitive interface,
to be able to properly interpret
and respond to both these processes.
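To make the two parallel channels a bit more concrete, here is a minimal sketch in Python. It is purely illustrative and not from the talk: the names Stimulus, cognitive_process and emotional_process are invented for the example. The same input is appraised by a cognitive channel, which produces the direct answer, and by an emotional channel, which decides how that answer is delivered.

```python
# Illustrative sketch only: two parallel channels feeding one response.
# All names here are hypothetical, invented for this example.

from dataclasses import dataclass

@dataclass
class Stimulus:
    words: str     # what the other person says or does
    attitude: str  # e.g. "friendly", "irritated"
    context: str   # general context of the interaction

def cognitive_process(stimulus: Stimulus) -> str:
    """Deliberate reasoning: a direct, consistent answer to the question or action."""
    return f"direct answer to {stimulus.words!r}"

def emotional_process(stimulus: Stimulus) -> str:
    """Appraisal of attitude and context; may modulate how the answer is delivered."""
    return "soften the tone" if stimulus.attitude == "irritated" else "neutral tone"

def respond(stimulus: Stimulus) -> str:
    # Both processes see the same input; the final response combines them.
    content = cognitive_process(stimulus)
    tone = emotional_process(stimulus)
    return f"[{tone}] {content}"

print(respond(Stimulus("Where is the station?", "irritated", "on the street")))
```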
Several studies – one of the most prominent being Mehrabian's –
suggest that only 7% of interaction
is associated with direct actions or words – with language.
The rest takes place in an indirect manner,
through body language and changes in the pitch of our voice,
so robots must take all this into consideration
in order to interact with us
intuitively and directly.
An interesting concept for us robotics experts
is the Uncanny Valley concept,
or “Valle perturbante”, as per the Italian Wikipedia page –
or, perhaps a bit more aptly, what is called the “discomfort zone”.
According to this theory,
the more an object, a robot, is similar to a person
in its interaction,
taking interaction in all its aspects –
not only physical appearance,
but also the way it moves, the way it interacts, the way it talks –
the more comfortable we are,
as if we were interacting with a person.
But at some point we fall into the “discomfort zone”,
when this object, this robot,
is perceived to be very similar to a human being
but one of these factors is not really up to scratch,
is not quite what we expect,
then we feel an unpleasant sensation,
we feel uneasy,
we are no longer able to interact properly,
and so we try to stop the interaction.
This results from our emotional response
to an inconsistency in the emotional response of the robot, as a rule,
so emotions are the bridge
that allows us to leave the discomfort zone
so that robots can be truly interactive
as we expect them to be.
Essentially, if you have read this book,
if men are from Mars and women are from Venus,
and they must learn to communicate with each other
based on each other’s expectations
otherwise there would be no dialogue,
robots come from Earth
and they too need
to interact with us based on our expectations.
Basically,
in order for us to build a totally intuitive man-robot interface,
the robot must, of course, be able to interpret correctly
and respond accordingly on the cognitive channel,
with direct responses to direct questions,
direct responses to direct actions,
but it must also be able
to interpret correctly and respond on the emotional channel,
which is a bit more indirect, a bit more subtle,
and sometimes
stands in direct contrast to the cognitive channel.
Clearly,
robots must know how to interpret emotions
to respond to us correctly,
but they must not, as people tend to think, feel emotions.
Why?
Simply because emotions
are the product of thought processes that are not entirely rational
and can therefore produce unexpected responses.
We don’t want the robot to have unexpected responses:
we want the robot to respond exactly as we expect it to do,
so we don’t want to fight with the robot as I’ve just done.
It is important that we make this distinction.
But from a practical point of view, how does communication occur?
So, we said
that communication mostly occurs on a non-verbal level.
Among non-verbal signals
are conscious signals that we are aware of making,
as well as nonconscious signals which we aren’t completely
aware of, of course.
So, these signals are divided into:
voice signals, paralanguage, anything that is not direct language,
exclamations, changes in the tone of our voice;
and then there is body language, movements.
There are also purely emotional signals,
and there are coded gestures,
gestures that everybody knows the meaning of.
Just think of me pointing in a given direction.
Between coded gestures and emotional signals
lie creative gestures,
which are generally used in art
and are gestures that we intentionally make,
so as to convey a specific emotion.
A robot must be able to interpret all of these signals correctly,
and correctly respond to all these signals.
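As a purely illustrative aside, the taxonomy of signals just described could be encoded along these lines. The category names mirror the talk; the data structure and the interpret function are assumptions made only for this sketch.

```python
# Hypothetical encoding of the signal categories mentioned above.

from enum import Enum, auto

class SignalType(Enum):
    PARALANGUAGE = auto()      # exclamations, changes in the tone of voice
    BODY_LANGUAGE = auto()     # posture and movements
    EMOTIONAL = auto()         # purely emotional, largely nonconscious signals
    CODED_GESTURE = auto()     # gestures with a shared meaning, e.g. pointing
    CREATIVE_GESTURE = auto()  # intentional gestures conveying a chosen emotion, as in art

def interpret(signal_type: SignalType, raw_data: str) -> str:
    """Route each detected signal to a category-specific interpreter."""
    handlers = {
        SignalType.CODED_GESTURE: lambda d: f"shared meaning: {d}",
        SignalType.PARALANGUAGE:  lambda d: f"tone shift detected: {d}",
    }
    # Fall back to a generic emotional appraisal for the other categories.
    return handlers.get(signal_type, lambda d: f"emotional cue: {d}")(raw_data)

print(interpret(SignalType.CODED_GESTURE, "pointing to the left"))
```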
How do we interact physically?
Physically we use our bodies.
There are a lot of changes taking place in our body, during interaction.
Some are visible –
facial expressions, body movements –
some are invisible
and generally are the changes brought about by emotional responses.
For example:
changes in the heartbeat, in our breathing,
in the blood pressure that skyrockets when we get upset, etc.
So, we human beings
generally have five senses,
which aren’t many,
yet can interpret very complex signals,
and with these five senses
we generally interpret
only the visible part of the changes in the body,
so facial expressions, the voice,
changes in the pitch of voice, the movements of the body.
In other words, if a robot is to be able
to interact with us in an intuitive way,
it has to be able to move correctly,
respond correctly, with a suitable tone
and make suitable facial expressions, if it comes with a face.
Robots, on the other hand,
come in many types, many sizes, many shapes,
and are generally equipped with many types of sensors,
which are only able to process very simple signals.
And so, for a robot to get
a clear picture of the global situation,
that is, of the cognitive and emotional state of its interlocutor,
it has to be able to use its sensors
to detect all the possible variations
in the body of those that are interacting with it.
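As a hedged sketch of what "using its sensors to detect variations in the body" could look like, here a few simple physiological readings are fused into a coarse estimate of the interlocutor's state. The sensor values, thresholds and labels are invented for illustration; a real system would rely on calibrated models rather than hand-written rules like these.

```python
# Toy sensor fusion: combine a few physiological cues into one coarse estimate.
# Thresholds and labels are invented; they are not from the talk or the lab.

def estimate_arousal(heart_rate_bpm: float, breathing_rate_bpm: float,
                     voice_pitch_hz: float, baseline_pitch_hz: float) -> str:
    score = 0
    if heart_rate_bpm > 100:                      # elevated heartbeat
        score += 1
    if breathing_rate_bpm > 20:                   # fast breathing
        score += 1
    if voice_pitch_hz > 1.2 * baseline_pitch_hz:  # pitch well above the speaker's baseline
        score += 1
    return ["calm", "engaged", "agitated", "very agitated"][score]

print(estimate_arousal(heart_rate_bpm=110, breathing_rate_bpm=22,
                       voice_pitch_hz=260, baseline_pitch_hz=200))
```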
And so, basically,
I have become, as they said yesterday, the psycho-physiologist of robots.
Interestingly, we mainly work with humanoids,
so we conducted studies on the posture of humanoids.
Clearly,
if a robot wants to express happiness, it needs the right posture;
if it wants to express sadness, a different posture will be needed.
All these studies were done on ordinary people, generally on video,
and applied to robots.
A robot fitted with a humanoid face, one of ours, from our laboratory,
has to be able to produce facial expressions
that look as much as possible
like the expressions we normally produce during our emotional interactions.
In my case, I am working with the saxophonist robot
that has to interpret the gestures of the dancer in this video,
extract the emotional information that the dancer wants to express
and reproduce it through a different medium: music.
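Purely as an illustration of the pipeline just described (extract the emotional content of the dancer's movement, then render it through music), here is a toy sketch. The movement features, emotion labels and musical parameters are all assumptions, not the laboratory's actual system.

```python
# Hypothetical gesture-to-music mapping: movement features -> emotion -> musical parameters.

def classify_emotion(speed: float, smoothness: float) -> str:
    """Toy classifier over two movement features, both assumed to lie in [0, 1]."""
    if speed > 0.6:
        return "joy" if smoothness > 0.5 else "anger"
    return "tenderness" if smoothness > 0.5 else "sadness"

def emotion_to_music(emotion: str) -> dict:
    """Map an emotion label to coarse musical parameters for a player robot."""
    table = {
        "joy":        {"tempo_bpm": 140, "dynamics": "forte",      "articulation": "staccato"},
        "anger":      {"tempo_bpm": 150, "dynamics": "fortissimo", "articulation": "accented"},
        "tenderness": {"tempo_bpm": 70,  "dynamics": "piano",      "articulation": "legato"},
        "sadness":    {"tempo_bpm": 55,  "dynamics": "pianissimo", "articulation": "legato"},
    }
    return table[emotion]

print(emotion_to_music(classify_emotion(speed=0.8, smoothness=0.7)))
```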
And now, to make you understand
how much more subtle interaction is,
I have brought my assistant,
who is a bit naughty, and I don’t know if it wants to collaborate or not.
This is to show you that a robot
that is not humanoid, that hasn’t got a face,
that hasn't got many ways to communicate other than its voice,
is still able to, possibly,
generate some kind of emotional response …
let’s see if we can do it.
BB-8, are you ready?
Hello!
Don’t jump off the stage, you did it yesterday already.
Will you say hello to the audience?
It’s scared, clearly, this is really the first time
we've held an interactive presentation.
Yes, yes, yes, don’t worry, don’t worry …
But it’s happy to be here.
Okay, what shall we do, shall we go?
Say bye to everyone …
Can we do this, BB-8?
Okay, fine, thanks.
(Applause)