Emotions and Intelligent Agency

Subbarao Kambhampati (rao@asu.edu)
Tue, 15 Dec 1998 14:23:21 -0700

I wanted to bring up, but forgot, the general issue of emotions in
intelligent agency. Now that you are getting ready for letter grades, it
seems like an appropriate time to talk about emotions ;-)

You will note that even our most ambitious agent designs in Chapter 2
had no place for "emotions". Some people have said that "human level
intelligence" cannot be achieved by computers since computers don't have
"feelings".

As a graduate student, I remember sneering at those arguments, lumping
them in the same bag as claims that "flight is not possible for humans
since they don't have wings". True, computers may not weep at the beauty
of a Mozart symphony or feel awed by the mellifluousness of a Thyagaraja
krithi. But we don't really care about that. As long as computers are
rational, we have achieved our goals, right?

And everyone knows that emotions are a hindrance rather than a help to
rational behavior. Indeed, we have all been told that good reasoning
occurs when you have a "cool head" free of emotions. Emotions and
feelings have thus been characterized as a sort of "appendix" of human
neurological function--useless stuff that we nevertheless have to put
up with.

But is the common wisdom really accurate? Recent research in
neurobiology suggests that human rationality is irrevocably tied to
human emotions. Consider patients with brain lesions in their limbic
systems, lesions whose only effect is to disable the parts of the brain
that produce emotions and feelings. If the common wisdom were right,
such patients should be doing spectacularly well by any rational
standard of performance and achievement. Instead, neurologists find
that such patients are almost always failures in their daily lives,
making decisions that are both irrational and counter to their own
well-being. The theory goes that, given the maze of choices we face every
day, the only way to make decisions in a reasonable amount of time is to
have emotional biases--and that refusing to make a choice dooms us more
often than making the occasional wrong one.

Another interesting line of research suggests that "psychopaths" (the
mass killers, the compulsive married-with-children watchers, etc.) are
often characterized by an underdeveloped emotional system. In one
experiment, several people, including known psychopaths, were asked to
play a gambling game in which the odds are such that you can both win
and lose spectacularly. The researchers found that after a succession
of losses, normal subjects become very reluctant to continue
gambling--and their EEGs (electroencephalograms) show an interesting
correlation between this decision and the emotional system.
Specifically, as the time comes to take the next gamble, a sharp rise in
limbic-system activity shows that some negative emotion dampens the
perceived utility of gambling again.

In contrast, the psychopaths keep playing until they lose all their
money, without particularly caring about the string of losses. Their
EEGs show a marked absence of emotional peaks during their decision
making.

So, avoiding the foolish "if you don't have emotions, you must be a
psychopath" conclusion, what do we learn from the above?

We learn that human rationality is not just a result of logic and
reason; the lowly "emotions" play a very important part in it. Loss of
the emotional side can have serious material consequences for human
rationality.

This means that there is a role for emotions in intelligent agent design
after all. The only question, of course, is how we "use" emotions in
decision making. This is still largely an open area--we may have to
wait for the neurologists to figure out how humans use them. The best
current understanding is that emotions put us in the "right region of
the decision space", whence we can use our logical apparatus to find the
best decision within that region. Without emotions, we are stuck
navigating the whole decision space with our reasoning methods, which
can be quite time-consuming (necessitating shortcuts that lead to
low-utility decisions). In other words, emotions can be seen as biasing
the reasoning process toward certain regions of the overall search tree.
Put this way, we can already see a rough outline of how emotions might
be used in intelligent agent design.
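
To make that outline a little more concrete, here is a minimal sketch
in Python of the two-phase picture--my own illustration, not anything
from the text or from Damasio's work. Everything in it (emotional_bias,
deliberate_utility, the particular numbers) is made up for the example:

# Toy decision problem: many options, each with a true utility that is
# expensive to compute. The "emotional bias" is a cheap, coarse filter
# that vetoes whole regions of the option space before deliberation
# begins. (All names here are hypothetical--an illustration only.)

import random

def deliberate_utility(option):
    # Stand-in for slow, careful reasoning about a single option.
    return -(option - 42) ** 2 + random.gauss(0, 5)

def emotional_bias(option, bad_memories):
    # Cheap gut reaction: options resembling past losses "feel bad"
    # and are vetoed without any deliberation at all.
    return all(abs(option - m) > 10 for m in bad_memories)

def decide(options, bad_memories):
    # Phase 1: emotion prunes the search space (fast).
    candidates = [o for o in options if emotional_bias(o, bad_memories)]
    # Phase 2: deliberate reasoning picks the best survivor (slow).
    return max(candidates, key=deliberate_utility) if candidates else None

options = range(100)
# With bad memories near 5 and 90, deliberation runs over only 64 of
# the 100 options; the rest never cost us a utility evaluation.
print(decide(options, bad_memories=[5, 90]))

The point of the two-phase structure is exactly the one above: the
expensive deliberation only ever runs inside the region the emotional
system has already deemed safe. Remove the bias, and you pay for
deliberate_utility on every single option.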

Let me reiterate--before logging off--that I am not arguing that
computers need to appreciate music, weep at funerals, etc., to be called
intelligent. We can rightly constrain their spheres of operation to
preclude these. What I am saying is that even within limited spheres,
the ability to do efficient rational reasoning may depend on the
existence of emotions.

That is all.

If you are interested in this stuff, you might check out
"Descartes' Error: Emotion, Reason, and the Human Brain" by the
neurologist Antonio R. Damasio. [The title refers to Descartes because
he was the first "Western" philosopher to argue for body-soul duality
(the picture later derided as the "ghost in the machine") and to suggest
that reasoning, rather than emotion, underlies human intelligence.]

signing off
Rao
----
"not to make-up your mind, but to open it.
To make the agony of decision-making so intense
that you can escape only by thinking..."

----------------
Subbarao Kambhampati, Dept. of CSE, Arizona State U, Tempe AZ 85281
email: rao@asu.edu Ph: 602 965-0113 Fax: 602 965-2751
http://rakaposhi.eas.asu.edu/rao.html