
Re: HAL program..



At 08:02 PM 11/28/2001 -0700, Timothy.Warnock@asu.edu wrote:

Regarding the HAL program on PBS, it seemed that the AI research was pretty
disparate, with conflicting perceptions of what direction the field should
take. They seemed to dance around the idea of consciousness, referring to HAL
(talking about emotions) but without any defined concept of computer
consciousness (or human consciousness, for that matter). I wasn't sure whether
that was just for the purposes of the show, to entice the audience, or whether
AI researchers are really concerned with creating a conscious computer
mind. And if so, do they have an idea of what that consciousness would be
like?

This is an interesting observation. Speaking for myself, I don't really care whether or not AI creates/emulates consciousness... however, neither do I
see any reason why that is, ipso facto, unattainable. I think this is true of a large number of active researchers (as opposed to philosophers).

 The consciousness debate, to a large extent, is a lightning rod. Many otherwise brilliant people have taken a very anti-AI stance
just because they think it is not possible to emulate consciousness-- a sort of
 "If the human brain were so simple that we could understand it, we would be so simple that we couldn't"
stance. An interesting case in point is the set of books written by Roger Penrose--a brilliant mathematical physicist--trying to somehow show that one can never achieve computer consciousness, and that consciousness is somehow inherently biological.

[[As an aside, there are so many anti-computer-consciousness experts that some irate AI folks have started a (somewhat ungracious) award called the "Simon Newcomb" award. Simon Newcomb was a 19th/early-20th-century astronomer, one of whose claims to fame was a series of rather passionately argued articles on how "heavier than air" flight is "scientifically" impossible.]]


The discussion on emotions on that program is quite practical, IMO. It is easy enough to see that it is important for computers to "fake" some emotions if they are to interact with humans. There are all sorts of studies showing that people spend more time working on computer quizzes etc. if the computer shows an appropriate (if canned) emotional response when people falter. The whole Cynthia Breazeal bit with Kismet the cutie-pie robot in the HAL program is another case in point. Those big red lips and funny eyebrows of Kismet can have a utilitarian purpose beyond general cuteness. Rodney Brooks once said that since robots ultimately have to be "trained" by people, and people are not too patient with random ugly Windows PCs, it is better to exploit the evolutionary biases people have about teaching kids and make the robots behave like kids.

What is more surprising is that understanding emotions may be important even for making computer agents perform well on their own--even in the absence of any human interaction. Check out this mail I sent to a previous class:

http://rakaposhi.eas.asu.edu/cse471/emotions.html
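To make the "emotions as self-regulation" point concrete, here is a minimal sketch (purely illustrative--the class name and the frustration/threshold mechanics are my own invention, not from the linked mail): an agent tracks a "frustration" level that rises with consecutive failures and, past a threshold, forces it to abandon its current strategy. The emotion acts as a cheap meta-level control signal, not decoration for human observers.

    # Illustrative sketch: "frustration" as a meta-control signal.
    class FrustratedAgent:
        def __init__(self, strategies, threshold=3):
            self.strategies = strategies   # candidate problem-solving strategies
            self.current = 0               # index of the strategy in use
            self.frustration = 0           # consecutive-failure counter
            self.threshold = threshold     # failures tolerated before switching

        def act(self, problem):
            success = self.strategies[self.current](problem)
            if success:
                self.frustration = 0       # relief: success resets frustration
            else:
                self.frustration += 1
                if self.frustration >= self.threshold:
                    # Frustration boils over: switch strategies instead of
                    # banging on the same dead end forever.
                    self.current = (self.current + 1) % len(self.strategies)
                    self.frustration = 0
            return success

    # Toy demo: strategy A never solves the problem, strategy B always does.
    never = lambda p: False
    always = lambda p: True
    agent = FrustratedAgent([never, always], threshold=3)
    print([agent.act("some problem") for _ in range(6)])
    # Three failures, then frustration forces a switch:
    # [False, False, False, True, True, True]

A purely "rational" agent with no such signal would keep retrying the failing strategy; the emotion-like state is what buys the behavioral flexibility.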

Rao
-----------------
   I'd rather learn from one bird how to sing              
   than teach ten thousand stars how not to dance          
                                                           
              --You shall above all things be glad and young
                EE Cummings