
Thinking Cap on Intelligent Agents

 By now I hope you have all had enough time to sign up for the class blog. As I said, participation is "required" in this class. Participation involves doing the assigned readings, being attentive in class, and, most importantly, taking part in the class blog discussions.

While you are free to start any discussion you like on the blog, I will occasionally throw in topics I want you to discuss. For historical reasons, we will call them thinking-cap topics. Here is the first discussion topic for your edification.

As for the quantity vs. quality of your comments, I suggest you go by Woody Allen's sage advice in Love and Death (start at 2:30).   --Rao

Here are some topics on which I would like to see discussion/comments from the class. Please add your thoughts as comments to this blog post. Also, please check any comments already posted to see if your viewpoint has already been expressed (remember--this is not a graded homework, but rather a discussion forum). Make sure to sign your posts with your first name (so others can respond to you by name).

1. Explain what you understand by the assertion in class that it is often not the hardest environments but rather the medium-hard ones that pose the most challenges to the agent designer (e.g., stochastic is harder in this sense than non-deterministic; multi-agent is harder than much-too-many-agents; partially accessible/observable is harder than fully non-observable).

2. We said that accessibility of the environment can be connected to the limitations of sensing in that what is accessible to one agent may well be inaccessible/partially accessible to another. Can you actually think of cases where partial accessibility of the environment has nothing to do with sensor limitations of the agent?

3. Optimality--given that most "human agents" are anything but provably optimal, does it make sense for us to focus on the optimality of our agent algorithms? Also, if you have more than one optimality objective (e.g., cost of travel and time of travel), what should be the goal of an algorithm that aims to get "optimal" solutions?
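(To make the multi-objective part of question 3 concrete: one standard answer, sketched below as my own illustration rather than anything we covered in class, is to return the set of Pareto-optimal solutions--those not beaten on *all* objectives by any other solution. The (cost, time) options are made-up numbers.)

```python
def pareto_front(solutions):
    """Return the Pareto-optimal subset of `solutions`.

    Each solution is a tuple of objective values, where lower is better
    on every objective. A solution is dropped if some other solution is
    at least as good on every objective and differs from it (i.e., it
    is dominated).
    """
    front = []
    for s in solutions:
        dominated = any(
            other != s and all(o <= v for o, v in zip(other, s))
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front

# Hypothetical (cost, time) travel options:
options = [(100, 5), (80, 7), (120, 4), (80, 6), (150, 10)]
print(pareto_front(options))  # -> [(100, 5), (120, 4), (80, 6)]
```

Here (80, 7) is dominated by (80, 6) (same cost, less time), and (150, 10) is dominated by everything; the remaining three represent genuinely different cost/time trade-offs, and an "optimal" algorithm could defensibly return any or all of them.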

4. Prior Knowledge--does it make sense to consider agent architectures where prior knowledge and representing and reasoning with it play such central roles (in particular, wouldn't it be enough to just say that everything important is already encoded in the percept sequence)? Also, is it easy to compare the "amount" of knowledge that different agents start with? 

5. Environment vs. agent complexity--One big issue in agent design is that an agent may have very strong limitations on its memory and computational resources. A desirable property of an agent architecture is that we can instantiate it for any <agent, environment> pair, no matter how complex the environment and how simplistic the agent. Comment on whether or not this property holds for each of the architectures we saw.

6. We said goals and performance metrics are set from outside the agent. But we often talk about agents "setting their own goals". How does this square with the external nature of the performance metric?

7. Anything else from the first two classes that you want to hold forth on.