CSE 471 Homework 1

PART I: Agent/Environment Models

For the following questions, please keep your answers as brief as possible. There is no reason to fill full pages.

1. Consider the properties "accessibility", "static-ness", and "deterministic-ness" of environments. Assume that these are binary properties (yes/no) and give an example of an environment for each possible configuration of values of the variables (you can see that there will be 8 such configurations).

2. We talked about accessible/inaccessible, static/dynamic, and deterministic/non-deterministic classifications of an environment. Comment, as precisely as you can, on which of these classifications also depend on the agent's own capabilities. For example, can the same environment be accessible for one agent and inaccessible (or less accessible) for another? Deterministic for one agent and non-deterministic (or less deterministic) for another? Static for one agent and less static (dynamic) for another? Which properties/capabilities of the agent wind up being critical in changing the classification?

3. Is the design of a rational agent necessarily going to be easier for a partially accessible environment than for a completely inaccessible one? Give an example.

4. Is an agent that models its world at a finer level of granularity and does more deliberative reasoning necessarily going to do better performance-wise? Illustrate.

5. Use the vacuum-cleaner agent scenario we discussed in the book to give examples justifying your answers below.

5.1. Consider an environment E that is accessible for agent A1 and inaccessible for agent A2. Answer the following:
(i) Will A2 be able to achieve _any_ goals in E that A1 can achieve?
(ii) Will A2 be able to achieve _all_ goals that A1 can achieve?
(iii) For the goals that both A1 and A2 can achieve, will A2 be able to attain the same level of performance that A1 achieves, (a) in terms of the time it takes to figure out what actions it should do, and (b) in terms of the "quality" of those actions?

5.2. Answer question 5.1 above, but with respect to an environment E' that is deterministic for A1 and stochastic for A2.

PART II: Blind Search

6. It can be shown that, asymptotically, iterative deepening depth-first search expands (b+1)/(b-1) times as many nodes as depth-first search, when given a uniform tree of depth d and branching factor b. For the case where b=1, this formula evaluates to infinity. Does this make sense? If not, can you compute the ratio for this special case?

Extra credit: Show how you can derive the ratio (b+1)/(b-1) in question 6 above.

7. Suppose we have a uniform search tree of depth d and branching factor b, where all the solutions are at depth d, but the solutions are distributed *NON-UNIFORMLY*--they may all be clustered.

7.1. If you have to choose between using depth-first or breadth-first search, which would you choose?

7.2. Consider the following blind search strategy that works in iterations (a runnable sketch of this strategy is given after question 7.2.3):

    Loop for b' = 1 to b do
        Generate a "sub-tree" S(b') where, for each node in the original tree,
        only the first b' of its children are considered (the rest are ignored
        in this iteration)
        Search S(b') for a goal
        If a goal is found, get out of the loop and return it
    od

(Assume that the search of S(b') is done using the algorithm you chose in 7.1.)

7.2.1. Argue that this search is likely to do better on average than the best search you picked in 7.1.

7.2.2. Draw a tree for b=3 and d=3 and show the subtrees searched in each iteration.

7.2.3. In many problems, if you make the wrong first move, you will then go into a region of the search tree that is completely barren of goal nodes. Argue that the algorithm in 7.2 effectively improves your chances, on average, of doing well on such problems.
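The following is a minimal Python sketch of the strategy in question 7.2 (not part of the assignment). It assumes the tree is given by a children(node) function and an is_goal(node) test -- both hypothetical names -- and uses depth-first search as the inner algorithm, as suggested in 7.2.

# Sketch of the iterative strategy in question 7.2, under the assumptions above.
def iterative_broadening(root, children, is_goal, b, d):
    """Search S(b') for b' = 1..b; return the first goal found, else None."""
    for b_prime in range(1, b + 1):
        goal = dfs_first_children(root, children, is_goal, b_prime, d)
        if goal is not None:
            return goal                      # goal found in S(b'): stop widening
    return None

def dfs_first_children(node, children, is_goal, b_prime, depth_left):
    """Depth-first search of S(b'): expand only the first b_prime children."""
    if is_goal(node):
        return node
    if depth_left == 0:
        return None
    for child in children(node)[:b_prime]:   # the rest are ignored this iteration
        found = dfs_first_children(child, children, is_goal, b_prime,
                                   depth_left - 1)
        if found is not None:
            return found
    return None

For b=3 and d=3 (as in question 7.2.2), the b'=1 iteration searches a single root-to-leaf path, the b'=2 iteration a binary subtree, and the b'=3 iteration the full tree.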
PART III: A* Search

8. Suppose we have a heuristic h that over-estimates h* by at most epsilon (i.e., for all n, 0 <= h(n) <= h*(n) + epsilon). Show that A* search using h will find a goal whose cost is guaranteed to be at most epsilon more than that of the optimal goal.

9. Consider the following variant of the A* algorithm that we call the New-A* algorithm. New-A* uses a queue management strategy that is different from A*'s: if there is a goal node on the open list and it has the lowest f-value among all nodes on the open list, return it and terminate. If not, select a non-goal node with the largest f-value from the open list and expand it, adding its children to the open list. (A sketch of this selection rule is given after question 12.) Suppose the heuristic used by New-A* is admissible.
a. Is New-A* guaranteed to terminate on all goal-bearing trees?
b. If New-A* returns a solution, is it guaranteed to be optimal?
c. Is New-A* going to be more or less efficient in general than A*?
Justify your answers.

10 & 11: Do the questions at the URL http://rakaposhi.eas.asu.edu/cse471/hw2-f06-qn3-4.pdf

12. Consider a version of A* where we use as the evaluation function f(n) = (2 - w) g(n) + w h(n), where w is between 0 and 2 and h is an admissible heuristic (a sketch of this evaluation function is also given below). Consider the case where w = 2. What sort of search is this? Is it guaranteed to be optimal? Will it be optimal if h = h*? How does it compare to hill-climbing search with a goodness function h(n)?
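For reference, here is a minimal Python sketch of the New-A* selection rule described in question 9. It assumes hypothetical helpers expand(node), is_goal(node), and f(node) supplied by the caller, and is only meant to make the queue discipline concrete.

def new_a_star(start, expand, is_goal, f):
    """New-A*: return a goal only when it has the lowest f-value on the open
    list; otherwise expand the non-goal node with the LARGEST f-value."""
    open_list = [start]
    while open_list:
        f_min = min(f(n) for n in open_list)
        goals = [n for n in open_list if is_goal(n)]
        if goals and min(f(n) for n in goals) == f_min:
            return min(goals, key=f)         # a goal has the lowest f-value: return it
        # otherwise expand the non-goal node with the largest f-value
        node = max((n for n in open_list if not is_goal(n)), key=f)
        open_list.remove(node)
        open_list.extend(expand(node))       # add its children to the open list
    return None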
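And a small sketch of the evaluation function in question 12, with g and h as hypothetical caller-supplied path-cost and heuristic functions; the comments only spell out the arithmetic at the endpoints of w.

def f(node, g, h, w):
    """Evaluation function of question 12: f(n) = (2 - w) * g(n) + w * h(n),
    for 0 <= w <= 2."""
    return (2 - w) * g(node) + w * h(node)

# w = 1 gives f(n) = g(n) + h(n), the ordinary A* evaluation function;
# w = 0 gives f(n) = 2 * g(n);
# w = 2 gives f(n) = 2 * h(n), the case asked about in question 12.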