
Re: Clarifications - Intro to AI class



At 05:09 PM 9/16/2003, Shivashankar.Janakiraman@asu.edu wrote:


Hi Dr. Rao,

I had a couple of questions about today's class.

1) You said that, in random restart hill climbing, we reset the problem to the
original state and then choose another random state.
In the case of tree traversal, doesn't that involve the overhead of remembering the
whole path? i.e., in the case of the haystack example you gave.
Or is it just the STATE that we remember?

Notice that in hill climbing, the complete state of the search is captured by just the current
potential solution (we don't care about how we got to that solution).
So, restarting randomly means that you throw away the current potential solution
and replace it with a randomly generated potential solution.
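To make this concrete, here is a minimal Python sketch of random-restart hill climbing (the function names and the toy integer landscape are my own illustration, not from the class). Note that the only thing carried between restarts is the best solution seen so far; a restart simply discards the current candidate and hill-climbs again from a fresh random state.

```python
import random

def hill_climb(neighbors, score, start):
    """Greedy ascent: move to the best neighbour until none improves."""
    current = start
    while True:
        best = max(neighbors(current), key=score)
        if score(best) <= score(current):
            return current            # local optimum: no neighbour improves
        current = best

def random_restart(neighbors, score, random_state, restarts=20):
    """Random-restart hill climbing: each restart throws away the current
    solution and climbs again from a randomly generated state."""
    best = hill_climb(neighbors, score, random_state())
    for _ in range(restarts - 1):
        candidate = hill_climb(neighbors, score, random_state())
        if score(candidate) > score(best):
            best = candidate
    return best

# Toy landscape: maximize -(x - 7)^2 over the integers 0..20 (peak at 7).
score = lambda x: -(x - 7) ** 2
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 20]
random_state = lambda: random.randint(0, 20)
print(random_restart(neighbors, score, random_state))  # -> 7
```

Since this toy landscape has a single peak, every restart reaches 7; random restarts only start to matter on landscapes with multiple local optima.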


2) Neighbourhood - How do we choose the optimum region to explore?
As always, we apply the goal test function AFTER we explore the neighbourhood.
If we choose a bigger neighbourhood, in theory that means quicker results.
But given an initial state, there is a possibility that we overshoot the goal.
Please refer to the diagram: if we are at point A and we take 1500 steps, we end up at
B. Isn't this a waste of computation and an inefficient way?

Actually, since we care just about the goal and not the path to the goal,
we can stop as soon as we find a node that satisfies the goal criterion (you don't have to
wait for it to be picked for expansion). So the worry about overshooting doesn't arise.
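A minimal sketch of that point, with an illustrative successor function and goal test of my own invention: the goal test is applied to each neighbour as it is generated, so the search returns immediately instead of waiting for the node to be expanded.

```python
def first_goal_among_successors(state, successors, is_goal):
    """Apply the goal test to each child as it is generated; return the
    first goal node found rather than queueing it for later expansion."""
    for child in successors(state):
        if is_goal(child):       # we only need the goal, not the path
            return child
    return None                  # no goal in this neighbourhood

# Toy example: successors add 1, 2, or 3; goal is any multiple of 6.
successors = lambda n: [n + 1, n + 2, n + 3]
is_goal = lambda n: n % 6 == 0
print(first_goal_among_successors(4, successors, is_goal))  # -> 6
```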



Since most real applications deal with an infinite search space or an
infinitely large graph, this possibility of overshooting the goal looks ugly.

Whether or not you overshoot depends on whether you deal with a discrete or
continuous state space. Overshooting does arise in continuous versions of hill-climbing
procedures -- e.g., the gradient descent method we will talk about next class.
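To see the continuous-case overshooting concretely, here is a small gradient descent sketch (my own toy example, not from the lecture): on f(x) = x^2, a small step size converges toward the minimum at x = 0, while a step size larger than 1 jumps past the minimum by more than it came, so each step overshoots and the iterates diverge.

```python
def gradient_descent(grad, x0, step, iters=50):
    """Plain gradient descent: repeat x <- x - step * grad(x)."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Minimize f(x) = x^2, whose gradient is 2x and whose minimum is x = 0.
grad = lambda x: 2 * x
print(abs(gradient_descent(grad, x0=5.0, step=0.1)))  # tiny: converges to ~0
print(abs(gradient_descent(grad, x0=5.0, step=1.1)))  # huge: overshoots, diverges
```

With step 0.1 each iteration multiplies x by 0.8; with step 1.1 it multiplies x by -1.2, so the sign flips and the magnitude grows without bound.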


Towards the end of the search, or say after a reasonable amount of time is spent,
would it be sensible to vary the size of the neighbourhood, just as we vary P
(for greedy and random search)? I am just extrapolating the iterative deepening
idea here.

That could make sense. I can't offhand remember any variant of hill climbing
that does that, though.


Rao

Thanks,
Shankari