Motto: A day spent staring at the PowerPoint slides and listening to the
.WAV files by yourself
can save as much as 75 min in the class with Rao
(Here is a fully worked-out example of variable elimination)
Planning-graph heuristics continued: h-sum and h-level and their tradeoffs; h-relax as a superior middle ground. Extracting relaxed plans. PG heuristics with progression vs. regression planners. Issues with PG heuristics in the presence of action costs.
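To make the h-sum vs. h-level tradeoff concrete, here is a minimal sketch (not code from the lecture; the proposition names and levels are made up) assuming we already have each goal proposition's level of first appearance in a relaxed planning graph:

```python
# h-level takes the max first-appearance level of the goals: admissible
# but weak, since it ignores all but the "latest" goal.
def h_level(goal_props, first_level):
    return max(first_level[p] for p in goal_props)

# h-sum adds the levels: more informed, but inadmissible because it
# assumes the goals are achieved completely independently.
def h_sum(goal_props, first_level):
    return sum(first_level[p] for p in goal_props)

# Hypothetical first-appearance levels for three goal propositions.
first_level = {"at(A)": 1, "have(key)": 2, "open(door)": 3}
goals = ["at(A)", "have(key)", "open(door)"]

print(h_level(goals, first_level))  # 3
print(h_sum(goals, first_level))    # 6
```

A relaxed-plan heuristic sits between these two: it extracts an actual (relaxed) plan from the graph and counts its actions, so shared subgoals are not double-counted the way h-sum counts them.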
Partial satisfaction planning--where goals have utilities and you can pick which goals to work on (taking the net benefit as cumulative utility minus cumulative action cost). Digression: All-but-dissertation as a great partial satisfaction planning problem. Generalizing it with more general reward models, as well as stochastic dynamics, to get to MDPs. Optimal solutions to MDPs as optimal policies. How the policy depends on the reward structure.
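The policy/reward connection can be seen in a toy sketch (an assumed two-state example, not one from class): value iteration computes optimal values, and the greedy policy read off those values shifts whenever the rewards do.

```python
def value_iteration(states, actions, T, R, gamma=0.9, eps=1e-6):
    """T[s][a] maps next-state -> probability; R[s] is the state reward."""
    V = {s: 0.0 for s in states}
    while True:
        newV = {s: max(R[s] + gamma * sum(p * V[s2]
                                          for s2, p in T[s][a].items())
                       for a in actions)
                for s in states}
        if max(abs(newV[s] - V[s]) for s in states) < eps:
            return newV
        V = newV

def greedy_policy(states, actions, T, V, gamma=0.9):
    """Pick, in each state, the action with the best expected value."""
    return {s: max(actions,
                   key=lambda a: sum(p * V[s2] for s2, p in T[s][a].items()))
            for s in states}

states, actions = ["s0", "s1"], ["stay", "go"]
T = {"s0": {"stay": {"s0": 1.0}, "go": {"s1": 1.0}},
     "s1": {"stay": {"s1": 1.0}, "go": {"s1": 1.0}}}  # s1 is absorbing
R = {"s0": 0.0, "s1": 1.0}

V = value_iteration(states, actions, T, R)
pi = greedy_policy(states, actions, T, V)
print(pi["s0"])  # "go": the reward at s1 pulls the policy toward it
```

Setting R["s1"] to 0 (or negative) flips the optimal action at s0, which is the point: the policy is a function of the reward structure, not of the dynamics alone.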
How to learn evaluation functions? Segue into learning. The idea of classification learning. The idea of a representation for the hypotheses. The brute-force view of classification learning as picking the hypothesis that best matches the training data. The size of the hypothesis space.
A brief discussion of the Naive Bayes classifier and a comparison of its performance to a decision-tree learner.
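For reference, a minimal Naive Bayes sketch over categorical attributes (the tiny dataset is made up, not the one used in class), with simple add-one smoothing:

```python
from collections import Counter, defaultdict

def train_nb(examples):
    """examples: list of (attribute_tuple, label). Returns count tables."""
    label_counts = Counter(y for _, y in examples)
    feat_counts = defaultdict(Counter)     # (label, position) -> value counts
    for x, y in examples:
        for i, v in enumerate(x):
            feat_counts[(y, i)][v] += 1
    return label_counts, feat_counts

def predict_nb(x, label_counts, feat_counts):
    """Pick the label maximizing P(label) * prod_i P(x_i | label)."""
    total = sum(label_counts.values())
    best, best_p = None, -1.0
    for y, cy in label_counts.items():
        p = cy / total
        for i, v in enumerate(x):
            counts = feat_counts[(y, i)]
            # Add-one smoothing so unseen attribute values don't zero out p.
            p *= (counts[v] + 1) / (cy + len(counts) + 1)
        if p > best_p:
            best, best_p = y, p
    return best

data = [(("sunny", "hot"), "no"), (("rainy", "cool"), "yes"),
        (("sunny", "cool"), "yes"), (("rainy", "hot"), "no")]
model = train_nb(data)
print(predict_nb(("sunny", "hot"), *model))  # "no"
```

The attributes are treated as conditionally independent given the label--the "naive" assumption--which is what lets the model get by with per-attribute counts instead of the full joint distribution, and why it often holds its own against decision-tree learners despite its simplicity.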