
Readings for Tomorrow--LAO* and LRTDP



Part of what we will do tomorrow is to consider optimal policy
construction algorithms for
FOMDPs that make it look more like the heuristic search we have seen
for the rest of the semester.

In particular, we saw that classical planning can be seen as A*
search, and belief-space planning can be seen as
AO* search. Typical AO* search algorithms work on acyclic graphs (note
that AO* can be seen as a
problem decomposition framework, and cycles imply that you are
reducing a problem indirectly to itself).
 The LAO* paper below shows that FOMDP policy construction can be seen
as AO* search on cyclic
graphs.

LAO*
 http://www.cse.msstate.edu/~hansen/papers/laostar.pdf
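
To see how the cyclic case differs from plain AO*, here is a minimal
sketch (not code from the paper; the generic MDP interface with
actions, successors, cost and goal-test methods, and the value table V
holding heuristic estimates for fringe states, are my own assumptions)
of the dynamic-programming step that LAO* runs on its currently
expanded subgraph. On an acyclic graph a single bottom-up sweep of
Bellman backups would do, as in AO*; with cycles the backups have to
be iterated until the largest change (the Bellman residual) is small:

    def bellman_backup(V, mdp, s):
        # Best expected cost-to-go at s under the current value table V.
        # V is assumed to hold a value (e.g., an admissible heuristic
        # estimate) for every state we look up.
        return min(
            mdp.cost(s, a) + sum(p * V[t] for t, p in mdp.successors(s, a))
            for a in mdp.actions(s)
        )

    def value_iterate(V, mdp, states, epsilon=1e-4):
        # Repeatedly back up the (possibly cyclic) set of states until
        # the largest change in any value drops below epsilon.
        while True:
            residual = 0.0
            for s in states:
                if mdp.is_goal(s):
                    continue
                new_v = bellman_backup(V, mdp, s)
                residual = max(residual, abs(new_v - V[s]))
                V[s] = new_v
            if residual < epsilon:
                return V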


Another idea for viewing value function computation is in terms of
fixed-depth expansion under a node
(as in game trees--in fact, in 471, I motivated game trees in terms of
RTDP). The LRTDP (Labeled RTDP) algorithm below improves on RTDP by
labeling states whose values have converged as "solved", which focuses
the trials and gives a clear termination condition.

LRTDP
 http://ftp.cs.ucla.edu/pub/stat_ser/R319.pdf
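
For concreteness, here is a minimal sketch of a single RTDP trial,
again assuming the same generic MDP interface and the bellman_backup
shown above (none of this is code from the paper). Each trial
simulates greedy action choice from the start state and backs up every
state it visits; LRTDP's addition is to mark states "solved" once
their values, and those of the states reachable from them under the
greedy policy, have converged, so later trials stop short of
already-solved regions:

    import random

    def greedy_action(V, mdp, s):
        # Action that looks best under the current value table V.
        return min(mdp.actions(s),
                   key=lambda a: mdp.cost(s, a) +
                       sum(p * V[t] for t, p in mdp.successors(s, a)))

    def rtdp_trial(V, mdp, s0, max_depth=1000):
        # One trial: back up each visited state, act greedily, and
        # sample the next state from the chosen action's outcomes.
        # As above, V is assumed to hold heuristic values for states
        # not yet backed up.
        s, depth = s0, 0
        while not mdp.is_goal(s) and depth < max_depth:
            V[s] = bellman_backup(V, mdp, s)
            a = greedy_action(V, mdp, s)
            successors, probs = zip(*mdp.successors(s, a))
            s = random.choices(successors, probs)[0]
            depth += 1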


rao