Over-sensing and LCW actions.. (as well as planning with sensing as internet planning)
- To: "Rao Kambhampati" <rao@asu.edu>
- Subject: Over-sensing and LCW actions.. (as well as planning with sensing as internet planning)
- From: "Subbarao Kambhampati" <rao@asu.edu>
- Date: Fri, 28 Mar 2008 06:02:49 -0700
- Sender: subbarao2z2@gmail.com
1. We talked at length about how over-sensing during execution can delay things and lead to decidedly non-intelligent behavior (e.g. Sphexishness).

One relevant issue here is that if you *know* the state of the world in some aspect, and you know that you haven't done anything to change it, then there should be no reason for you to check it again.
Assuming a single-agent world, if you start with a complete state and do deterministic actions, then you never have to look (as we discussed). Even if we start with an incomplete initial state, we may know "everything" about *some aspects* of the world. As long as the ensuing actions--our own--don't destroy the completeness of that knowledge, we don't need to sense *that aspect* of the world.
It all starts to look like managing "closed world" assumptions. In classical planning, we know the initial state completely, so we can start with the closed-world assumption, and no action modifies that assumption.
In general belief-space planning, we don't have the full closed-world assumption, but we may have *partial* closed-world assumptions. As we do actions, we may lose or acquire closed-world assumptions. In a desktop (or unix) world, for example, we may start out knowing the names of the files in the current directory as well as the sizes of all the files in the directory. After we run a latex command, we still know the names of all the files in the directory (even though latex makes new files, we know what they will be: .aux, .bbl etc.). We however no longer know the sizes of all the files (since the sizes of the generated .aux and .bbl files depend on the files you latexed in a complex way, and you can't model that a priori). So we lose closed-world knowledge of the file sizes. If we need that, we will have to do an "ls -s" action (a sensing action). If we then happen to do an "rm *" action in that directory, we again get full closed-world knowledge of both the names and the sizes of the files in the directory.
In the paper below

http://www.cs.washington.edu/homes/etzioni/papers/xii-aaai94.pdf

Golden et al. formalize this notion of starting with, and tracking, local closed-world assumptions (LCWs). Their main contribution is to model not only the normal effects of actions, but also their meta-effects on closed-world assumptions (e.g. see the latexing and rm'ing actions above). Neat paper to read.
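To make that bookkeeping concrete, here is a minimal Python sketch. Everything in it (the Action class, apply_action, the lcw set of strings) is invented for illustration--it is not the paper's representation--but it shows actions carrying meta-effects that destroy or establish LCW statements alongside their normal effects:

    # Toy model of tracking local closed-world (LCW) statements.
    # All names here are made up for illustration.

    class Action:
        def __init__(self, name, destroys=(), establishes=()):
            self.name = name
            self.destroys = set(destroys)        # LCW statements this action invalidates
            self.establishes = set(establishes)  # LCW statements this action (re)creates

    def apply_action(lcw, action):
        """Update the set of LCW statements after executing an action."""
        return (lcw - action.destroys) | action.establishes

    # Initially we have closed-world knowledge of both the names and the
    # sizes of the files in the current directory.
    lcw = {"names(cwd)", "sizes(cwd)"}

    # latex creates files whose names are predictable but whose sizes are
    # not: names(cwd) survives, sizes(cwd) is lost.
    latex   = Action("latex",  destroys={"sizes(cwd)"})
    # rm * empties the directory, restoring full knowledge of both.
    rm_star = Action("rm *",   establishes={"names(cwd)", "sizes(cwd)"})
    # ls -s is a pure sensing action that re-establishes sizes(cwd).
    ls_s    = Action("ls -s",  establishes={"sizes(cwd)"})

    lcw = apply_action(lcw, latex)     # names survive; sizes(cwd) is lost
    assert "sizes(cwd)" not in lcw     # must sense before trusting file sizes
    lcw = apply_action(lcw, ls_s)      # sensing re-establishes sizes(cwd)
    lcw = apply_action(lcw, rm_star)   # a causative action can also restore LCW
    print(sorted(lcw))                 # ['names(cwd)', 'sizes(cwd)']

The point of the sketch: deciding whether a sensing action is needed reduces to a set-membership check against the current LCW statements, rather than re-sensing everything after every action.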
=========================================
2. When discussing progression planning in the presence of sensing actions, I pointed out that there are two non-deterministic branches: one that picks a causative action to execute, and the other that picks a sensing action to execute. I also mentioned that if you always pick the causative-action branch, you get "conformant" or "no-sensing" plans (if you succeed).
A related question is: what happens if you always pick only the sensing-action branch? You get a pure sensing plan.
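Here is a hedged Python sketch of that branching structure (all names, such as plan_from, are hypothetical--this is not any particular planner's code). Note how emptying one action set or the other recovers the two special cases:

    # Progression search over belief states (sets of possible world states).
    def plan_from(belief, goal, causative, sensing, depth=4):
        """Return a (possibly branching) plan, or None if none is found.
        causative: list of (name, fn) where fn maps a state to a state.
        sensing:   list of (name, fn) where fn maps a state to an observation.
        Passing sensing=[] yields conformant (pure causative) plans;
        passing causative=[] yields pure sensing plans."""
        if all(goal(s) for s in belief):
            return []                      # goal guaranteed in every possible state
        if depth == 0:
            return None
        # Branch 1: pick a causative action (progress every possible state).
        for name, act in causative:
            rest = plan_from(frozenset(act(s) for s in belief),
                             goal, causative, sensing, depth - 1)
            if rest is not None:
                return [name] + rest
        # Branch 2: pick a sensing action (split the belief state on the
        # observation; each outcome needs its own contingent sub-plan).
        for name, obs in sensing:
            outcomes = {}
            for s in belief:
                outcomes.setdefault(obs(s), set()).add(s)
            if len(outcomes) < 2:
                continue                   # this sensor distinguishes nothing here
            branches = {}
            for o, sub in outcomes.items():
                p = plan_from(frozenset(sub), goal, causative, sensing, depth - 1)
                if p is None:
                    break
                branches[o] = p
            else:
                return [(name, branches)]  # sense, then branch on the observation
        return None

The sensing branch returns a contingent plan: after sensing, the agent follows whichever sub-plan matches the observation it actually made.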
We can see a use for a "pure causative plan" (conformant plan): an agent that has no sensors has to make do with just those. Of what use can pure sensing plans be?
Well--if all you can do is sense some databases, then your plans will be just pure sensing actions. In particular, plans whose main purpose is to gather information can be thought of as pure (or almost entirely) sensing plans. When you do planning on the web, for example, most of your actions involve sensing (look at this database, take a value from there, plug it into a sensing query for this other database, etc.)--actions that leave the world as it is and only modify *your knowledge* of it. Of course, you can also sometimes have causative actions (e.g. updates--credit card transactions etc.) that modify the state of some database, and not just the state of your knowledge.
So, not surprisingly, planning for information gathering consists mostly of pure sensing actions. The good thing about sensing actions is that there are never any negative interactions among them (your brain doesn't explode because you learned things in the wrong order ;-). So, for sensing planning the big issue is not so much subgoal interactions, but rather reducing the amount of sensing. Not surprisingly, the LCW machinery discussed above winds up being relevant.
See http://rakaposhi.eas.asu.edu/ijcai-ig.pdf which talks about how LCW information can be used to reduce the number of information sources that the agent has to sense to get all the answers to a query.
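As a toy illustration of that pruning idea (the representation below is invented for this sketch, not the paper's actual algorithm): if an LCW statement says some source is already complete for a query, we can answer from that source alone and skip the rest.

    # Each source advertises the queries it can answer; lcw records which
    # (source, query) pairs are known to be *complete*.
    sources = {
        "db1": {"papers_by('Etzioni')", "papers_by('Golden')"},
        "db2": {"papers_by('Etzioni')"},
        "db3": {"papers_by('Golden')"},
    }
    lcw = {("db1", "papers_by('Etzioni')")}  # db1 known complete for this query

    def sources_to_sense(query):
        """Return the sources we must query to be sure we have all answers."""
        relevant = [s for s, qs in sources.items() if query in qs]
        # If some source is LCW-complete for the query, that one suffices.
        for s in relevant:
            if (s, query) in lcw:
                return [s]
        # Otherwise we must sense every relevant source.
        return relevant

    print(sources_to_sense("papers_by('Etzioni')"))  # ['db1']: LCW prunes db2
    print(sources_to_sense("papers_by('Golden')"))   # ['db1', 'db3']: no LCW, sense both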
(The paper http://rakaposhi.eas.asu.edu/ig-tr.pdf provides a somewhat dated tutorial introduction to planning for information gathering.)
Rao