February 28, 2000. Planning Notes Taken by Romeo Sanchez.

AGENDA
- Scheduling
  - Remainder of PCP
  - Discussion of iterative approaches. Read the tutorial on knowledge-based scheduling; suggested pages: 72-93, 135-140.
  - Constraint posting
- EBL/DDB (if we have time)
- Smith (metric/time)

First important point: Sign up for your meeting with Dr. Rao, so that you can agree on what you are going to do for your term/implementation project.

To start our discussion: we found a bug on page 64 of the tutorial, in the SP-PCP dominance conditions for case 2. The correct condition is: if SP(j->i) >= 0 and SP(i->j) < 0, then post j->i [case 2].

PCP analysis

The PCP method uses a slack-based analysis, where

    SP(i->j) = [lft(j) - est(i)] - [dur(i) + dur(j)]

is the total amount of slack available if i is ordered before j. The algorithm basically follows these steps:
- Pick a pair of operations (i, j):
  - pick the task pair with the minimum amount of slack;
  - pick the ordering within that pair that leaves the maximum amount of slack.

These two steps are known as min-slack selection (choose the operation pair with the overall minimum SP) and max-slack posting (post the precedence constraint that leaves the maximum remaining slack). We can write this as:

    choose (i, j):            bs(i, j) = min{SP(i->j), SP(j->i)} / normalizing factor
    pick ordering for (i, j): max{SP(i->j), SP(j->i)}

so that, after committing to a pair, you choose the order that maximizes the remaining slack. This technique can be seen as most-constrained-variable / least-constrained-value ordering. To clarify these concepts, a suggestion is to read page 117 of the article "A Generic Framework for Constraint-Directed Search and Scheduling" by Beck and Fox.

As you can see in the bs equation, we have a normalizing factor. If we do not take this factor into consideration, we can fall into the trap of treating long-duration tasks as less critical than short-duration tasks, which is not necessarily true.
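The min-slack selection / max-slack posting steps above can be sketched in Python. The task data is invented for illustration, and the square-root normalization below is an assumption (one plausible bias factor; the exact normalizing factor PCP uses should be checked against the tutorial):

```python
# Sketch of PCP's min-slack selection and max-slack posting.
# Task data (est, lft, dur) is invented for illustration.
tasks = {
    "A": {"est": 0, "lft": 9,  "dur": 3},
    "B": {"est": 1, "lft": 10, "dur": 4},
    "C": {"est": 0, "lft": 14, "dur": 2},
}

def sp(i, j):
    # Slack remaining if we post i -> j:
    # SP(i->j) = [lft(j) - est(i)] - [dur(i) + dur(j)]
    return (tasks[j]["lft"] - tasks[i]["est"]) - (tasks[i]["dur"] + tasks[j]["dur"])

def bs(i, j):
    # Normalized ("biased") slack: divide the minimum slack by a factor
    # derived from the min/max slack ratio, so long-duration pairs are not
    # judged less critical just because their absolute slack is larger.
    # The sqrt factor here is an assumption, not necessarily PCP's exact choice.
    lo, hi = sorted((sp(i, j), sp(j, i)))
    s = lo / hi if hi > 0 else 1.0
    return lo / (s ** 0.5) if s > 0 else lo

pairs = [("A", "B"), ("A", "C"), ("B", "C")]

# Min-slack selection: pick the unordered pair with the smallest biased slack.
i, j = min(pairs, key=lambda p: bs(*p))

# Max-slack posting: order the chosen pair to leave the most slack.
order = (i, j) if sp(i, j) >= sp(j, i) else (j, i)
print(order)   # the precedence constraint to post
```

On this toy data the tightest pair is (A, B), and A -> B leaves more slack than B -> A, so A -> B is posted.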
Basically we are looking at slack relative to the total available range; that is why we need to normalize.

Some comparisons between the PCP algorithm and the ORR-FSS algorithm:

                    ORR-FSS                       PCP
    Encoding:       Point based                   Ordering (between tasks)
    Propagation:    Edge finding                  ARC-B
    Variable ord.:  Resource-contention based     Slack based
    Search:         Standard with chronological   Systematic search
                    backtracking

One important issue that has remained open until now is that no one has tried combining the different approaches to test which techniques can work in both settings. Which encoding is best, then? Here we have to consider not only the number of variables in the encoding but also the size of the domain of each variable. For example, ordering encodings have a larger number of variables, but only 2 values per variable. Point-based encodings have a smaller number of variables, but larger domains.

There are also other differences between these two approaches. In point-based encodings, search assigns a start time to each task (defining exactly the start time, similar to a state-based approach). In ordering encodings, you have to figure out the ordering and then find any topological sort that satisfies it (a partial order over start and end times, similar to the partial-plan approach). Any shortest-path computation then yields a topological sort (a numeric computation, not search anymore).

So scheduling has characteristics similar to planning. For example, in the point-based case we know exactly how the schedule is being constructed: if we find a sequence that is wrong, we just kill the sequence and backtrack. With ordering encodings we do not know exactly which state we are in, so this becomes a little costlier. That leads us to the next conclusion: if you know the state information, then it is easier to check which constraints hold, and it is easier to specify domain control knowledge.
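The ordering-encoding claim above (once the precedences are posted, start times follow from a topological sort by numeric computation, not search) can be illustrated with a minimal sketch; the task names, durations, and constraints are invented:

```python
from collections import defaultdict, deque

# Invented example: precedence constraints already posted by the scheduler.
durations = {"A": 2, "B": 3, "C": 1}
precedes = [("A", "B"), ("A", "C"), ("B", "C")]   # (i, j): i must finish before j starts

def schedule(durations, precedes):
    # Kahn's algorithm: any topological order of the precedence graph is a
    # consistent schedule; earliest start times then follow by a numeric
    # longest-path computation, with no backtracking search involved.
    succ = defaultdict(list)
    indeg = {t: 0 for t in durations}
    for i, j in precedes:
        succ[i].append(j)
        indeg[j] += 1
    start = {t: 0 for t in durations}
    queue = deque(t for t in durations if indeg[t] == 0)
    order = []
    while queue:
        i = queue.popleft()
        order.append(i)
        for j in succ[i]:
            start[j] = max(start[j], start[i] + durations[i])
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    return order, start

order, start = schedule(durations, precedes)
print(order, start)
```

For this instance the only consistent order is A, B, C, with earliest start times A=0, B=2, C=5.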
This observation about state information applies to planning as well as to scheduling.

Until now we have studied scheduling with single-capacity resources. Scheduling people are not yet able to handle multi-capacity resources in a clean way. With a single-capacity resource, conflict resolution is binary; in other words, it involves just a pair of tasks. This is completely different from multi-capacity resources, where "m" tasks can interact (the computation of the heuristics is completely different, since the interaction is no longer between a pair of actions). In the tutorial, the systematic approaches for scheduling end on page 70.

- Iterative approach

In an iterative approach, you construct a schedule and then move to different states (you shift) when some constraints are violated. This approach of minimal modifications is not always easy (in the case of planning it makes planning harder). Iterative approaches basically make only local changes. Some related concepts: temporal planning, which is just scheduling + action selection; and mixed-initiative planning and scheduling, which expects an expert user in the loop (during the process) who helps the planner or scheduler come up with a good result.

The next question in our discussion is: what do people do once they have found a conflict? Read page 71 of the tutorial for an outline of the different approaches to handling conflicts. The choice depends on what kind of search you want to implement. For example, in systematic search you need to backtrack, so you may want to remember your failure so that you can avoid it next time. Chronological backtracking, backjumping, dependency-directed backtracking (DDB), and constraint recording (EBL) are some of the techniques that can be employed for conflict resolution.
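A minimal sketch of the baseline among these techniques, chronological backtracking, on an invented toy instance; backjumping, DDB, and EBL differ mainly in how far back they return on failure and what they record about the conflict:

```python
def backtrack(assignment, variables, domains, consistent):
    # Chronological backtracking: on failure, undo only the most recent
    # choice and try its next value. Backjumping / DDB / EBL instead
    # analyze the conflict to jump further back, or record the failure
    # reason as a new constraint to avoid rediscovering it.
    if len(assignment) == len(variables):
        return dict(assignment)
    var = variables[len(assignment)]
    for val in domains[var]:
        assignment[var] = val
        if consistent(assignment):
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
        del assignment[var]   # chronological undo of the last choice
    return None

# Toy instance: place two tasks into distinct slots of a unit-capacity resource.
variables = ["x", "y"]
domains = {"x": [0, 1], "y": [0, 1]}

def no_overlap(a):
    return len(set(a.values())) == len(a)   # all assigned slots distinct

print(backtrack({}, variables, domains, no_overlap))
```

Here the search first tries y=0 alongside x=0, detects the overlap, backtracks chronologically to y, and succeeds with y=1.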
There are other procedures, considered incomplete search approaches, for example:
- Heuristic backtracking, where you just jump around the search space (the heuristic should control this jumping); local search.
- Restarting: restart from the top, but you should guarantee that the next run explores a genuinely new portion of the space; otherwise there is no reason for doing it (this is the randomness issue). Restarts basically involve random permutations of values or variables. If you have a good heuristic, only heuristic-equivalent classes should be permuted; otherwise you lose the properties of the heuristic.

There are some variations of the iterative approaches which can be used for scheduling. We can consider iterative broadening, which is like iterative deepening except that at the leaf nodes you do not look in contiguous regions. We should also consider heavy-tailed distributions (Selman 1997 paper). In general, run-time distributions decay exponentially, but on some problems a solution is found very easily on some runs and is very hard to find on others (a heavy tail). For this special case the restart procedures discussed before can really help, and you can restrict each run to a fixed number of backtracks (for example, a constant Max).

So what solutions do we have so far? We can implement a variation of systematic search using iterative broadening or random restarts, or just give up and use GSAT. The difference between systematic and local search lies in the basic property of systematic search: "do not look at any node more than once, and do not forget to look at any node", which is basically the opposite of what local search does.

One approach that tries to combine heuristic ideas and restarts is LDS (limited discrepancy search). It basically assumes that you have a good heuristic, but also that you can still reach a dead end, so you will need to backtrack.
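The restart scheme above (cut each run off after a fixed number of backtracks, then restart with a random permutation) can be sketched as follows; the cutoff constant, the toy instance, and the permutation scheme are all assumptions for illustration:

```python
import random

def solve_with_restarts(variables, domains, consistent, max_backtracks=5, seed=0):
    # Randomized restarts: run a bounded chronological search; if the
    # backtrack budget is exhausted (a heavy-tailed run), restart with a
    # fresh random permutation of the value orderings.
    rng = random.Random(seed)

    def search(assignment, order, budget):
        if len(assignment) == len(variables):
            return dict(assignment), budget
        var = variables[len(assignment)]
        for val in order[var]:
            assignment[var] = val
            if consistent(assignment):
                result, budget = search(assignment, order, budget)
                if result is not None:
                    return result, budget
            del assignment[var]
            budget -= 1               # count a backtrack against the cutoff
            if budget <= 0:
                return None, budget   # give up this run; caller restarts
        return None, budget

    while True:
        # Permute the value orderings on every restart. With a good
        # heuristic you would permute only heuristic-equivalent classes,
        # so the heuristic's guidance is preserved.
        order = {v: rng.sample(domains[v], len(domains[v])) for v in domains}
        result, _ = search({}, order, max_backtracks)
        if result is not None:
            return result

# Toy instance: two tasks in distinct slots (invented for illustration).
variables = ["x", "y"]
domains = {"x": [0, 1], "y": [0, 1]}

def distinct(a):
    return len(set(a.values())) == len(a)

solution = solve_with_restarts(variables, domains, distinct)
print(solution)
```

Note that a complete-but-unsatisfiable instance would make this loop forever; in practice the cutoff is grown across restarts to recover completeness.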
Normal backtracking revises the last choice made, but LDS assumes that your heuristic is more likely to be wrong at the higher levels of the tree (it has a higher probability of failing in upper levels), so it tries to backtrack to those nodes first. You basically have to remember the state of the search in this algorithm; with this design you can get close to iterative search, but with the advantage of using a heuristic. The disadvantage is that it is harder to keep track of the choices.

Behavior of LDS in search trees: [sketch of three trees, iterations 1, 2, 3: each iteration allows one more discrepancy from the heuristic's preferred path, and on failure the search goes back up to revise an earlier choice]

Next class...
- Dynamic backtracking (exploiting the local gradient). Check references for more information.
- EBL/DDB explanation

Send comments to rsanchez@asu.edu.