Example: Air Cargo Transport
Classical planning is the name given to early planning systems, which include both state-space and plan-space planning algorithms. The problem-solving agent can find sequences of actions that result in a goal state, but it deals with atomic representations of states and thus needs good domain-specific heuristics to perform well. The hybrid propositional logical agent can find plans without domain-specific heuristics because it uses domain-independent heuristics based on the logical structure of the problem. But it relies on ground (variable-free) propositional inference, which means that it may be swamped when there are many actions and states. For example, in the wumpus world, the simple action of moving a step forward had to be repeated for all four agent orientations, T time steps, and n² current locations. In response to this, planning researchers have settled on a factored representation, one in which a state of the world is represented by a collection of variables. We use a language called PDDL, the Planning Domain Definition Language, which allows us to express all 4Tn² actions with one action schema. PDDL describes the four things we need to define a search problem: the initial state, the actions that are available in a state, the result of applying an action, and the goal test.
Each state is represented as a conjunction of fluents that are ground, functionless atoms.
E.g. Poor ∧ Unknown
Actions are described by a set of action schemas that implicitly define the ACTIONS(s)
and RESULT(s, a) functions needed to do a problem-solving search.
The result of executing action a in state s is defined as a state s′, represented by the set of fluents formed by starting with s, removing the fluents that appear as negative literals in the action's effects (the delete list), and adding the fluents that appear as positive literals (the add list).
The goal is just like a precondition: a conjunction of literals (positive or negative) that may contain variables.
The air cargo transport problem involves loading and unloading cargo and flying it from place to place. The problem can be defined with three actions: Load, Unload, and Fly. The actions affect two predicates: In(c, p) means that cargo c is inside plane p, and At(x, a) means that object x (a plane or cargo) is at airport a.
Planning can also be done using a backward search. The idea is to start at the goal, and apply
inverses of the planning operators to produce subgoals, stopping if we produce a set of subgoals
that is satisfied by the initial state.
The set of all states that are predecessors of states in Sg is {s : there exists an action a such that RESULT(s, a) ∈ Sg}.
All of the heuristics we have suggested can suffer from inaccuracies. This section shows
how a special data structure called a planning graph can be used to give better heuristic
estimates.
We can search for a solution over the space formed by the planning graph, using an algorithm called GRAPHPLAN.
A planning graph is a polynomial-size approximation to this tree that can be constructed
quickly. The planning graph can’t answer definitively whether G is reachable from S0,
but it can estimate how many steps it takes to reach G. The estimate is always correct
when it reports the goal is not reachable, and it never overestimates the number of steps,
so it is an admissible heuristic.
A planning graph is a directed graph organized into levels: first a level S0 for the initial state; then a level A0 consisting of nodes for each ground action that might be applicable in S0; then alternating levels Si followed by Ai, until we reach a termination condition (two consecutive levels are identical, i.e., the graph has leveled off).
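Under the simplifying assumption that delete lists and mutexes are ignored (a relaxed planning graph), the alternating-level construction and the resulting level-cost heuristic can be sketched as:

```python
# Relaxed planning-graph sketch: fluent level S(i+1) contains S(i) plus
# the add effects of every action whose preconditions hold in S(i).
# The level cost of a goal fluent, the first level where it appears,
# never overestimates the true step count, so it is admissible.

def build_levels(s0, actions, max_levels=10):
    levels = [set(s0)]
    while len(levels) < max_levels:
        cur = levels[-1]
        nxt = set(cur)
        for a in actions:
            if a["pre"] <= cur:
                nxt |= a["add"]
        if nxt == cur:          # graph has leveled off
            break
        levels.append(nxt)
    return levels

# Ground instances from the air cargo domain (illustrative subset).
actions = [
    {"pre": {"At(C1,SFO)", "At(P1,SFO)"}, "add": {"In(C1,P1)"}},   # Load
    {"pre": {"In(C1,P1)", "At(P1,JFK)"}, "add": {"At(C1,JFK)"}},   # Unload
    {"pre": {"At(P1,SFO)"}, "add": {"At(P1,JFK)"}},                # Fly
]
levels = build_levels({"At(C1,SFO)", "At(P1,SFO)"}, actions)
level_cost = next(i for i, S in enumerate(levels) if "At(C1,JFK)" in S)
# At(C1,JFK) first appears at level 2: Load/Fly at level 0, Unload at level 1
```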
Goal: (and (Dinner) (Present) (not (Garbage)))
Problems:
Logic (FOL) as we had it referred to a static world. How do we represent change in FOL?
How do we represent actions that we can use to change the world?
Situation calculus provides a framework for representing change and actions, and for reasoning about them:
• Situation calculus is based on first-order logic: a situation variable models new states of the world, action objects model activities, and inference methods developed for FOL do the reasoning.
• It is a logic for reasoning about changes in the state of the world.
• The world is described by sequences of situations; changes from one situation to another are caused by actions.
• The situation calculus allows us to describe the initial state and a goal state, build a KB that describes the effects of actions (operators), and prove that the KB and the initial state lead to a goal state; the proof yields a plan as a side effect.
First-order logic lets us get around this limitation by replacing the notion of linear time with a notion of branching situations, using the representation called situation calculus.
An alternative is to represent plans as partially ordered structures: a plan is a set of actions and a
set of constraints of the form Before(ai, aj) saying that one action occurs before another.
Example:
Remove(Spare, Trunk ) and Remove(Flat, Axle) can be done in either order as long as they are
both completed before the PutOn(Spare, Axle) action.
Partially ordered plans are created by a search through the space of plans rather than
through the state space. We start with the empty plan consisting of just the initial state and
the goal, with no actions in between.
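The Before constraints above can be represented explicitly and linearized. This sketch uses Python's standard-library graphlib to show that any topological order of the constraints is a valid linearization of the partial plan:

```python
from graphlib import TopologicalSorter

# A partially ordered plan for the spare-tire example: the mapping sends
# each action to the set of actions that must come Before it.
before = {
    "PutOn(Spare,Axle)": {"Remove(Spare,Trunk)", "Remove(Flat,Axle)"},
}

order = list(TopologicalSorter(before).static_order())
# Both Remove actions precede PutOn(Spare,Axle); their relative order
# is unconstrained, so either linearization is a valid plan.
```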
GRAPHPLAN records mutexes to point out where the difficult interactions are.
SATPLAN represents a similar range of mutex relations, but does so by using the general
CNF form rather than a specific data structure. Forward search addresses the problem
heuristically by trying to find patterns (subsets of propositions) that cover the
independent subproblems. Since this approach is heuristic, it can work even when the
subproblems are not completely independent.
Example: for the Remote Agent planner that commanded NASA’s Deep Space One
spacecraft, it was determined that the propositions involved in commanding a spacecraft
are serializable. This is perhaps not too surprising, because a spacecraft is designed by its
engineers to be as easy as possible to control (subject to other constraints). Taking
advantage of the serialized ordering of goals, the Remote Agent planner was able to
eliminate most of the search. This meant that it was fast enough to control the spacecraft
in real time, something previously considered impossible.
The approach we take in this section is “plan first, schedule later”: that is, we divide
the overall problem into a planning phase in which actions are selected, with some ordering
constraints, to meet the goals of the problem, and a later scheduling phase, in which temporal
information is added to the plan to ensure that it meets resource and deadline constraints.
A typical job-shop scheduling problem consists of a set of jobs, each of which consists of a collection of actions with ordering constraints among them. Each action has a duration and a set of resource constraints required by the action.
Each constraint specifies a type of resource (e.g., bolts, wrenches, or pilots), the number of that
resource required, and whether that resource is consumable (e.g., the bolts are no longer
available for use) or reusable (e.g., a pilot is occupied during a flight but is available again when
the flight is over). Resources can also be produced by actions with negative consumption,
including manufacturing, growing, and resupply actions. A solution to a job-shop scheduling
problem must specify the start times for each action and must satisfy all the temporal ordering
constraints and resource constraints. As with search and planning problems, solutions can be
evaluated according to a cost function; this can be quite complicated, with nonlinear resource
costs, time-dependent delay costs, and so on.
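The scheduling phase described above can be sketched with the critical-path method: given durations and ordering constraints, compute each action's earliest start time. Action names and durations here are illustrative, and resource constraints are omitted for brevity:

```python
from functools import cache

# Illustrative single-job schedule (assumed names and durations):
# AddEngine1 -> AddWheels1 -> Inspect1, with ordering constraints only.
durations = {"AddEngine1": 30, "AddWheels1": 30, "Inspect1": 10}
preds = {"AddEngine1": [], "AddWheels1": ["AddEngine1"], "Inspect1": ["AddWheels1"]}

@cache
def earliest_start(a):
    # An action may start once every predecessor has finished.
    return max((earliest_start(p) + durations[p] for p in preds[a]), default=0)

makespan = max(earliest_start(a) + durations[a] for a in durations)
# makespan is 70: 30 + 30 + 10 along the single chain
```

A full job-shop solver would additionally check that no resource's capacity is exceeded at any time point, which is what makes the general problem hard.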
The central idea of aggregation is to group individual objects into quantities when the objects are
all indistinguishable with respect to the purpose at hand. In our assembly problem, it does not
matter which inspector inspects the car, so there is no need to make the distinction. (The same
idea works in the missionaries-and-cannibals problem.)
HIERARCHICAL PLANNING
Hierarchical planning in the real world requires an efficient and flexible knowledge representation, with clear semantics, for both planning and domain knowledge.
Hierarchical problem solving has been used as a method to reduce the computational cost of planning. The idea is to distinguish between goals and actions of different degrees of importance and to solve the most important problems first.
Its main advantage derives from the fact that by emphasizing certain activities while
temporarily ignoring others, it is possible to obtain a much smaller search space in which
to find a plan.
In hierarchical task-network (HTN) planning, a planning problem and its operators are organized into a set of tasks. A high-level task can be reduced to a set of ordered lower-level tasks, and a task can be reduced in several ways.
In artificial intelligence, hierarchical task network (HTN) planning is an approach to
automated planning in which the dependency among actions can be given in the form
of hierarchically structured networks.
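The reduction of a high-level task into ordered lower-level tasks can be sketched as follows. Task and method names here are assumed for illustration:

```python
# Toy HTN decomposition: a high-level task is refined into an ordered
# list of subtasks; refinement continues recursively until only
# primitive actions remain.

methods = {
    # Two alternative refinements of the same high-level task:
    "Travel": [["BuyTicket", "Board", "Fly", "Deplane"],
               ["Drive"]],
}

def decompose(task):
    if task not in methods:            # primitive action: no refinement
        return [task]
    subtasks = methods[task][0]        # pick the first refinement here
    return [p for t in subtasks for p in decompose(t)]

plan = decompose("Travel")
# plan is the primitive sequence BuyTicket, Board, Fly, Deplane
```

A real HTN planner would backtrack over the alternative refinements (here, Drive) when the chosen one fails, rather than always taking the first.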
Planning strategies:
Consider this problem: given a chair and a table, the goal is to have them match—have
the same color. In the initial state we have two cans of paint, but the colors of the paint and
the furniture are unknown. Only the table is initially in the agent’s field of view:
Init(Object(Table) ∧ Object(Chair ) ∧ Can(C1) ∧ Can(C2) ∧ InView(Table))
Goal (Color (Chair, c) ∧ Color (Table, c))
There are two actions: removing the lid from a paint can and painting an object using the
paint from an open can. The action schemas are straightforward, with one exception: we now
allow preconditions and effects to contain variables that are not part of the action’s variable
list. That is, Paint(x, can) does not mention the variable c, representing the color of the
paint in the can. In the fully observable case, this is not allowed—we would have to name
the action Paint(x, can, c). But in the partially observable case, we might or might not
know what color is in the can. (The variable c is universally quantified, just like all the other
variables in an action schema.)
Action(RemoveLid(can),
PRECOND:Can(can)
EFFECT:Open(can))
Action(Paint(x , can),
PRECOND:Object(x) ∧ Can(can) ∧ Color (can, c) ∧ Open(can)
EFFECT:Color (x , c))
1) Sensorless planning:
Sensorless planning is also known as conformant planning. This kind of planning is not based on any perception. The algorithm ensures that the plan reaches its goal no matter what the actual state turns out to be.
2) Contingent planning
Contingent planning is sometimes termed conditional planning and deals with bounded indeterminacy. The agent makes a plan, evaluates it, and then executes it fully or partially depending on the conditions observed.
Example:
For the partially observable painting problem with the percept axioms
given earlier, one possible contingent solution is as follows:
[LookAt (Table), LookAt (Chair ),
if Color (Table, c) ∧ Color (Chair, c) then NoOp
else [RemoveLid(Can1), LookAt (Can1),RemoveLid (Can2), LookAt (Can2),
if Color (Table, c) ∧ Color (can, c) then Paint(Chair , can)
else if Color (Chair, c) ∧ Color (can, c) then Paint(Table, can)
else [Paint(Chair , Can1), Paint (Table, Can1)]]]
3) Online replanning
In this kind of planning the agent can employ any planning strategy. It observes the plan execution and, if needed, replans, then executes and observes again.
Example:
Imagine watching a spot-welding robot in a car plant. The robot’s fast, accurate motions are
repeated over and over again as each car passes down the line. Although technically impressive,
the robot probably does not seem at all intelligent because the motion is a fixed,
preprogrammed sequence; the robot obviously doesn’t “know what it’s doing” in any meaningful
sense. Now suppose that a poorly attached door falls off the car just as the robot is
about to apply a spot-weld. The robot quickly replaces its welding actuator with a gripper,
picks up the door, checks it for scratches, reattaches it to the car, sends an email to the floor
supervisor, switches back to the welding actuator, and resumes its work. All of a sudden,
the robot’s behavior seems purposive rather than rote; we assume it results not from a vast,
precomputed contingent plan but from an online replanning process, which means that the robot
does need to know what it's trying to do.
The online agent has a choice of how carefully to monitor the environment. We distinguish
three levels:
• Action monitoring: before executing an action, the agent verifies that all the preconditions still hold.
• Plan monitoring: before executing an action, the agent verifies that the remaining plan will still succeed.
• Goal monitoring: before executing an action, the agent checks to see if there is a better set of goals it could be trying to achieve.
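The cheapest of the three levels, action monitoring, can be sketched directly. This is a hypothetical helper, not a library API; the fluent names reuse the painting example, and Color(Chair,Red) is an assumed effect:

```python
# Sketch of action monitoring: before each step, check that the action's
# preconditions still hold in the currently observed state; if not,
# signal that replanning is needed instead of executing blindly.

actions = {
    "RemoveLid(Can1)": {"pre": {"Can(Can1)"},
                        "add": {"Open(Can1)"}, "del": set()},
    "Paint(Chair,Can1)": {"pre": {"Open(Can1)"},
                          "add": {"Color(Chair,Red)"}, "del": set()},
}

def monitored_run(plan, state):
    for name in plan:
        a = actions[name]
        if not a["pre"] <= state:       # precondition violated
            return state, f"replan needed at {name}"
        state = (state - a["del"]) | a["add"]
    return state, "ok"

final, status = monitored_run(["RemoveLid(Can1)", "Paint(Chair,Can1)"],
                              {"Can(Can1)"})
# Both preconditions hold in sequence, so the run completes with "ok"
```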
MULTIAGENT PLANNING
So far, we have assumed that only one agent is doing the sensing, planning, and acting.
When there are multiple agents in the environment, each agent faces a multiagent planning problem in which it tries to achieve its own goals with the help or hindrance of the others. Multiagent planning is
necessary when there are other agents in the environment with which to cooperate or compete.
1) Joint planning: Joint plans can be constructed, but must be augmented with some form of coordination if two agents are to agree on which joint plan to execute.
2) Multibody planning: When the effectors are physically decoupled into detached units, as in a fleet of delivery robots in a factory, multieffector planning becomes multibody planning.
3) Coordination: In a multibody robotic doubles team, a single plan dictates which body
will go where on the court and which body will hit the ball. In a multiagent doubles team,
on the other hand, each agent decides what to do on its own; without some method for coordination, both agents might, for example, try to cover the same part of the court.
Knowledge Representation:
It is also observed that most of the time reasoning happens at the level of categories rather than individual objects. Once an object is categorized, it becomes easy to infer its behavior from its category and also to predict or generate further inferences about the object.
For example, from its green and yellow mottled skin, one-foot diameter, ovoid shape, red
flesh, black seeds, and presence in the fruit aisle, one can infer that an object is a
watermelon;
from this, one infers that it would be useful for fruit salad.
There are two choices for representing categories in first-order logic: predicates and
objects.
Categories serve to organize and simplify the knowledge base through inheritance. If
we say that all instances of the category Food are edible, and if we assert that Fruit is a
subclass of Food and Apples is a subclass of Fruit , then we can infer that every apple is
edible. We say that the individual apples inherit the property of edibility, in this case
from
their membership in the Food category.
First-order logic makes it easy to state facts about categories, either by relating objects
to categories or by quantifying over their members. Here are some types of facts, with
examples of each:
1. An object is a member of a category.
BB9 ∈ Basketballs
2. A category is a subclass of another category.
Basketballs ⊂ Balls
3. All members of a category have some properties.
(x∈ Basketballs) ⇒ Spherical (x)
4. Members of a category can be recognized by some properties.
Orange(x) ∧ Round(x) ∧ Diameter(x) = 9.5″ ∧ x ∈ Balls ⇒ x ∈ Basketballs
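The inheritance mechanism described above can be sketched as a walk up the subclass chain. This is a toy illustration of the Food/Fruit/Apples example, not a full taxonomic reasoner:

```python
# Sketch of inheritance through a subclass chain: a property asserted
# for a category (Food is edible) is inherited by members of all of
# its subcategories (Apples is a subclass of Fruit, Fruit of Food).

subclass = {"Apples": "Fruit", "Fruit": "Food"}
properties = {"Food": {"Edible"}}

def inherited(category):
    props = set()
    while category is not None:         # walk up the subclass chain
        props |= properties.get(category, set())
        category = subclass.get(category)
    return props

apple_props = inherited("Apples")
# apple_props contains Edible, inherited from Food via Fruit
```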
EVENTS:
We have seen how situation calculus represents actions and their effects. Situation calculus is
limited in its applicability: it was designed to describe a world in which actions are discrete,
instantaneous, and happen one at a time. Consider a continuous action, such as filling a bathtub.
Situation calculus can say that the tub is empty before the action and full when the action is
done, but it can’t talk about what happens during the action. It also can’t describe two actions
happening at the same time—such as brushing one’s teeth while waiting for the tub to fill. To
handle such cases we introduce an alternative formalism known as event calculus, which is
based on points of time rather than on situations.
Situation calculus can only specify conditions at the start of an action and at the end of the action; it cannot represent what happens while the action is taking place.
One more limitation of situation calculus is that it cannot represent simultaneous actions, e.g., writing assignments while watching TV programs. To handle such cases we have event calculus.
Event calculus is based on time points instead of only a start state and an end state.
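The time-point semantics can be sketched as follows. This is a simplified illustration (no continuous change, illustrative event and fluent names): a fluent holds at time t if some earlier event initiated it and no intervening event terminated it:

```python
# Sketch of event-calculus reasoning over time points, using the
# bathtub example: the tap turning on initiates Filling, and turning
# it off terminates it.

happens = [("TurnOnTap", 1), ("TurnOffTap", 5)]      # Happens(e, t)
initiates = {"TurnOnTap": "Filling"}                 # e initiates fluent
terminates = {"TurnOffTap": "Filling"}               # e terminates fluent

def holds_at(fluent, t):
    started = [s for e, s in happens if initiates.get(e) == fluent and s < t]
    if not started:
        return False
    last = max(started)
    # Clipped: was the fluent terminated between initiation and t?
    return not any(terminates.get(e) == fluent and last <= s < t
                   for e, s in happens)

# Filling holds strictly between the tap turning on and turning off,
# something situation calculus cannot express mid-action.
```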