


Chapter 3

Solving Problems by Searching


Problem-Solving Agents
 The correct action to take is not immediately
obvious, and the agent may need to plan ahead: to
consider a sequence of actions that form a path
to a goal state. Such an agent is called a
problem-solving agent, and
 the computational process it undertakes is called
search.
 Agents that use factored or structured
representations of states are called planning
agents.
Problem-Solving Agents
The agent follows a four-phase process (illustrated here
with the Romania route-finding example):
o Goal formulation: adopts the goal of reaching
Bucharest.
o Problem formulation: devises a description of the
states and actions necessary to reach the goal.
o Search: simulates sequences of actions until it finds
one that reaches the goal (a solution).
o Execution: can now execute the actions in the
solution, one at a time.
Search problems and Solutions
Formal Definition of a search problem
 A set of possible states that the environment can be in,
called the state space.
 The initial state that the agent starts in.
 A set of one or more goal states.
 The actions available to the agent.
 A transition model, which describes what each action does.
 An action cost function, denoted by c(s, a, s′), giving the
cost of applying action a in state s to reach state s′.
(A small code sketch of this definition follows.)
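As a hedged illustration, the definition above maps onto a small Python interface; the class and method names here (SearchProblem, actions, result, action_cost, is_goal) are illustrative assumptions, not a standard API:

```python
# A minimal sketch of the formal definition above; names are illustrative.
class SearchProblem:
    def __init__(self, initial):
        self.initial = initial           # the initial state

    def actions(self, s):                # actions available in state s
        raise NotImplementedError

    def result(self, s, a):              # transition model: state reached by doing a in s
        raise NotImplementedError

    def action_cost(self, s, a, s2):     # c(s, a, s'); unit cost by default
        return 1

    def is_goal(self, s):                # is s one of the goal states?
        raise NotImplementedError
```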
Example Problems
A Standardized Problem:
 is intended to illustrate or exercise various
problem-solving methods.
 It can be given a concise, exact description and
hence is suitable as a benchmark for researchers to
compare the performance of algorithms.
A Real World Problem, such as robot navigation,
is one whose solutions people actually use, and
whose formulation is idiosyncratic,
not standardized, because, for example, each robot
has different sensors that produce different data.
A Standardized Problem
The vacuum world problem can be formulated as follows:
States: a vacuum environment with n cells has n · 2^n states
(n agent positions times 2^n dirt configurations).
Initial state: Any state can be designated as the initial
state.
Actions: Suck, move Left, and move Right.
Transition model: Suck removes any dirt from the agent's cell;
Left and Right move the agent one cell, unless that would take
it off the edge, in which case nothing happens (sketched below).
Goal states: The states in which every cell is clean.
Action cost: Each action costs 1.
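A hedged sketch of this transition model in Python, assuming a row of n cells and an illustrative state encoding (agent cell, frozenset of dirty cells):

```python
# A minimal sketch of the vacuum-world transition model; the state
# encoding (agent_cell, dirty_cells) is an assumption for illustration.
def vacuum_result(state, action, n):
    pos, dirt = state
    if action == "Suck":
        return (pos, dirt - {pos})        # remove any dirt from this cell
    if action == "Left":
        return (max(pos - 1, 0), dirt)    # moving off the edge does nothing
    if action == "Right":
        return (min(pos + 1, n - 1), dirt)
    raise ValueError(action)
```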
8-puzzle

[Figure: a typical 8-puzzle instance. Initial state: tiles 1 2 / 4 5 3 / 7 8 6
plus the blank. Goal state: 1 2 3 / 4 5 6 / 7 8 with the blank last.]
8-puzzle
[Figure: part of the search tree for the 8-puzzle, showing the initial
state and the states generated by moving the blank.]
8-puzzle
• States: A state description specifies the location of
each of the tiles.
• Initial state: Any state can be designated as the
initial state.
• Actions: move the blank space Left, Right, Up, or Down.
• Transition model: Maps a state and action to a
resulting state
• Goal state: Although any state could be the goal,
we typically specify a state with the numbers in
order.
• Action cost: Each action costs 1.
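A hedged sketch of the actions and transition model, assuming a state is a 9-tuple read row by row with 0 for the blank (an illustrative encoding, not the book's):

```python
# Offsets for moving the blank within a 3x3 board stored as a 9-tuple.
MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

def puzzle_actions(state):
    b = state.index(0)                   # position of the blank, 0..8
    acts = []
    if b >= 3:     acts.append("Up")     # blank not in the top row
    if b <= 5:     acts.append("Down")   # blank not in the bottom row
    if b % 3 != 0: acts.append("Left")   # blank not in the left column
    if b % 3 != 2: acts.append("Right")  # blank not in the right column
    return acts

def puzzle_result(state, action):
    b = state.index(0)
    t = b + MOVES[action]                # square the blank moves into
    s = list(state)
    s[b], s[t] = s[t], s[b]              # swap blank with the tile there
    return tuple(s)
```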
Real-world problems
Route-finding problem: defined in terms of specified
locations and transitions along edges between them.
 Touring problems: e.g., the traveling salesperson problem
(TSP) is a touring problem in which every city on a map
must be visited.
 VLSI layout: positioning millions of components and
connections on a chip to
 minimize area,
 minimize circuit delays,
 minimize stray capacitances, and
 maximize manufacturing yield.
Real-world problems
Robot navigation is a generalization of the
route-finding problem, where the search space
becomes many-dimensional.
Automatic assembly sequencing: the aim is to
find an order in which to assemble the parts of
some object.
 Ex.: protein design, which seeks a sequence of
amino acids that will fold into a three-dimensional
protein with the right properties to cure some
disease.
Search Algorithms
[Figure: the distinction between the state space and the search tree.]
Search Algorithms
[Figure: a sequence of search trees generated by a graph search.]
Search Algorithms
[Figure: the separation property of graph search, illustrated on a
rectangular-grid problem: the frontier separates the interior (expanded)
region from the exterior.]
Best-first search
 On each iteration, choose a node n with the minimum
value of some evaluation function, f(n).
 The algorithm returns either an indication of
failure, or a node that represents a path to a
goal.
Search data structures: to represent a node in the
tree:
• node.STATE: the state to which the node corresponds.
• node.PARENT: the node in the tree that generated this node.
• node.ACTION: the action applied to the parent's state to generate this node.
• node.PATH-COST: the total cost of the path from the initial state to this node.
Best-first search
Three kinds of queues are used in search algorithms:
• A priority queue first pops the node with the
minimum cost according to some evaluation
function, f. It is used in best-first search.
• A FIFO (first-in-first-out) queue pops the node that was
added earliest; it is used in breadth-first search.
• A LIFO (last-in-first-out) queue, or stack, pops the most
recently added node; it is used in depth-first search.
A sketch combining the node structure and the priority queue follows.
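A minimal Python sketch of best-first search, following the BEST-FIRST-SEARCH outline and assuming the hypothetical SearchProblem interface sketched earlier; this is an illustration, not the book's code:

```python
import heapq
from itertools import count

# Node fields mirror the slide: STATE, PARENT, ACTION, PATH-COST.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state, self.parent = state, parent
        self.action, self.path_cost = action, path_cost

def expand(problem, node):
    """Generate the child nodes of a node."""
    for a in problem.actions(node.state):
        s2 = problem.result(node.state, a)
        cost = node.path_cost + problem.action_cost(node.state, a, s2)
        yield Node(s2, node, a, cost)

def best_first_search(problem, f):
    node = Node(problem.initial)
    tie = count()                               # tie-breaker for equal f
    frontier = [(f(node), next(tie), node)]     # priority queue ordered by f
    reached = {problem.initial: node}           # best node found per state
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if problem.is_goal(node.state):
            return node                         # follow PARENT links for the path
        for child in expand(problem, node):
            s = child.state
            if s not in reached or child.path_cost < reached[s].path_cost:
                reached[s] = child
                heapq.heappush(frontier, (f(child), next(tie), child))
    return None                                 # failure
```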
Measuring problem-solving performance
Evaluate an algorithm’s performance in four ways:
• Completeness: Is the algorithm guaranteed to find a
solution when there is one, and to correctly report failure
when there is not?
• Cost optimality: Does it find a solution with the lowest
path cost of all solutions?
• Time complexity: How long does it take to find a
solution, measured in seconds or in the number of
states and actions considered?
• Space complexity: How much memory is needed to
perform the search?
Uninformed search
• No clue about how close a state is to the
goal(s).
• Given a state, we only know whether it is a goal
state or not
• Cannot say one nongoal state looks better than
another nongoal state
• Can only traverse state space blindly in hope of
somehow hitting a goal state at some point
– Also called blind search
– Blind does not imply unsystematic!
Breadth-first search
• Appropriate when all actions have the same cost.
• The evaluation function f(n) is the depth of the node.
• Additional efficiency comes from a couple of tricks:
 A FIFO queue is faster than a priority queue, and gives
the correct order of nodes.
 Once we have reached a state, we can never find a better
path to it, so an early goal test (on generation) is safe.
• Always finds a solution with a minimal number of actions.
• Generates all nodes at depth d only after generating all the
nodes at depth d − 1.
• If the solution is at depth d, the total number of nodes generated is
1 + b + b^2 + b^3 + ··· + b^d = O(b^d).
• Both time and space complexity are O(b^d).
Properties of breadth-first search
• Nodes are expanded in the same order in which they are
generated
– Fringe can be maintained as a First-In-First-Out (FIFO) queue

• BFS is complete: if a solution exists, one will be found


• BFS finds a shallowest solution
– Not necessarily an optimal solution when action costs differ

• If every node has b successors (the branching factor) and the
first solution is at depth d, then the fringe size will be at
least b^d at some point
– This much space (and time) is required (a code sketch follows)
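A hedged sketch of breadth-first search with the FIFO queue and early goal test, reusing the Node and expand helpers from the best-first sketch above:

```python
from collections import deque

# Breadth-first search; the early goal test on generation is safe because
# all actions cost the same, so the first path found is shallowest.
def breadth_first_search(problem):
    node = Node(problem.initial)
    if problem.is_goal(node.state):
        return node
    frontier = deque([node])                 # FIFO queue
    reached = {problem.initial}
    while frontier:
        node = frontier.popleft()
        for child in expand(problem, node):
            if problem.is_goal(child.state):
                return child
            if child.state not in reached:
                reached.add(child.state)
                frontier.append(child)
    return None                              # failure
```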
Dijkstra’s algorithm or uniform-cost search
o Evaluation function is the cost of the path from the root to the current
node.
o Theoretical computer science community: Dijkstra’s algorithm
o AI community: Uniform-cost search
o The algorithm's worst-case time and space complexity is
O(b^(1+⌊C*/ε⌋)), where C* is the cost of the optimal solution
and ε > 0 is a lower bound on the cost of each action.
o When all action costs are equal, uniform-cost search is similar to
breadth-first search.
o Uniform-cost search is complete and is cost-optimal
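Concretely (a sketch, not the book's code), uniform-cost search is the earlier best_first_search with f(n) = g(n), the path cost stored on the node:

```python
# Uniform-cost search / Dijkstra's algorithm as best-first search with f = g.
def uniform_cost_search(problem):
    return best_first_search(problem, f=lambda n: n.path_cost)
```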
Dijkstra’s algorithm
Depth-first search and the problem of memory
Depth-first search
• always expands the deepest node in the frontier
• could be implemented as a call to BEST-FIRST-SEARCH
where the evaluation function f is the negative of the depth
• usually implemented not as a graph search but as a tree-like
search that does not keep a reached table at all
• proceeds immediately to the deepest level of the search tree
• then “backs up” to the next deepest node that still has
unexpanded successors
• is not cost-optimal: it returns the first solution it finds,
even if it is not the cheapest
• has a memory complexity of only O(bm), where b is the branching
factor and m is the maximum depth of the tree
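A hedged sketch of tree-like depth-first search, reusing the earlier Node and expand helpers; true to the description above, it keeps no reached table, so it uses O(bm) memory at the price of the problems on the next slide:

```python
# Recursive tree-like DFS: no reached table, so it can revisit states
# and can loop forever in cyclic state spaces.
def depth_first_search(problem, node=None):
    node = node or Node(problem.initial)
    if problem.is_goal(node.state):
        return node
    for child in expand(problem, node):
        result = depth_first_search(problem, child)
        if result is not None:
            return result
    return None                              # failure
```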
Depth-first search
Disadvantages of DFS
• In cyclic state spaces it can get stuck in an
infinite loop.
• In infinite state spaces it is not systematic:
it can get stuck going down an infinite path,
even if there are no cycles.
• Thus, depth-first search is incomplete.
Backtracking search
• A variant of depth-first search that uses even less memory.
• Only one successor is generated at a time.
• Each partially expanded node remembers which
successor to generate next.
• Successors are generated by modifying the current state
directly rather than allocating a brand-new state.
• This reduces the memory requirement to a single path of
O(m) actions,
• a significant saving over the O(bm) of depth-first search.
• To backtrack, the search must be able to undo each
modification.
• Backtracking is critical for robotic assembly.
Depth-limited Search
• A version of depth-first search in which we
supply a depth limit, ℓ,
• and treat all nodes at depth ℓ as if they had no
successors.
• Time complexity is O(b^ℓ).
• Space complexity is O(bℓ).
• Sometimes the diameter of the state-space graph
gives a better depth limit.
• With a poor choice of ℓ, the algorithm will fail
to reach the solution, making it incomplete again.
• It may not be optimal if the problem has
more than one solution.
Depth-limited Search
Iterative Deepening Search
 This search algorithm finds the best depth limit by
gradually increasing the limit until a goal is found.
 Like BFS, it is optimal when all actions have the same
cost, and it is complete on finite acyclic state spaces.
 It combines the benefits of BFS with DFS's memory
efficiency.
 The time complexity is O(b^d) when there is a solution,
or O(b^m) when there is none.
 Unlike BFS, it generates each new level by repeating the
previous levels, thereby saving memory.
 The worst-case time complexity is O(b^d), as for BFS.
 The space complexity is O(bd). (A code sketch follows.)
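A hedged sketch of depth-limited search plus the iterative-deepening driver, continuing the DFS sketch above; the string "cutoff" marks that the depth limit was hit somewhere below:

```python
# Depth-limited DFS: returns a goal node, "cutoff" if the limit was hit,
# or None if the subtree was exhausted without hitting the limit.
def depth_limited_search(problem, limit, node=None):
    node = node or Node(problem.initial)
    if problem.is_goal(node.state):
        return node
    if limit == 0:
        return "cutoff"
    cutoff = False
    for child in expand(problem, node):
        result = depth_limited_search(problem, limit - 1, child)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return result
    return "cutoff" if cutoff else None

def iterative_deepening_search(problem):
    depth = 0
    while True:                    # increase the limit until a goal appears
        result = depth_limited_search(problem, depth)
        if result != "cutoff":
            return result          # a solution node, or None for failure
        depth += 1
```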
Iterative Deepening Search

1st iteration -----> A
2nd iteration -----> A, B, C
3rd iteration -----> A, B, D, E, C, F, G
4th iteration -----> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
Iterative Deepening Search
The total number of nodes generated in the worst case is
N(IDS) = (d)b^1 + (d−1)b^2 + (d−2)b^3 + ··· + (1)b^d.
If b = 10 and d = 5, the numbers are
N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 = 111,110.

Summary: This method is the preferred uninformed
search method when the search state space is larger than
can fit in memory and the depth of the solution is not
known.
Bidirectional search
Bidirectional search
It replaces the single search graph with two
smaller subgraphs: one search starts from the
initial vertex and the other from the goal
vertex. The search stops when the two frontiers
intersect (sketched below).
 Motivation: b^(d/2) + b^(d/2) is much less than b^d.
 Time complexity: O(b^(d/2)).
 Space complexity: O(b^(d/2)).
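A rough Python sketch of the idea for a breadth-first variant on an undirected graph; neighbors(state) is a hypothetical successor function, and a production version must keep searching until it can prove the connecting path is shortest:

```python
from collections import deque

# Bidirectional BFS: returns the length of a connecting path found when
# the two frontiers first meet (not guaranteed shortest in this sketch).
def bidirectional_search(start, goal, neighbors):
    if start == goal:
        return 0
    frontiers = [deque([start]), deque([goal])]
    reached = [{start: 0}, {goal: 0}]       # state -> depth, per direction
    while frontiers[0] and frontiers[1]:
        for d in (0, 1):                    # alternate the two searches
            if not frontiers[d]:
                continue
            state = frontiers[d].popleft()
            for s2 in neighbors(state):
                if s2 in reached[1 - d]:    # the frontiers meet
                    return reached[d][state] + 1 + reached[1 - d][s2]
                if s2 not in reached[d]:
                    reached[d][s2] = reached[d][state] + 1
                    frontiers[d].append(s2)
    return None                             # no connecting path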
Comparing uninformed search algorithms
Summary of the results above (b = branching factor, d = depth of the
shallowest solution, m = maximum depth, ℓ = depth limit; "Yes*" means
yes when all action costs are equal; completeness assumes b is finite
and the state space either is finite or contains a solution):

Criterion      BFS      Uniform-cost      DFS     Depth-limited  Iterative deepening  Bidirectional
Complete?      Yes      Yes               No      No             Yes                  Yes
Cost-optimal?  Yes*     Yes               No      No             Yes*                 Yes*
Time           O(b^d)   O(b^(1+⌊C*/ε⌋))   O(b^m)  O(b^ℓ)         O(b^d)               O(b^(d/2))
Space          O(b^d)   O(b^(1+⌊C*/ε⌋))   O(bm)   O(bℓ)          O(bd)                O(b^(d/2))
Informed (Heuristic) Search Strategies
 Informed search uses domain-specific hints
about the location of goals.
 It can find solutions more efficiently than an
uninformed strategy.
 The hints come in the form of a heuristic
function, denoted h(n):
 h(n) = the estimated cost of the cheapest path
from the state at node n to a goal state.
Greedy best-first search
It is a form of best-first search that expands first
the node with the lowest h(n) value; the evaluation
function is f(n) = h(n).
Greedy best-first search
o Time complexity: The worst-case time complexity
of greedy best-first search is O(b^m).
o Space complexity: The worst-case space complexity
of greedy best-first search is O(b^m),
where m is the maximum depth of the search space.
o In its tree-like form it is incomplete, even if
the given state space is finite.
o Greedy best-first search is not optimal.
Greedy best-first search
[Figure: values of h, the straight-line distances to Bucharest.]
Greedy best-first search
Greedy best-first search
A∗ search
 It is a best-first search with
o evaluation function f(n) = g(n) + h(n), where
o g(n) is the path cost from the initial state to node n, and
o h(n) is the estimated cost of the cheapest path
from n to a goal state.
 The A* algorithm returns the first solution path
it finds; it does not go on to search all
remaining paths.
 The efficiency of the A* algorithm depends on
the quality of the heuristic.
 The A* algorithm expands all nodes that
satisfy the condition f(n) < C*, where C* is the
cost of the optimal solution.

A∗ search
The A* algorithm is complete as long as:
 the branching factor is finite, and
 every action cost exceeds some fixed ε > 0.

 A* is cost-optimal provided the heuristic is admissible.

 Worst-case time and space complexity: O(b^d).

 An admissible heuristic is one that never
overestimates the cost to reach a goal.
(A code sketch follows.)
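As a minimal sketch, A* is the earlier best_first_search with f(n) = g(n) + h(n); h is a problem-specific heuristic supplied by the caller:

```python
# A* as best-first search with f(n) = g(n) + h(n); a sketch reusing the
# best_first_search function from earlier, not the book's code.
def a_star_search(problem, h):
    return best_first_search(problem, f=lambda n: n.path_cost + h(n.state))
```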

A∗ search
Initialization: {(S, 5)}
Iteration 1: {(S->A, 4), (S->G, 10)}
Iteration 2: {(S->A->C, 4), (S->A->B, 7), (S->G, 10)}
Iteration 3: {(S->A->C->G, 6), (S->A->C->D, 11), (S->A->B, 7), (S->G, 10)}
Iteration 4 gives the final result: S->A->C->G is the optimal path,
with cost 6.

A∗ search

A∗ search
Weighted A∗ search
Evaluation function f(n) = g(n) + W × h(n), for some W > 1

A∗ search:                        g(n) + h(n)      (W = 1)
Uniform-cost search (Dijkstra's): g(n)             (W = 0)
Greedy best-first search:         h(n)             (W = ∞)
Weighted A∗ search:               g(n) + W × h(n)  (1 < W < ∞)
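In the same hedged style, the whole family above differs only in the evaluation function handed to best_first_search; W = 1.5 below is an arbitrary example value:

```python
# Weighted A*: f(n) = g(n) + W * h(n); W=1 gives A*, W=0 gives uniform-cost.
def weighted_a_star_search(problem, h, W=1.5):
    return best_first_search(problem, f=lambda n: n.path_cost + W * h(n.state))
```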

[Figure: A∗ search compared with weighted A∗ search.]


Iterative-deepening A* search
 IDA* is a depth-first search variant that
 gives us the benefits of A*
 without the requirement to keep all reached states in
memory,
 at the cost of visiting some states multiple times.
 It is a commonly used algorithm for problems that do not
fit in memory.
 The cutoff is the f-cost (g + h);
 at each iteration, the cutoff value is the smallest f-cost of
any node that exceeded the cutoff on the previous
iteration.
 In other words, each iteration finds the node just beyond the
current f-contour and uses that as the next contour.
 If the optimal solution has cost C*, then (with integer action
costs) there can be no more than C* iterations.
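A hedged sketch of IDA*, reusing the Node and expand helpers and a heuristic h on states; the helper names are illustrative:

```python
import math

# IDA*: repeated depth-first contour searches with an increasing f cutoff.
def ida_star(problem, h):
    root = Node(problem.initial)
    cutoff = h(root.state)                   # f(root) since g(root) = 0
    while True:
        result, next_cutoff = _contour_search(problem, h, root, cutoff)
        if result is not None:
            return result
        if next_cutoff == math.inf:
            return None                      # no solution anywhere
        cutoff = next_cutoff                 # smallest f that exceeded cutoff

def _contour_search(problem, h, node, cutoff):
    f = node.path_cost + h(node.state)
    if f > cutoff:
        return None, f                       # candidate for the next cutoff
    if problem.is_goal(node.state):
        return node, f
    next_cutoff = math.inf
    for child in expand(problem, node):
        result, candidate = _contour_search(problem, h, child, cutoff)
        if result is not None:
            return result, candidate
        next_cutoff = min(next_cutoff, candidate)
    return None, next_cutoff
```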
Iterative Deepening A*
• One big drawback of A* is the space requirement: it has
similar problems to uniform-cost search and BFS
• Limited-cost depth-first A*: given some cost cutoff c,
any node with g(n)+h(n) > c is not expanded;
otherwise proceed as in DFS
• IDA* gradually increases this cutoff
• Can require many iterations
– Trading off space and time…
– The RBFS algorithm reduces the wasted effort of IDA*, still with a
linear space requirement
Recursive best-first search (RBFS)
o Mimics the operation of standard best-first search
using only linear space.
o Resembles a recursive DFS.
o Rather than continuing indefinitely down the
current path, it uses the f_limit variable to keep
track of the f-value of the best alternative path
available from any ancestor of the current node.
o If the current node exceeds f_limit, the recursion
unwinds back to the alternative path; as it unwinds,
it replaces each node's f-value with a backed-up
value: the best f-value of its children.
Recursive best-first search (RBFS)
Recursive best-first search (RBFS)
Heuristic Functions
 There are 9!/2 = 181,440 reachable states in an 8-puzzle,
so we can easily keep them all in memory.
 For the 15-puzzle, there are 16!/2 states (over 10 trillion),
so to search that space we will need the help of a good
admissible heuristic function.
Two commonly used candidates (sketched below):
o h1 = the number of misplaced tiles (blank not included).
 For a start state in which all eight tiles are out of
position, h1 = 8.
o h2 = the sum of the distances of the tiles from their goal
positions,
 measured as city-block distance, or Manhattan distance.
 For the same start state, h2 = 3+1+2+2+2+3+3+2 = 18.
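Hedged sketches of both heuristics, assuming the 9-tuple state encoding from the earlier 8-puzzle sketch, with 0 for the blank and the goal placing tiles 1..8 before the blank (an illustrative convention):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout

def h1(state):
    # Number of misplaced tiles, blank not counted.
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    # Sum of Manhattan (city-block) distances of the tiles.
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = GOAL.index(t)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```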
Effect of heuristic accuracy on performance
 One way to characterize the quality of a heuristic is the
effective branching factor b*.
 If A* generates N nodes and the solution is at depth d, then
b* is the branching factor a uniform tree of depth d would
need in order to contain N + 1 nodes, i.e.
N + 1 = 1 + b* + (b*)^2 + ··· + (b*)^d.
 For example, if A* finds a solution at depth 5 using 52 nodes,
the effective branching factor is 1.92 (computed below).
 The effective branching factor can vary across problem
instances,
 but for a specific domain (such as the 8-puzzle) it is fairly
constant.
 A well-designed heuristic has a value of b* close
to 1.
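As a small numeric illustration (not from the slides), b* can be found by bisection on the monotone sum; the call below reproduces the 52-node, depth-5 example:

```python
# Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection.
def effective_branching_factor(N, d, lo=1.0, hi=10.0):
    def total(b):
        return sum(b ** i for i in range(d + 1))
    for _ in range(100):              # bisect the monotone increasing total
        mid = (lo + hi) / 2
        if total(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(effective_branching_factor(52, 5), 2))   # ~1.92
```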
h2 dominates h1, i.e., h2(n) ≥ h1(n) for every node n; hence A∗
using h2 will never expand more nodes than A∗ using h1.
Generating heuristics from relaxed problems
• A problem with fewer restrictions on the actions is called a
relaxed problem.
• State-space graph of the relaxed problem is a supergraph of
the original state space
• The cost of an optimal solution to a relaxed problem is an
admissible heuristic for the original problem
• Example: the 8-puzzle problem, where
a tile can move from square X to square Y if
X is adjacent to Y and Y is blank.
Three relaxed problems obtained by removing one or both of the
conditions are:
(a) A tile can move from square X to square Y if X is adjacent to Y.
(b) A tile can move from square X to square Y if Y is blank.
(c) A tile can move from square X to square Y.
Designing heuristics
• One strategy for designing heuristics: relax the problem
(make it easier)
• “Number of misplaced tiles” heuristic corresponds to
relaxed problem where tiles can jump to any location, even
if something else is already there
• “Sum of Manhattan distances” corresponds to relaxed
problem where multiple tiles can occupy the same spot
• Another relaxed problem: only move 1,2,3,4 into correct
locations
• The ideal relaxed problem is
– easy to solve,
– not much cheaper to solve than original problem
• Some programs can successfully create such
heuristics automatically
Admissibility
• A heuristic is admissible if it never overestimates
the distance to the goal
– If n is the optimal solution reachable from n’, then g(n)
≥ g(n’) + h(n’)

• Straight-line distance is admissible: we can't hope
for anything better than a straight road to the
goal
• An admissible heuristic means that A* is always
optimistic
More about heuristics
[Figure: an 8-puzzle state with tiles 1 2 / 4 5 3 / 7 8 6 and the blank.]
• One heuristic: number of misplaced tiles
• Another heuristic: sum of Manhattan distances of tiles to
their goal location
– Manhattan distance = number of moves required if no other tiles
are in the way
• Admissible? Which is better?
• Admissible heuristic h1 dominates admissible heuristic h2 if
h1(n) ≥ h2(n) for all n
– Will result in no more (and usually fewer) node expansions
• “Best” heuristic of all: solve the remainder of the problem
optimally with search
– Need to worry about computation time of heuristics…
