L2: Search
Artificial Intelligence
Search
• Search permeates all of AI
• What choices are we searching through?
  – Problem solving: action combinations (move 1, then move 3, then move 2, ...)
  – Natural language: ways to map words to parts of speech
  – Computer vision: ways to map features to object models
  – Machine learning: possible concepts that fit the examples seen so far
  – Motion planning: sequences of moves to reach a goal destination
• An intelligent agent is trying to find a set or sequence of actions to achieve a goal
• This is a goal-based agent
Problem-solving Agent
SimpleProblemSolvingAgent(percept)
  state = UpdateState(state, percept)
  if sequence is empty then
    goal = FormulateGoal(state)
    problem = FormulateProblem(state, goal)
    sequence = Search(problem)
  action = First(sequence)
  sequence = Rest(sequence)
  return action
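A minimal Python sketch of this control loop, assuming hypothetical update_state, formulate_goal, formulate_problem, and search functions supplied by the caller (not a fixed API):

# Sketch of the problem-solving agent loop; the helper functions are
# assumptions passed in by the caller.
class SimpleProblemSolvingAgent:
    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        self.update_state = update_state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search
        self.state = None
        self.sequence = []                 # remaining actions to execute

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.sequence:              # no plan left: formulate and search
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.sequence = list(self.search(problem))
        action = self.sequence[0]          # First(sequence)
        self.sequence = self.sequence[1:]  # Rest(sequence)
        return action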
Assumptions
• Static or dynamic? Environment is static
• Fully or partially observable? Environment is fully observable
• Discrete or continuous? Environment is discrete
• Deterministic or stochastic? Environment is deterministic
• Episodic or sequential? Environment is sequential
• Single agent or multiple agent? Environment is single agent
Search Example
Formulate goal: Be in
Bucharest.
Operators:
• fill jug x from faucet
• pour contents of jug x in jug y until y full
• dump contents of jug x down drain
Goal: (2,n)
Breadth-First Search
Example trees (b = 2)
• Features
  – Simple to implement
  – Complete
  – Finds the shortest solution (not necessarily least-cost unless all operators have equal cost)
Analysis
• See what happens with b=10
– expand 10,000 nodes/second
– 1,000 bytes/node
Depth   Nodes     Time           Memory
2       1,110     0.11 seconds   1 megabyte
4       111,100   11 seconds     106 megabytes
6       10^7      19 minutes     10 gigabytes
8       10^9      31 hours       1 terabyte
10      10^11     129 days       101 terabytes
12      10^13     35 years       10 petabytes
15      10^15     3,523 years    1 exabyte
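A quick sanity check of these numbers (a sketch assuming, as above, b = 10, 10,000 node expansions per second, and 1,000 bytes per node; BFS to solution depth d generates roughly b^(d+1) nodes):

# Rough check of the breadth-first blow-up table.
b = 10                     # branching factor
nodes_per_second = 10_000
bytes_per_node = 1_000

for d in (2, 4, 6, 8, 10, 12, 15):
    nodes = sum(b**i for i in range(d + 2))   # 1 + b + ... + b^(d+1)
    seconds = nodes / nodes_per_second
    terabytes = nodes * bytes_per_node / 1e12
    print(f"depth {d:2d}: {nodes:.2e} nodes, {seconds:.2e} s, {terabytes:.2e} TB")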
Depth-First Search
• QueueingFn adds the children to the
front of the open list
• BFS emulates FIFO queue
• DFS emulates LIFO stack
• Net effect
– Follow leftmost path to bottom, then
backtrack
– Expand deepest node first
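The "same loop, different QueueingFn" idea can be sketched in Python as below (a hedged illustration; is_goal and successors are assumed to be supplied by the problem):

from collections import deque

def generic_search(start, is_goal, successors, queueing_fn):
    # Tree search whose behaviour is determined entirely by queueing_fn.
    open_list = deque([[start]])            # open list holds whole paths
    while open_list:
        path = open_list.popleft()          # always remove from the front
        node = path[-1]
        if is_goal(node):
            return path
        children = [path + [child] for child in successors(node)]
        queueing_fn(open_list, children)    # where children go decides the search
    return None

# BFS: children added to the back of the open list (FIFO queue)
bfs_queueing_fn = lambda open_list, children: open_list.extend(children)
# DFS: children added to the front of the open list (LIFO stack)
dfs_queueing_fn = lambda open_list, children: open_list.extendleft(reversed(children))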
Algorithm: Depth-First Search
1. If the initial state is a goal state, quit and return success.
2. Otherwise, loop until success or failure is signaled.
   a) Generate a state, say E, and let it be a successor of the initial state. If there is no successor, signal failure.
   b) Call Depth-First Search with E as the initial state.
   c) If success is returned, signal success. Otherwise continue in this loop.
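A direct Python transcription of this recursive formulation (a sketch; successors is an assumed function returning the children of a state):

def depth_first_search(state, is_goal, successors):
    # Recursive DFS following the numbered algorithm above.
    # Returns a path (list of states) to a goal, or None on failure.
    # Note: may recurse forever on infinite or cyclic state spaces.
    if is_goal(state):                        # step 1
        return [state]
    for child in successors(state):           # step 2a: generate a successor E
        result = depth_first_search(child, is_goal, successors)   # step 2b
        if result is not None:                # step 2c: success was returned
            return [state] + result
    return None                               # no successor: signal failure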
DFS Examples
Example trees
Analysis
• Time complexity
  – In the worst case, search the entire space
  – Goal may be at level d but the tree may continue to level m, m >= d
  – O(b^m)
  – Particularly bad if the tree is infinitely deep
• Space complexity
  – Only need to save one set of children at each level
  – 1 + b + b + … + b (m levels total) = O(bm)
  – For the previous example, DFS requires 118 kilobytes instead of 10 petabytes for d = 12 (10 billion times less)
• Drawbacks
  – May not always find a solution
  – Solution is not necessarily the shortest or least-cost one
• Benefits
  – If there are many solutions, may find one quickly (quickly moves to depth d)
  – Simple to implement
  – Space is often the bigger constraint, so DFS is more usable than BFS for large problems
Advantages of Depth-First Search
           DFS    BFS
Complete   N      Y
Optimal    N      N
Heuristic  N      N
Time       b^m    b^(d+1)
Space      bm     b^(d+1)
Avoiding Repeated States
Can we do it?
Uniform Cost Search
• QueueingFn is SortByCostSoFar
• Cost from the root to the current node n is g(n)
  – Add operator costs along the path
• First goal found is the least-cost solution
• Space & time can be exponential because large subtrees with inexpensive steps may be explored before useful paths with costly steps
• If costs are equal, time and space are O(b^d)
  – Otherwise, complexity is related to the cost of the optimal solution
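A minimal uniform-cost search sketch in Python, assuming successors(state) yields (child, step_cost) pairs (the best_g bookkeeping is a simple repeated-state check, as discussed under Avoiding Repeated States):

import heapq
from itertools import count

def uniform_cost_search(start, is_goal, successors):
    # Expand nodes in order of path cost g(n), i.e. SortByCostSoFar.
    tie = count()                              # tie-breaker so the heap never compares paths
    frontier = [(0, next(tie), [start])]       # entries are (g, tie, path)
    best_g = {start: 0}                        # cheapest known cost to each state
    while frontier:
        g, _, path = heapq.heappop(frontier)   # cheapest path so far
        node = path[-1]
        if is_goal(node):
            return g, path                     # first goal popped is least-cost
        for child, step_cost in successors(node):
            new_g = g + step_cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, next(tie), path + [child]))
    return None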
UCS Example
Open list (initially): C
Open list (after several expansions): S(5) N(5) R(6) Z(6) F(6) G(7) D(8) L(10)
(Figure: state-space landscape plotting objective values over states, with local and global maxima marked)
Hill Climbing Issues
• A foothill, or local maximum, is a state that is better than all its neighbours but is not better than some other states farther away. At a local maximum, all moves appear to make things worse. Foothills are potential traps for the algorithm.
• A plateau is a flat area of the search space in which a whole set of neighbouring states have the same value. It is not possible to determine the best direction in which to move by making local comparisons.
• A ridge is a special kind of local maximum. It is an area of the search space that is higher than the surrounding areas and that itself has a slope. Any point on a ridge can look like a peak because movement in all probe directions is downward.
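A minimal steepest-ascent hill-climbing sketch in Python, assuming neighbours(state) and value(state) helpers; it stops at any peak, which may be only a foothill, plateau, or ridge rather than the global maximum:

def hill_climbing(start, neighbours, value):
    # Steepest-ascent hill climbing: always move to the best neighbour,
    # and stop when no neighbour improves on the current state.
    current = start
    while True:
        options = list(neighbours(current))
        if not options:
            return current
        best = max(options, key=value)
        if value(best) <= value(current):   # no uphill move: local maximum or plateau
            return current
        current = best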
Comparison of Search Techniques
           DFS    BFS      UCS    IDS    Best   HC
Complete   N      Y        Y      Y      N      N
Optimal    N      N        Y      N      N      N
Heuristic  N      N        N      N      Y      Y
Time       b^m    b^(d+1)  b^m    b^d    b^m    bm
Space      bm     b^(d+1)  b^m    bd     b^m    b
Beam Search
• QueueingFn is sort-by-h
– Only keep best (lowest-h) n nodes on open list
• n is the “beam width”
– n = 1, Hill climbing
– n = infinity, Best first search
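A hedged Python sketch of beam search, assuming successors(state) and a heuristic h(state); a beam width of 1 degenerates toward hill climbing and an unbounded beam toward best-first search:

def beam_search(start, is_goal, successors, h, beam_width):
    # Keep only the best (lowest-h) beam_width nodes at each step.
    beam = [start]
    while beam:
        children = []
        for node in beam:
            if is_goal(node):
                return node
            children.extend(successors(node))
        children.sort(key=h)               # sort-by-h
        beam = children[:beam_width]       # prune the open list to the beam width
    return None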
Example
Comparison of Search Techniques
           DFS    BFS      UCS    IDS    Best   HC    Beam
Complete   N      Y        Y      Y      N      N     N
Optimal    N      N        Y      N      N      N     N
Heuristic  N      N        N      N      Y      Y     Y
Time       b^m    b^(d+1)  b^m    b^d    b^m    bm    nm
Space      bm     b^(d+1)  b^m    bd     b^m    b     bn
A*
• QueueingFn is sort-by-f
– f(n) = g(n) + h(n)
• Note that UCS and Best-first both improve
search
– UCS keeps solution cost low
– Best-first helps find solution quickly
• A* combines these approaches
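A minimal A* sketch in Python (same assumptions as the UCS sketch above, plus a heuristic h(state)); the only change from UCS is that the priority is f(n) = g(n) + h(n):

import heapq
from itertools import count

def astar_search(start, is_goal, successors, h):
    # Expand nodes in order of f(n) = g(n) + h(n).
    tie = count()
    frontier = [(h(start), 0, next(tie), [start])]    # (f, g, tie, path)
    best_g = {start: 0}
    while frontier:
        f, g, _, path = heapq.heappop(frontier)
        node = path[-1]
        if is_goal(node):
            return g, path            # optimal when h never overestimates
        for child, step_cost in successors(node):
            new_g = g + step_cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier,
                               (new_g + h(child), new_g, next(tie), path + [child]))
    return None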
Power of f
• If the heuristic function is wrong, it either
  – overestimates (guesses too high), or
  – underestimates (guesses too low)
• Overestimating is worse than underestimating
• A* returns an optimal solution if h(n) is admissible
  – A heuristic function is admissible if it never overestimates the true cost to the nearest goal
  – If a search finds the optimal solution using an admissible heuristic, the search is admissible
Overestimating
(Figure: example search tree, rooted at node A with value 15, illustrating the effect of an overestimating heuristic)
A* applied to 8 puzzle
A* search applet
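For the 8-puzzle, one standard admissible heuristic is the sum of the tiles' Manhattan distances from their goal positions; a small illustrative sketch (states assumed here to be 9-element tuples listed row by row, with 0 for the blank):

def manhattan_distance(state, goal):
    # Admissible 8-puzzle heuristic: total Manhattan distance of the tiles.
    total = 0
    for tile in range(1, 9):                  # the blank (0) does not count
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# Example: a state one move away from the goal has heuristic value 1
print(manhattan_distance((1, 2, 3, 4, 5, 6, 7, 0, 8),
                         (1, 2, 3, 4, 5, 6, 7, 8, 0)))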
Example
Optimality of A*
• Suppose a suboptimal goal G2 is on the open list
• Let n be an unexpanded node on the smallest-cost path to the optimal goal G1
• Then f(G2) = g(G2) (since h(G2) = 0) > g(G1) (since G2 is suboptimal) >= f(n) (since h is admissible)
• Because f(n) < f(G2), A* expands n before G2, so A* never selects a suboptimal goal for expansion
Crossover
• Example: one-point crossover of parents 11111 and 00000 after the second bit yields offspring 11000 and 00111
Mutation
• With small probability, randomly alter 1 bit
• Minor operator
• An insurance policy against lost bits
• Pushes out of local minima
Population (Goal: 0 1 1 1 1 1)
E) 0 | 0 1 1 0 0 Score: 4 G) 0 1 1 | 1 0 0 Score: 3
F) 1 | 1 1 0 1 1 Score: 3 H) 0 0 1 | 0 1 0 Score: 6
G) 0 1 1 0 1 | 0 Score: 4 I) 0 0 | 1 0 1 0 Score: 6
H) 1 0 1 1 0 | 1 Score: 2 J) 0 1 | 1 1 0 0 Score: 3
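A small Python sketch of the two genetic operators discussed here (an illustration only; in this version mutation flips each bit independently with small probability):

import random

def crossover(parent1, parent2, point):
    # One-point crossover: swap the tails of two bit strings after `point`.
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

def mutate(bits, p=0.01):
    # With small probability p, flip each bit (insurance against lost bits).
    return [b ^ 1 if random.random() < p else b for b in bits]

# Example: the crossover shown earlier, 11111 x 00000 after bit 2
a, b = crossover([1, 1, 1, 1, 1], [0, 0, 0, 0, 0], point=2)
print(a, b)   # [1, 1, 0, 0, 0] [0, 0, 1, 1, 1]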