Problem Solving
Solving problems by searching
Informed Search and Exploration
Constraint Satisfaction Problems
Adversarial Search
EXAMPLE PROBLEMS
Toy Problems
Vacuum World Problem
States: The agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 × 2² = 8 possible world states.
Initial State: Any state can be designated as the initial state.
Successor function: Generates the legal states that result from trying the three actions (Left, Right, Suck).
Goal test: Checks whether all the squares are clean.
Path Cost: Each step costs 1, so the path cost is the number of steps in the path.
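A minimal Python sketch of this formulation; the state encoding (agent location plus the set of dirty squares) and the helper names are illustrative, not part of the original slides:

# Sketch of the vacuum-world formulation. A state is (agent_location, dirt),
# where dirt is a frozenset of the squares that still contain dirt.
def successors(state):
    # Legal results of trying the three actions Left, Right, Suck.
    loc, dirt = state
    return [("Left",  ("A", dirt)),
            ("Right", ("B", dirt)),
            ("Suck",  (loc, dirt - {loc}))]

def goal_test(state):
    loc, dirt = state
    return not dirt                      # goal: no square contains dirt

def step_cost(state, action, next_state):
    return 1                             # each step costs 1

initial_state = ("A", frozenset({"A", "B"}))   # any of the 8 states may be the initial state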
Real-world Problems
Route-finding problem
Knapsack Problem
Depth-first Search
Search Strategy Properties
Criteria
– Completeness: if there is a solution, will the algorithm find it?
– Time complexity: how much time does the algorithm take to arrive at a solution, if one exists?
– Space complexity: how much space does the algorithm require?
– Optimality: is the solution found optimal?
Incompleteness of DFS
DFS is not complete
– fails in infinite-depth spaces, spaces
with loops
Variants
– limit the depth of the search (see the sketch below)
– avoid re-visiting nodes.
– avoid repeated states along path
=> complete in finite spaces
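A sketch of the depth-limited variant in Python; goal_test and successors (which yields successor states) are placeholders for a problem formulation:

def depth_limited_search(state, goal_test, successors, limit):
    # Returns a path (list of states) to a goal, or None if no goal is found
    # within `limit` steps. The limit avoids infinite branches, so the search
    # is complete in finite spaces.
    if goal_test(state):
        return [state]
    if limit == 0:
        return None
    for next_state in successors(state):
        result = depth_limited_search(next_state, goal_test, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None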
DFS Summary
Advantages
– Low space complexity
– Good chance of success when there are
many solutions.
– Complete if there is a solution shorter than
the depth limit.
Disadvantages
– Without the depth limit search may
continue down an infinite branch.
– Solutions longer than the depth limit will not
be found.
– The solution found may not be the shortest
solution.
Breadth-first Search
Expand the shallowest unexpanded node (the node with minimal depth).
Avoid revisiting nodes; since every node is already kept in memory, the additional bookkeeping cost is negligible.
BFS Performance
Properties
– Complete: Yes (if b is finite)
– Time complexity: 1+b+b^2+…+b^l
= O(b^l)
– Space complexity: O(b^l) (keeps
every node in memory)
– Optimal: Yes (if cost=1 per step); not
optimal in general
» where b is branching factor and
» l is the depth of the shortest solution
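A Python sketch of this strategy; the queue holds paths, and a visited set implements the revisit check (helper names are placeholders):

from collections import deque

def breadth_first_search(initial_state, goal_test, successors):
    if goal_test(initial_state):
        return [initial_state]
    frontier = deque([[initial_state]])        # FIFO queue of paths
    visited = {initial_state}
    while frontier:
        path = frontier.popleft()
        for next_state in successors(path[-1]):
            if next_state in visited:
                continue                       # every node is in memory anyway
            new_path = path + [next_state]
            if goal_test(next_state):
                return new_path
            visited.add(next_state)
            frontier.append(new_path)
    return None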
[Figure: node-discovery example on a small graph with start node S, goal node G, intermediate nodes B and C, and edge costs 1, 5, 5, 10, 15]
[Figure: iterative deepening search trees for increasing depth limits (Limit = 2, Limit = 3, …)]
Searching Strategies
Numerical demonstration:
Let b=10, l=5.
– BFS resource use (memory and # nodes
expanded)
1+10+100+1000+10000+100000 = 111,111
– Iterative Deepening resource use
» Memory requirement: b*l = 10*5 = 50
» # expanded nodes (a node at depth d is expanded l-d+1 times):
6 + 50 + 400 + 3000 + 20000 + 100000 = 123,456
=> re-searching cost is small compared
with the cost of expanding the leaves
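The same counts can be reproduced with a few lines of Python (b = 10, l = 5):

b, l = 10, 5

# BFS: each node up to depth l is generated once.
bfs_nodes = sum(b ** d for d in range(l + 1))                  # 111111

# Iterative deepening: a node at depth d is expanded (l - d + 1) times.
id_nodes = sum((l - d + 1) * b ** d for d in range(l + 1))     # 123456

id_memory = b * l                                              # 50
print(bfs_nodes, id_nodes, id_memory)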
Bidirectional Search
Simultaneously search both forward
from the initial state and backward from
the goal, and stop when the two
searches meet in the middle.
[Figure: two search frontiers growing outward from Start and Goal and meeting in the middle]
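A sketch of bidirectional breadth-first search in Python; it assumes actions are reversible, so the same successors function can be used from both ends, and it returns the length of the path found when the two frontiers first touch:

from collections import deque

def bidirectional_search(start, goal, successors):
    # dist[0]/dist[1]: distance from start/goal to every state seen so far.
    if start == goal:
        return 0
    dist = [{start: 0}, {goal: 0}]
    frontier = [deque([start]), deque([goal])]
    while frontier[0] and frontier[1]:
        side = 0 if len(frontier[0]) <= len(frontier[1]) else 1   # grow the smaller frontier
        for _ in range(len(frontier[side])):                      # expand one full level
            state = frontier[side].popleft()
            for nxt in successors(state):
                if nxt in dist[1 - side]:                         # the two searches meet
                    return dist[side][state] + 1 + dist[1 - side][nxt]
                if nxt not in dist[side]:
                    dist[side][nxt] = dist[side][state] + 1
                    frontier[side].append(nxt)
    return None                                                   # no path exists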
Contingency problems
If the environment is partially observable or if actions are uncertain, then the agent's percepts provide new information after each action.
Each possible percept defines a contingency that must be planned for.
A problem is called adversarial if the uncertainty is caused by the actions of
another agent.
Exploration Problems
When the states and actions of the environment are unknown, the agent must act to discover them.
Search Strategies
Uninformed search (= blind search)
– have no information about the number of steps or
the path cost from the current state to the goal
Informed search (= heuristic search)
– have some domain-specific information
– we can use this information to speed up the search
– e.g. Bucharest is southeast of Arad.
– e.g. the number of tiles that are out of place in an 8-
puzzle position
– e.g. for missionaries and cannibals problem, select
moves that move people across the river quickly
Best-First-Search Performance
Completeness
– Complete if either finite depth, or minimum drop in h value for each operator
Time complexity
– Depends on how good the heuristic function is
– A “perfect” heuristic function will lead search directly to the goal
– We rarely have “perfect” heuristic function
Space Complexity
– Maintains fringe of search in memory
– High storage requirement
Optimality
– Non-optimal solutions
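A sketch of (greedy) best-first search in Python; the fringe is a priority queue ordered by the heuristic h, and the helper names are placeholders:

import heapq
import itertools

def best_first_search(initial_state, goal_test, successors, h):
    counter = itertools.count()                    # tie-breaker for equal h values
    fringe = [(h(initial_state), next(counter), [initial_state])]
    visited = {initial_state}
    while fringe:
        _, _, path = heapq.heappop(fringe)         # node with the lowest h value
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(fringe, (h(nxt), next(counter), path + [nxt]))
    return None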
Hill-Climbing
Simple loop that continually moves in the direction of decreasing heuristic value.
Does not maintain a search tree, so the node data structure need only record the state and its evaluation.
Always try to make changes that improve the current state.
Steepest descent: pick the successor with the lowest heuristic value.
Hill-Climbing(initial-state)
  state ← initial-state
  loop forever
    next ← minimum-valued successor of state
    if h(next) ≥ h(state) then return state
    state ← next
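The same loop as a short Python function; successors and h come from the problem formulation (for instance, the 8-queens functions sketched below):

def hill_climbing(initial_state, successors, h):
    state = initial_state
    while True:
        next_state = min(successors(state), key=h)   # steepest descent: best successor
        if h(next_state) >= h(state):
            return state                             # no successor improves on state
        state = next_state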
8-queens
State contains 8 queens on the board
Successor function returns all states generated by moving
a single queen to another square in the same column
(8*7 = 56 next states)
h(s) = number of pairs of queens that attack each other in state s.
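A sketch of this formulation in Python, using the assumed encoding state[c] = row of the queen in column c; it can be plugged into the hill_climbing sketch above:

import random

def queens_successors(state):
    # All states reached by moving one queen within its own column (8*7 = 56).
    result = []
    for col in range(8):
        for row in range(8):
            if row != state[col]:
                result.append(state[:col] + (row,) + state[col + 1:])
    return result

def queens_h(state):
    # Number of pairs of queens that attack each other (same row or same diagonal).
    pairs = 0
    for c1 in range(8):
        for c2 in range(c1 + 1, 8):
            if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1:
                pairs += 1
    return pairs

random_state = tuple(random.randrange(8) for _ in range(8))
result = hill_climbing(random_state, queens_successors, queens_h)   # may stop at a local minimum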
Drawbacks
Local maxima/minima : halt with local
maximum/minimum
Plateaux : random walk
Ridges : oscillate from side to side, limit progress
• Random-Restart Hill-Climbing
Conducts a series of hill-climbing searches from randomly generated initial states.
Hill-Climbing Performance
Completeness
– Not complete, does not use systematic
search method
Time complexity
– Depends on heuristic function
Space Complexity
– Very low storage requirement
Optimality
– Non-optimal solutions
– Often results in locally optimal solution
Search Performance
8-Square (8-puzzle)

                      Heuristic 1:                    Heuristic 2:
                      Tiles out of place              Manhattan distance
Search Algorithm      expanded   solution length      expanded   solution length
Iterative Deepening   1105       9                    (heuristic not used)
hill-climbing         2          no solution found    10         9
best-first            495        24                   9          9
Example (blank shown as _):

Current state      Goal state
 3  2  5            _  1  2
 _  7  1            3  4  5
 4  6  8            6  7  8

h1 (tiles out of place) = 7
h2 (Manhattan distance) = 2+1+1+2+1+1+1+0 = 9
=> Choice of heuristic is critical to heuristic search algorithm performance.
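The two heuristics computed in Python for the boards shown above (0 stands for the blank; the board encoding is illustrative):

state = ((3, 2, 5),
         (0, 7, 1),
         (4, 6, 8))
goal  = ((0, 1, 2),
         (3, 4, 5),
         (6, 7, 8))

def positions(board):
    # Map each tile (and the blank, 0) to its (row, column) position.
    return {board[r][c]: (r, c) for r in range(3) for c in range(3)}

def h1(board, goal):
    # Tiles out of place (the blank is not counted).
    g = positions(goal)
    return sum(1 for tile, pos in positions(board).items()
               if tile != 0 and pos != g[tile])

def h2(board, goal):
    # Sum of Manhattan distances of the tiles from their goal squares.
    g = positions(goal)
    return sum(abs(r - g[tile][0]) + abs(c - g[tile][1])
               for tile, (r, c) in positions(board).items() if tile != 0)

print(h1(state, goal), h2(state, goal))   # 7 and 9, as in the example above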