AI Problem Solving
Solving Problems by Searching
Presented by: Dinh Han
The state space for the vacuum world. Links denote actions: L = Left, R = Right, S = Suck
Breadth-first search (BFS)
• Time complexity: O(b^(d+1)) (since the goal test is applied to nodes when they are expanded, not when they are generated)
• Space complexity: O(b^d) (see the sketch below)
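As a concrete illustration (not from the original slides), here is a minimal Python sketch of breadth-first graph search; the graph format (a dict of successor lists) and the goal_test callable are assumptions for the example. The goal test is applied when a node is generated, which corresponds to the cheaper O(b^d) variant.

from collections import deque

def breadth_first_search(graph, start, goal_test):
    """Return a path from start to the first goal found, or None.

    graph: dict mapping a state to an iterable of successor states (assumed format).
    The goal test is applied when a node is generated.
    """
    if goal_test(start):
        return [start]
    frontier = deque([start])          # FIFO queue
    parent = {start: None}             # doubles as the reached/explored set
    while frontier:
        state = frontier.popleft()
        for succ in graph.get(state, []):
            if succ not in parent:
                parent[succ] = state
                if goal_test(succ):    # goal test at generation time
                    path = [succ]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return list(reversed(path))
                frontier.append(succ)
    return None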
Depth-first search (DFS)
• DFS is not optimal
• b: branching factor; m: maximum depth of the search tree
• Space complexity ~ O(bm)
• Time complexity ~ O(b^m) (see the sketch below)
Figure 3.16: Depth-first search on a binary tree
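The linear O(bm) space bound comes from keeping only the current path, plus sibling nodes on the recursion stack, in memory. A minimal recursive depth-first tree-search sketch in Python (illustrative only); successors(state) is an assumed helper returning child states:

def depth_first_search(state, goal_test, successors):
    """Recursive depth-first tree search.

    Returns a path (list of states) to a goal, or None. Not optimal, and it can
    fail to terminate on infinite or cyclic state spaces.
    """
    if goal_test(state):
        return [state]
    for child in successors(state):
        result = depth_first_search(child, goal_test, successors)
        if result is not None:
            return [state] + result
    return None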
Depth-limited search
• Depth-limited search: nodes at depth ℓ are treated as if they have no successors (see the sketch below)
• Time complexity: O(b^ℓ)
• Space complexity: O(bℓ)
• Using the diameter of the state space gives a better depth limit
Figure 3.17: A recursive implementation of depth-limited tree search
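A Python sketch in the spirit of the recursive depth-limited search that Figure 3.17 refers to; goal_test and successors are assumed helpers, and the string 'cutoff' stands in for the cutoff value that distinguishes hitting the limit from genuine failure:

def depth_limited_search(state, goal_test, successors, limit):
    """Depth-limited search: states at depth `limit` are treated as having no successors.

    Returns a path to a goal, 'cutoff' if the limit was reached somewhere, or None (failure).
    """
    if goal_test(state):
        return [state]
    if limit == 0:
        return 'cutoff'                # ran into the depth limit
    cutoff_occurred = False
    for child in successors(state):
        result = depth_limited_search(child, goal_test, successors, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return 'cutoff' if cutoff_occurred else None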
Iterative deepening depth-first search
function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution, or failure
    for depth = 0 to ∞ do
        result ← DEPTH-LIMITED-SEARCH(problem, depth)
        if result ̸= cutoff then return result
Figure 3.18: The iterative deepening search algorithm, which repeatedly applies depth-limited search with increasing limits
Figure 3.19: Four iterations of iterative deepening search on a binary tree (limits 0 through 3)
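A runnable counterpart of the pseudocode above, under the same assumptions as the earlier sketches (hypothetical goal_test and successors helpers); each iteration repeats the shallower work, but the total cost is dominated by the deepest level:

from itertools import count

def iterative_deepening_search(start, goal_test, successors):
    """Iterative deepening: run depth-limited search with limits 0, 1, 2, ...

    Returns a path to a goal, or None if depth-limited search reports failure
    (meaning no solution exists at any depth).
    """
    def dls(state, limit):
        # Depth-limited search returning a path, 'cutoff', or None (failure).
        if goal_test(state):
            return [state]
        if limit == 0:
            return 'cutoff'
        cutoff_occurred = False
        for child in successors(state):
            result = dls(child, limit - 1)
            if result == 'cutoff':
                cutoff_occurred = True
            elif result is not None:
                return [state] + result
        return 'cutoff' if cutoff_occurred else None

    for depth in count(0):              # for depth = 0 to infinity
        result = dls(start, depth)
        if result != 'cutoff':
            return result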
Bidirectional search
• Run one search forward from the start and one backward from the goal (see the sketch below)
• The backward search requires a way of computing the predecessors of a state x; if all actions are reversible, the predecessors of x are the same as its successors
Figure 3.20: A schematic view of a bidirectional search that is about to succeed when a branch from the start node meets a branch from the goal node
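A minimal sketch of bidirectional breadth-first search on an undirected graph given as an adjacency dict (an assumed format for the example); each frontier only has to reach roughly half the solution depth, which is the intuition behind the O(b^(d/2)) bound. It returns a connecting path; guaranteeing the very shortest path needs a slightly more careful stopping rule.

from collections import deque

def bidirectional_search(graph, start, goal):
    """Bidirectional BFS on an undirected graph {node: [neighbors]}.

    Returns a list of nodes from start to goal, or None if no path exists.
    """
    if start == goal:
        return [start]
    parents_f = {start: None}           # forward search: node -> parent toward start
    parents_b = {goal: None}            # backward search: node -> parent toward goal
    frontier_f = deque([start])
    frontier_b = deque([goal])

    def expand(frontier, parents, other_parents):
        # Expand one whole layer; return a node where the two searches meet, if any.
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for nbr in graph.get(node, []):
                if nbr not in parents:
                    parents[nbr] = node
                    if nbr in other_parents:
                        return nbr
                    frontier.append(nbr)
        return None

    while frontier_f and frontier_b:
        # Expand the smaller frontier first (a common heuristic).
        if len(frontier_f) <= len(frontier_b):
            meet = expand(frontier_f, parents_f, parents_b)
        else:
            meet = expand(frontier_b, parents_b, parents_f)
        if meet is not None:
            # Stitch the two half-paths together at the meeting node.
            path_f, n = [], meet
            while n is not None:
                path_f.append(n)
                n = parents_f[n]
            path_f.reverse()
            path_b, n = [], parents_b[meet]
            while n is not None:
                path_b.append(n)
                n = parents_b[n]
            return path_f + path_b
    return None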
3.4.7 Comparing uninformed search strategies
Criterion    Breadth-First   Uniform-Cost         Depth-First   Depth-Limited   Iterative Deepening   Bidirectional
Complete?    Yes(a)          Yes(a,b)             No            No              Yes(a)                Yes(a,d)
Time         O(b^d)          O(b^(1+⌊C*/ε⌋))      O(b^m)        O(b^ℓ)          O(b^d)                O(b^(d/2))
Space        O(b^d)          O(b^(1+⌊C*/ε⌋))      O(bm)         O(bℓ)           O(bd)                 O(b^(d/2))
Optimal?     Yes(c)          Yes                  No            No              Yes(c)                Yes(c,d)
Figure 3.21: Evaluation of tree-search strategies. b: branching factor; d: depth of the shallowest solution; m: maximum depth of the search tree; ℓ: depth limit.
Superscripts: (a) complete if b is finite; (b) complete if step costs ≥ ε for positive ε; (c) optimal if step costs are all identical; (d) if both directions use breadth-first search.
Figure 3.22: Values of hSLD, the straight-line distances to Bucharest, for Arad, Bucharest, Craiova, Drobeta, Eforie, Fagaras, Giurgiu, Hirsova, Iasi, Lugoj, Mehadia, Neamt, Oradea, Pitesti, Rimnicu Vilcea, Sibiu, Timisoara, Urziceni, Vaslui, and Zerind
Greedy best-first search
• Greedy best-first search with hSLD finds the path Arad -> Sibiu -> Fagaras -> Bucharest, which is not optimal
• A shorter one: Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest
Figure 3.23: Stages in a greedy best-first tree search for Bucharest with the straight-line distance heuristic hSLD; nodes are labeled with their h-values; panels run from (a) the initial state to (d) after expanding Fagaras
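A minimal greedy best-first sketch (not from the slides): the frontier is a priority queue ordered by h(n) alone; goal_test, successors, and h are assumed helpers. With hSLD this would follow the Fagaras route rather than the cheaper Rimnicu Vilcea route.

import heapq
from itertools import count

def greedy_best_first_search(start, goal_test, successors, h):
    """Expand the node that appears closest to the goal according to h alone.

    successors(state) is assumed to return child states; h(state) is the heuristic,
    e.g. straight-line distance to the goal. Complete on finite graphs thanks to
    the explored set, but not optimal.
    """
    tie = count()                                   # tie-breaker so heapq never compares states
    frontier = [(h(start), next(tie), [start])]     # priority queue ordered by h
    explored = set()
    while frontier:
        _, _, path = heapq.heappop(frontier)
        state = path[-1]
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for child in successors(state):
            if child not in explored:
                heapq.heappush(frontier, (h(child), next(tie), path + [child]))
    return None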
A* search
Figure 3.24: Stages in an A∗ search for Bucharest. Nodes are labeled with f = g + h; the h values are the straight-line distances to Bucharest. Panels run from (a) the initial state to (e) after expanding Fagaras.
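A minimal A∗ graph-search sketch (illustrative, not from the slides); successors(state) is assumed to yield (child, step_cost) pairs and h is the heuristic. Expanding by lowest f = g + h and keeping the best g found per state yields an optimal solution when h is consistent.

import heapq
from itertools import count

def a_star_search(start, goal_test, successors, h):
    """A* graph search: expand the node with the lowest f(n) = g(n) + h(n).

    Returns (path, cost) for the first goal popped from the frontier, or None.
    """
    tie = count()
    frontier = [(h(start), next(tie), 0, [start])]   # entries: (f, tie, g, path)
    best_g = {start: 0}                              # cheapest g found so far per state
    while frontier:
        f, _, g, path = heapq.heappop(frontier)
        state = path[-1]
        if goal_test(state):
            return path, g
        if g > best_g.get(state, float('inf')):
            continue                                 # stale queue entry
        for child, step_cost in successors(state):
            g2 = g + step_cost
            if g2 < best_g.get(child, float('inf')):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), next(tie), g2, path + [child]))
    return None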
Optimality of A*
• A∗ has the following properties:
• The tree-search version of A∗ is optimal if h(n) is admissible
• The graph-search version is optimal if h(n) is consistent
• If h(n) is consistent, then the values of f(n) along any path are nondecreasing
• Proof: Suppose nʹ is a successor of n via action a; then g(nʹ) = g(n) + c(n, a, nʹ)
• So f(nʹ) = g(nʹ) + h(nʹ) = g(n) + c(n, a, nʹ) + h(nʹ) ≥ g(n) + h(n) = f(n), using the consistency condition h(n) ≤ c(n, a, nʹ) + h(nʹ)
Figure 3.25: Map of Romania showing contours at f = 380, f = 400, and f = 420, with Arad as the start state
Recursive best-first search (RBFS)
function RECURSIVE-BEST-FIRST-SEARCH(problem) returns a solution, or failure
    return RBFS(problem, MAKE-NODE(problem.INITIAL-STATE), ∞)

function RBFS(problem, node, f_limit) returns a solution, or failure and a new f-cost limit
    if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
    successors ← [ ]
    for each action in problem.ACTIONS(node.STATE) do
        add CHILD-NODE(problem, node, action) into successors
    if successors is empty then return failure, ∞
    for each s in successors do    /* update f with value from previous search, if any */
        s.f ← max(s.g + s.h, node.f)
    loop do
        best ← the lowest f-value node in successors
        if best.f > f_limit then return failure, best.f
        alternative ← the second-lowest f-value among successors
        result, best.f ← RBFS(problem, best, min(f_limit, alternative))
        if result ̸= failure then return result
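A runnable Python sketch mirroring the pseudocode above; goal_test, successors(state) yielding (child, step_cost) pairs, and the heuristic h are assumed helpers, and nodes are kept as small objects so the backed-up f-values can be stored:

import math

class _Node:
    def __init__(self, state, g, h_val, path):
        self.state, self.g, self.h, self.path = state, g, h_val, path
        self.f = g + h_val

def recursive_best_first_search(start, goal_test, successors, h):
    """RBFS: best-first behaviour in linear space (tree-search semantics)."""
    def rbfs(node, f_limit):
        if goal_test(node.state):
            return node.path, node.f
        succs = []
        for child, cost in successors(node.state):
            s = _Node(child, node.g + cost, h(child), node.path + [child])
            s.f = max(s.g + s.h, node.f)        # inherit the parent's backed-up value
            succs.append(s)
        if not succs:
            return None, math.inf               # failure
        while True:
            succs.sort(key=lambda s: s.f)
            best = succs[0]
            if best.f > f_limit:
                return None, best.f             # fail and report the new f-cost limit
            alternative = succs[1].f if len(succs) > 1 else math.inf
            result, best.f = rbfs(best, min(f_limit, alternative))
            if result is not None:
                return result, best.f

    result, _ = rbfs(_Node(start, 0, h(start), [start]), math.inf)
    return result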
Recursive best-first search (RBFS)
Figure 3.27: Stages in an RBFS search for the shortest route to Bucharest; the f-limit of each recursive call is shown above the current node, and every node is labeled with its f-cost. (a) After expanding Arad, Sibiu, and Rimnicu Vilcea; (c) After switching back to Rimnicu Vilcea and expanding Pitesti.
SMA* (simplified MA*)
• SMA∗ proceeds just like A∗, expanding the best leaf until memory is full
• It cannot add a new node to the search tree without dropping an old one
• SMA* always drops the worst leaf node—the one with the highest f-value
• SMA* then backs up the value of the forgotten node to its parent
• In this way, the ancestor of a forgotten subtree knows the quality of the
best path in that subtree
• SMA* regenerates the subtree only when all other paths have been shown
to look worse than the path it has forgotten
• SMA∗ expands the best leaf and deletes the worst leaf
• In hard cases, SMA∗ is forced to switch back and forth continually among many candidate solution paths, a problem analogous to thrashing (paging) in disk systems
Q & A?