
Artificial

Intelligence
Solving Problems by Searching
Presented by: Dinh Han

9/6/18 Artificial Intelligence 1


References and Evaluation
• References:
• Stuart J. Russell, Peter Norvig, Artificial Intelligence - A Modern Approach, 3rd edition,
Pearson, 2016
• Prateek Joshi, Artificial Intelligence with Python, Packt Publishing, 2017
• Evaluation:
• Projects/paper presentations/writing papers: 30%
• Midterm exam: 30%
• Final exam: 40%
• Prerequisites:
• Students should have fundamental knowledge of discrete mathematics, data
structures and algorithms, and the complexity of algorithms



Outline
1. Defining problems and solutions
2. Toy and real-world problems
3. Searching for solutions
4. Infrastructure for search algorithms
5. Measuring problem-solving performance
6. Uninformed and informed search strategies
7. Summary



A simplified road map of part of Romania



Well-defined problems and solutions
A problem can be defined formally by five components:
• Initial state: e.g., In(Arad)
• Actions: given a state s, ACTIONS(s) returns the set of actions
that can be executed in s
• Transition model: a description of what each action does; it implicitly
defines the state space (a graph). A path in the state space is a
sequence of states connected by a sequence of actions
• RESULT(s, a): returns the state that results from doing action a in state s
• Ex: RESULT(In(Arad), Go(Zerind)) = In(Zerind)
• Goal test: determines whether a given state is a goal state; e.g.,
{In(Bucharest)}
• Path cost: a function that assigns a numeric cost to each path; the step
cost of taking action a in state s to reach state sʹ is denoted by c(s, a, sʹ)
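The five components can be sketched in Python for the Romania route-finding problem; the class and method names below are illustrative, not a fixed API:

```python
class RouteProblem:
    """Romania route-finding as a formal problem (illustrative sketch)."""
    def __init__(self, initial, goal, road_map):
        self.initial = initial      # initial state, e.g. "Arad"
        self.goal = goal            # goal state, e.g. "Bucharest"
        self.road_map = road_map    # {city: {neighbor: step_cost}}

    def actions(self, s):
        """ACTIONS(s): the Go(city) moves available in state s."""
        return sorted(self.road_map[s])

    def result(self, s, a):
        """RESULT(s, a): driving to city a puts us in city a."""
        return a

    def goal_test(self, s):
        return s == self.goal

    def step_cost(self, s, a, s2):
        """c(s, a, s'): road distance of this step."""
        return self.road_map[s][a]
```

Here `result("Arad", "Zerind")` returning `"Zerind"` mirrors RESULT(In(Arad), Go(Zerind)) = In(Zerind).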
What is a Solution?
• A solution to a problem is an action sequence that leads from the
initial state to a goal state
• Solution quality is measured by the path cost function
• An optimal solution has the lowest path cost among all solutions


Example Problems
• A toy problem is intended to illustrate or exercise various problem-
solving methods
• A real-world problem is one whose solutions people actually care
about



Vacuum world

The state space for the vacuum world. Links denote actions: L = Left, R = Right, S = Suck



Vacuum world
• States: agent location and the dirt locations. The agent is in one of two
locations, each of which might or might not contain dirt. Thus, there are
2 × 2^2 = 8 possible world states. A larger environment with n locations has
n × 2^n states
• Initial state: Any state can be designated as the initial state
• Actions: In this simple environment, each state has just three actions: Left,
Right, and Suck
• Transition model: The actions have their expected effects, except that
moving Left in the leftmost square, moving Right in the rightmost square,
and Sucking in a clean square have no effect
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the
path
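This transition model can be sketched directly; representing a state as (agent_location, set_of_dirty_squares) with the squares numbered 0 and 1 is an assumption of this sketch, not of the slides:

```python
from itertools import product

def result(state, action):
    """Vacuum-world RESULT(s, a) for the two-square environment."""
    loc, dirt = state
    if action == "Left":
        return (max(loc - 1, 0), dirt)   # no effect in the leftmost square
    if action == "Right":
        return (min(loc + 1, 1), dirt)   # no effect in the rightmost square
    if action == "Suck":
        return (loc, dirt - {loc})       # no effect in a clean square
    raise ValueError(action)

# Enumerate all 2 x 2^2 = 8 world states:
states = [(loc, frozenset(d))
          for loc, d in product([0, 1], [(), (0,), (1,), (0, 1)])]
assert len(states) == 8
```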



8-puzzle

A typical instance of the 8-puzzle.



8-puzzle
• States: A state description specifies the location of each of the eight
tiles and the blank in one of the nine squares
• Initial state: Any state can be designated as the initial state
• Actions: The simplest formulation defines the actions as movements
of the blank space Left, Right, Up, or Down
• Transition model: Given a state and action, this returns the resulting
state
• Goal test: This checks whether the state matches the goal
• Path cost: Each step costs 1, so the path cost is the number of steps in
the path



8-queens

Almost a solution to the 8-queens problem



8-queens
• States: Any arrangement of 0 to 8 queens on the board is a state
• Initial state: No queens on the board
• Actions: Add a queen to any empty square
• Transition model: Returns the board with a queen added to the specified square
• Goal test: 8 queens are on the board, none attacked

In this formulation, we have 64 · 63 · · · 57 ≈ 1.8 × 10^14 possible sequences to
investigate => we can reduce the number of cases:
• States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in the
leftmost n columns, with no queen attacking another
• Actions: Add a queen to any square in the leftmost empty column such that it is
not attacked by any other queen
This formulation reduces the 8-queens state space from 1.8 × 10^14 to just 2,057
What if we solve the million-queens problem???
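The 2,057 figure can be checked by enumerating the incremental formulation directly; `count_states` is an illustrative helper, not from the slides:

```python
def count_states(placed=()):
    """Count states of the incremental 8-queens formulation: n queens
    (0 <= n <= 8), one per column in the leftmost n columns, no attacks.
    `placed` holds the row of the queen in each filled column."""
    col = len(placed)
    if col == 8:
        return 1                  # a complete state; no more columns
    total = 1                     # count this (possibly partial) state
    for row in range(8):
        # legal iff no shared row and no shared diagonal with any queen
        if all(row != r and abs(row - r) != col - c
               for c, r in enumerate(placed)):
            total += count_states(placed + (row,))
    return total

print(count_states())  # → 2057
```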



Donald Knuth’s (1964) Toy Problem
• Starting with the number 4, a sequence of factorial, square root, and
floor operations will reach any desired positive integer. For example,
we can reach 5 from 4 as follows:

5 = ⌊√√√√√((4!)!)⌋

Note that the expression passes through (4!)! = 24! = 620,448,401,733,239,439,360,000,
so even for a small target like 5 the state space contains astronomically large
numbers (it is effectively infinite)



Donald Knuth’s (1964) Toy Problem
• States: Positive numbers
• Initial state: 4
• Actions: Apply factorial, square root, or floor operation (factorial for
integers only)
• Transition model: As given by the mathematical definitions of the
operations
• Goal test: State is the desired positive integer
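The route from 4 to 5 can be checked directly; double-precision floats suffice here, since (4!)! ≈ 6.2 × 10^23 fits comfortably in a float:

```python
import math

n = math.factorial(math.factorial(4))   # (4!)! = 24!
print(n)                                # → 620448401733239439360000
x = float(n)
for _ in range(5):                      # five square roots...
    x = math.sqrt(x)
print(math.floor(x))                    # ...then one floor → 5
```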



Real-world problems
Airline travel problems
• States: Each state obviously includes a location (e.g., an airport) and the
current time, fare bases, domestic or international…
• Initial state: This is specified by the user’s query
• Actions: Take any flight from the current location, in any seat class, leaving
after the current time, leaving enough time for within-airport transfer if
needed
• Transition model: The state resulting from taking a flight will have the
flight’s destination as the current location and the flight’s arrival time as the
current time
• Goal test: Are we at the final destination specified by the user?
• Path cost: This depends on monetary cost, waiting time, flight time,
customs and immigration procedures, seat quality, time of day, type of
airplane, frequent-flyer mileage awards…
Real-world problems
• Touring Problems
• Traveling salesperson problem (TSP): known to be NP-hard
• VLSI layout
• Robot navigation: a generalization of the route-finding problem
• Automatic assembly sequencing



Searching for Solutions


Exploring Nodes


Infrastructure for search algorithms
For each node n of the tree, we have a structure that contains four
components:
• n.STATE: the state in the state space to which the node corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path from the
initial state to the node
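A minimal sketch of this node structure; the field names follow the slide, while `child_node` and `solution` are illustrative helpers:

```python
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state          # n.STATE
        self.parent = parent        # n.PARENT
        self.action = action        # n.ACTION
        self.path_cost = path_cost  # n.PATH-COST, i.e. g(n)

def child_node(problem, parent, action):
    """Build the node that applying `action` to `parent` generates."""
    state = problem.result(parent.state, action)
    cost = parent.path_cost + problem.step_cost(parent.state, action, state)
    return Node(state, parent, action, cost)

def solution(node):
    """Recover the action sequence by following PARENT pointers."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return actions[::-1]
```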





Measuring problem-solving performance
• Completeness: Is the algorithm guaranteed to find a solution when
there is one?
• Optimality: Does the strategy find the optimal solution (the one with the
lowest path cost among all solutions)?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to perform the
search?



Uninformed (Blind) Search Strategies
• Breadth-first search
• Uniform-cost search
• Depth-first search
• Depth-limited search
• Iterative deepening depth-first search
• Bidirectional search



Breadth-first search (BFS)



BFS - Complexity

Imagine searching a uniform tree where every state has b successors.

The total number of nodes generated is
b + b^2 + b^3 + · · · + b^d = O(b^d)

Time complexity is O(b^d); it would be O(b^(d+1)) if the goal test were
applied when a node is selected for expansion rather than when it is generated
Space complexity is O(b^d)
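A sketch of BFS over an explicit graph, applying the goal test when a node is generated (which is what keeps time at O(b^d) rather than O(b^(d+1))); the graph encoding is illustrative:

```python
from collections import deque

def breadth_first_search(start, goal, neighbors):
    """neighbors: {state: iterable of successor states}."""
    if start == goal:
        return [start]
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        for n in neighbors[path[-1]]:
            if n not in explored:
                if n == goal:            # goal test at generation time
                    return path + [n]
                explored.add(n)
                frontier.append(path + [n])
    return None                          # failure
```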



Time and memory requirements for breadth-first
search
• Assume branching factor b = 10; 1 million nodes/second; 1000
bytes/node



Uniform-cost search



Uniform-cost search
• An algorithm that is optimal with any step-cost function
=> Instead of expanding the shallowest node, uniform-cost search
expands the node n with the lowest path cost g(n) => needs a priority queue
Two key differences from BFS: the goal test is applied to a node when it
is selected for expansion (not when it is generated), and a test is added
in case a better path is found to a node currently on the frontier.

Time and space complexity: O(b^(1 + ⌊C*/ε⌋)), where
C*: the cost of the optimal solution
ε: a small positive constant bounding every step cost from below
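A sketch of uniform-cost search using heapq as the priority queue, run on a fragment of the Romania map (the distances are the textbook's; the function shape is illustrative):

```python
import heapq

def uniform_cost_search(start, goal, neighbors):
    """neighbors: {state: {successor: step_cost}}; returns (cost, path)."""
    frontier = [(0, start, [start])]     # priority queue ordered by g
    best_g = {start: 0}
    while frontier:
        g, s, path = heapq.heappop(frontier)
        if s == goal:                    # goal test at expansion time
            return g, path
        if g > best_g.get(s, float("inf")):
            continue                     # stale entry; a cheaper path exists
        for n, c in neighbors[s].items():
            if g + c < best_g.get(n, float("inf")):
                best_g[n] = g + c        # better path found to n
                heapq.heappush(frontier, (g + c, n, path + [n]))
    return None                          # failure

# Fragment of the Romania map:
roads = {
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}
print(uniform_cost_search("Sibiu", "Bucharest", roads))
# → (278, ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Note that the cheaper three-step route (cost 278) beats the two-step route through Fagaras (cost 310), which BFS, minimizing steps, would return.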



Depth-first search (DFS)

Figure 3.16: DFS on a binary tree, expanding the deepest unexpanded
node first until the goal M is reached.

• DFS is not optimal
• b: branching factor
• m: maximum depth
• Space complexity ~ O(bm)
• Time complexity ~ O(b^m)
Depth limited search
• Depth-limited search: nodes at depth ℓ are treated as if they have no
successors
• Time complexity: O(b^ℓ)
• Space complexity: O(bℓ)
• Knowing the diameter of the state space gives us a better depth limit


Depth limited search

function DEPTH-LIMITED-SEARCH(problem, limit) returns a solution, or failure/cutoff
  return RECURSIVE-DLS(MAKE-NODE(problem.INITIAL-STATE), problem, limit)

function RECURSIVE-DLS(node, problem, limit) returns a solution, or failure/cutoff
  if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
  else if limit = 0 then return cutoff
  else
    cutoff_occurred? ← false
    for each action in problem.ACTIONS(node.STATE) do
      child ← CHILD-NODE(problem, node, action)
      result ← RECURSIVE-DLS(child, problem, limit − 1)
      if result = cutoff then cutoff_occurred? ← true
      else if result ≠ failure then return result
    if cutoff_occurred? then return cutoff else return failure

Figure 3.17
Iterative deepening depth-first search

function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution, or failure
  for depth = 0 to ∞ do
    result ← DEPTH-LIMITED-SEARCH(problem, depth)
    if result ≠ cutoff then return result

Figure 3.18

Figure 3.19: Four iterations of iterative deepening search on a binary tree


Bidirectional search
• Run two simultaneous searches, one forward from the initial state and one
backward from the goal, hoping that the two frontiers meet in the middle
• The motivation: b^(d/2) + b^(d/2) is much less than b^d

Figure 3.20: A branch from Start about to meet a branch from Goal

Comparing uninformed search strategies

Criterion   BFS       Uniform-Cost        DFS      Depth-Limited  Iterative-Deepening  Bidirectional
Complete?   Yes(a)    Yes(a,b)            No       No             Yes(a)               Yes(a,d)
Time        O(b^d)    O(b^(1+⌊C*/ε⌋))     O(b^m)   O(b^ℓ)         O(b^d)               O(b^(d/2))
Space       O(b^d)    O(b^(1+⌊C*/ε⌋))     O(bm)    O(bℓ)          O(bd)                O(b^(d/2))
Optimal?    Yes(c)    Yes                 No       No             Yes(c)               Yes(c,d)

Figure 3.21: Evaluation of tree-search strategies. b is the branching factor;
d is the depth of the shallowest solution; m is the maximum depth of the
search tree; ℓ is the depth limit. Caveats: (a) complete if b is finite;
(b) complete if step costs ≥ ε for positive ε; (c) optimal if step costs are
all identical; (d) if both directions use breadth-first search.


Informed (Heuristic) Search Strategies
• Uses problem-specific knowledge beyond the definition of the problem
itself => can find solutions more efficiently than can an uninformed
strategy
• The general approach we consider is called best-first search
• Best-first search is an instance of the general TREE-SEARCH or GRAPH-
SEARCH in which a node is selected for expansion based on an evaluation
function
• The evaluation function f(n) is construed as a cost estimate, so the node
with the lowest evaluation is expanded first
• Most best-first algorithms include as a component of f a heuristic function,
denoted h(n)
• h(n) = estimated cost of the cheapest path from the state at node n to a
goal state
• if n is a goal node, then h(n) = 0
Greedy best-first search
• Tries to expand the node that is closest to the goal
• Route-finding problems in Romania:
• use the straight-line distance heuristic, which we will call hSLD

Arad       366    Mehadia          241
Bucharest    0    Neamt            234
Craiova    160    Oradea           380
Drobeta    242    Pitesti          100
Eforie     161    Rimnicu Vilcea   193
Fagaras    176    Sibiu            253
Giurgiu     77    Timisoara        329
Hirsova    151    Urziceni          80
Iasi       226    Vaslui           199
Lugoj      244    Zerind           374

Figure 3.22: Values of hSLD (straight-line distances to Bucharest)
Greedy best-first search

Figure 3.23: Stages in a greedy best-first tree search for Bucharest with the
straight-line distance heuristic hSLD: (a) the initial state; (b) after
expanding Arad; (c) after expanding Sibiu; (d) after expanding Fagaras.
Nodes are labeled with their h-values.

The route found by greedy best-first search:
Arad -> Sibiu -> Fagaras -> Bucharest (450 km)

A shorter one exists:
Arad -> Sibiu -> Rimnicu Vilcea -> Pitesti -> Bucharest (418 km)

A* search: Minimizing the total estimated solution cost
• It evaluates nodes by combining g(n), the cost to reach the node, and
h(n), the cost to get from the node to the goal: f(n) = g(n) + h(n)
• f (n) = estimated cost of the cheapest solution through n
• provided that the heuristic function h(n) satisfies certain conditions,
A∗ search is both complete and optimal.
• The algorithm is identical to UNIFORM-COST-SEARCH except that A∗
uses g + h instead of g
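A sketch of A* on a fragment of the Romania map; the distances and hSLD values are the textbook's, while the function shape and the graph encoding are illustrative:

```python
import heapq

# Fragment of the Romania road map (km):
roads = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Oradea": {"Zerind": 71, "Sibiu": 151},
    "Timisoara": {"Arad": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
    "Craiova": {"Rimnicu Vilcea": 146, "Pitesti": 138},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}
# Straight-line distances to Bucharest (Figure 3.22):
h_sld = {"Arad": 366, "Bucharest": 0, "Craiova": 160, "Fagaras": 176,
         "Oradea": 380, "Pitesti": 100, "Rimnicu Vilcea": 193,
         "Sibiu": 253, "Timisoara": 329, "Zerind": 374}

def a_star(start, goal):
    """Uniform-cost search ordered by f = g + h instead of g."""
    frontier = [(h_sld[start], 0, start, [start])]  # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, s, path = heapq.heappop(frontier)
        if s == goal:                    # goal test at expansion time
            return g, path
        for n, c in roads[s].items():
            g2 = g + c
            if g2 < best_g.get(n, float("inf")):
                best_g[n] = g2
                heapq.heappush(frontier, (g2 + h_sld[n], g2, n, path + [n]))
    return None

print(a_star("Arad", "Bucharest"))
# → (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Unlike greedy best-first search (which returns the 450 km route through Fagaras), A* with the admissible hSLD finds the optimal 418 km route.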



Conditions for optimality: Admissibility and consistency (monotonicity)
• The first condition for optimality is that h(n) be an admissible
heuristic
• An admissible heuristic is one that never overestimates the cost to reach
the goal
• Admissible heuristics are by nature optimistic
• A slightly stronger condition is called consistency (or monotonicity)
• A heuristic h(n) is consistent if, for every node n and every successor
nʹ of n generated by any action a, the estimated cost of reaching the
goal from n is no greater than the step cost of getting to nʹ plus the
estimated cost of reaching the goal from nʹ:
h(n) ≤ c(n, a, nʹ) + h(nʹ)
(a form of the general triangle inequality)
A* search

Figure 3.24: Stages in an A∗ search for Bucharest: (a) the initial state;
(b) after expanding Arad; (c) after expanding Sibiu; (d) after expanding
Rimnicu Vilcea; (e) after expanding Fagaras; (f) after expanding Pitesti.
Nodes are labeled with f = g + h. The h values are the straight-line
distances to Bucharest.

Optimality of A*
• The tree-search version of A∗ is optimal if h(n) is admissible
• The graph-search version is optimal if h(n) is consistent
• If h(n) is consistent, then the values of f(n) along any path are
nondecreasing
• Proof: Suppose nʹ is a successor of n; then g(nʹ) = g(n) + c(n, a, nʹ), so
f(nʹ) = g(nʹ) + h(nʹ) = g(n) + c(n, a, nʹ) + h(nʹ) ≥ g(n) + h(n) = f(n)
• Whenever A∗ selects a node n for expansion, the optimal path to that node
has been found
Map of Romania showing contours
• Contours at f = 380, f = 400, and f = 420, with Arad as the start state
• Nodes inside a given contour have f-costs less than or equal to the
contour value

Figure 3.25
Map of Romania showing contours
If C* is the cost of the optimal solution path, then we can say the following:
• A* expands all nodes with f(n) < C*
• A* might then expand some of the nodes right on the "goal contour" (where f(n) =
C*) before selecting a goal node
• A* expands no nodes with f(n) > C*; such nodes are said to be pruned
Note:
Any algorithm that does not expand all nodes with f(n) < C* runs the risk of missing
the optimal solution
The absolute error is defined as ∆ ≡ h* − h, where h* is the actual cost of getting
from the root to the goal. The relative error is defined as ε ≡ (h* − h)/h*
The time complexity ~ O(b^(εd)) = O((b^ε)^d). Because A* keeps all generated nodes
in memory, it usually runs out of space long before it runs out of time
=> A∗ is not practical for many large-scale problems
Memory-bounded heuristic search
• Iterative-deepening A* (IDA*) algorithm: adapts the idea of iterative
deepening to the heuristic search context
• The main difference between IDA* and standard iterative deepening
is that the cutoff used is the f -cost (g + h) rather than the depth
• At each iteration, the cutoff value is the smallest f-cost of any node
that exceeded the cutoff on the previous iteration
• IDA* is practical for many problems with unit step costs and avoids
the substantial overhead associated with keeping a sorted queue of
nodes
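A sketch of IDA* for an explicit graph with a heuristic lookup table, implementing the f-cost cutoff loop described above; the names and graph encoding are illustrative:

```python
def ida_star(start, goal, neighbors, h):
    """neighbors: {state: {successor: step_cost}}; h: heuristic table."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h[node]
        if f > bound:
            return f                     # report the f that broke the cutoff
        if node == goal:
            return path
        minimum = float("inf")
        for n, cost in neighbors[node].items():
            if n not in path:            # avoid cycles along the current path
                t = search(path + [n], g + cost, bound)
                if isinstance(t, list):
                    return t             # found a solution path
                minimum = min(minimum, t)
        return minimum

    bound = h[start]                     # first cutoff: f(root) = h(root)
    while True:
        t = search([start], 0, bound)
        if isinstance(t, list):
            return t
        if t == float("inf"):
            return None                  # no solution
        bound = t                        # smallest f that exceeded old cutoff
```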



Recursive best-first search (RBFS)
• RBFS: a simple recursive algorithm that attempts to mimic the
operation of standard best-first search, but using only linear space
• Similar to recursive depth-first search, but it uses the f-limit variable
to keep track of the f-value of the best alternative path available from
any ancestor of the current node
• If the current node exceeds this limit, the recursion unwinds back to
the alternative path
• As the recursion unwinds, RBFS replaces the f-value of each node
along the path with a backed-up value—the best f-value of its
children
• RBFS remembers the f-value of the best leaf in the forgotten subtree
and can therefore decide whether it’s worth reexpanding the subtree
at some later time



Recursive best-first search (RBFS)

function RECURSIVE-BEST-FIRST-SEARCH(problem) returns a solution, or failure
  return RBFS(problem, MAKE-NODE(problem.INITIAL-STATE), ∞)

function RBFS(problem, node, f_limit) returns a solution, or failure and a new f-cost limit
  if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
  successors ← [ ]
  for each action in problem.ACTIONS(node.STATE) do
    add CHILD-NODE(problem, node, action) into successors
  if successors is empty then return failure, ∞
  for each s in successors do  /* update f with value from previous search, if any */
    s.f ← max(s.g + s.h, node.f)
  loop do
    best ← the lowest f-value node in successors
    if best.f > f_limit then return failure, best.f
    alternative ← the second-lowest f-value among successors
    result, best.f ← RBFS(problem, best, min(f_limit, alternative))
    if result ≠ failure then return result

Figure 3.26




Recursive best-first search (RBFS)

Figure 3.27: Stages in an RBFS search for the shortest route to Bucharest:
(a) after expanding Arad, Sibiu, and Rimnicu Vilcea; (b) after unwinding
back to Sibiu and expanding Fagaras; (c) after switching back to Rimnicu
Vilcea and expanding Pitesti. The f-limit value for each recursive call is
shown on top of each current node.
SMA* (simplified MA*)
• SMA∗ proceeds just like A∗, expanding the best leaf until memory is full
• It cannot add a new node to the search tree without dropping an old one
• SMA* always drops the worst leaf node—the one with the highest f-value
• SMA* then backs up the value of the forgotten node to its parent
• In this way, the ancestor of a forgotten subtree knows the quality of the
best path in that subtree
• SMA* regenerates the subtree only when all other paths have been shown
to look worse than the path it has forgotten
• SMA∗ expands the best leaf and deletes the worst leaf
• It often happens that SMA∗ is forced to switch back and forth continually
among many candidate solution paths => a thrashing problem (as in paging)



Summary
Ø How to define problems and solve them
Ø Toy problems vs real-world problems
Ø Representing state spaces
Ø Understanding and implementing uninformed (blind) search and
informed (heuristic) search strategies


Thank you for your attention!

Q & A?
