
Problem Solving


Problem Solving

Russell and Norvig: Chapter 3

CMSC 421, Fall 2006

Problem-Solving Agent

[Diagram: an agent connected to its environment through sensors and actuators; the "?" marks the decision process to be filled in]

Formulate Goal
Formulate Problem
States
Actions
Find Solution

Example: Route finding

Holiday Planning
On holiday in Romania; Currently in Arad.
Flight leaves tomorrow from Bucharest.
Formulate Goal:
Be in Bucharest

Formulate Problem:
States: various cities
Actions: drive between cities

Find solution:
Sequence of cities: Arad, Sibiu, Fagaras, Bucharest

Problem Solving

[Diagram: a solution path through the state space, from the Start state to a Goal state via actions]

Vacuum World

[Figure: the two-square vacuum world]

Problem-solving agent
Four general steps in problem solving:

Goal formulation
What are the successful world states

Problem formulation
What actions and states to consider given the goal

Search
Determine possible sequences of actions that lead to states of known value, then choose the best sequence.

Execute
Given the solution, perform the actions.

Problem-solving agent
function SIMPLE-PROBLEM-SOLVING-AGENT(percept) return an action
  static: seq, an action sequence
          state, some description of the current world state
          goal, a goal
          problem, a problem formulation

  state ← UPDATE-STATE(state, percept)
  if seq is empty then
    goal ← FORMULATE-GOAL(state)
    problem ← FORMULATE-PROBLEM(state, goal)
    seq ← SEARCH(problem)
  action ← FIRST(seq)
  seq ← REST(seq)
  return action
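A minimal Python sketch of this control loop; the four hook methods mirror UPDATE-STATE, FORMULATE-GOAL, FORMULATE-PROBLEM and SEARCH and are placeholders to be filled in, not a fixed API:

```python
class SimpleProblemSolvingAgent:
    """Sketch of the agent loop above; subclass and fill in the four hooks."""
    def __init__(self, initial_state=None):
        self.seq = []                  # remaining action sequence
        self.state = initial_state

    # Hooks corresponding to UPDATE-STATE, FORMULATE-GOAL,
    # FORMULATE-PROBLEM and SEARCH in the pseudocode.
    def update_state(self, state, percept):   raise NotImplementedError
    def formulate_goal(self, state):          raise NotImplementedError
    def formulate_problem(self, state, goal): raise NotImplementedError
    def search(self, problem):                raise NotImplementedError

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = list(self.search(problem) or [])
        # FIRST(seq) is returned; REST(seq) is kept for the next call.
        return self.seq.pop(0) if self.seq else None
```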

Assumptions Made (for now)

The environment is static
The environment is discretizable
The environment is observable
The actions are deterministic

Problem formulation
A problem is defined by:

An initial state, e.g. Arad

A successor function S(x) = set of action-state pairs,
e.g. S(Arad) = {<Arad → Zerind, Zerind>, …}

initial state + successor function = state space

A goal test, which can be
Explicit, e.g. x = "at Bucharest"
Implicit, e.g. Checkmate(x)

A path cost (additive)
e.g. sum of distances, number of actions executed, …
c(x, a, y) is the step cost, assumed to be ≥ 0

A solution is a sequence of actions from the initial state to a goal state.
An optimal solution has the lowest path cost.
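For illustration, one way to encode such a problem in Python for the Romania route-finding example; the class layout is my own choice, and the road map is only a small subset of the textbook's (distances in km):

```python
# A problem = initial state + successor function + goal test + path cost.
# Illustrative subset of the Romania road map (distances in km).
ROADS = {
    'Arad':           {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Sibiu':          {'Arad': 140, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras':        {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti':        {'Rimnicu Vilcea': 97, 'Bucharest': 101},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial, self.goal = initial, goal

    def successors(self, state):
        """S(x): set of <action, result-state> pairs, e.g. <'go Zerind', 'Zerind'>."""
        return [('go ' + city, city) for city in ROADS.get(state, {})]

    def goal_test(self, state):
        # Explicit goal test: x = "at Bucharest"
        return state == self.goal

    def step_cost(self, state, action, result):
        # c(x, a, y) >= 0: here, the road distance
        return ROADS[state][result]

problem = RouteProblem('Arad', 'Bucharest')
```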

Selecting a state space


The real world is absurdly complex.
The state space must be abstracted for problem solving.

(Abstract) state = set of real states.

(Abstract) action = complex combination of real actions.
e.g. Arad → Zerind represents a complex set of possible routes, detours, rest stops, etc.
The abstraction is valid if any path between two abstract states is reflected in the real world.

(Abstract) solution = set of real paths that are solutions in the real world.

Each abstract action should be easier to carry out than the original problem.

Example: vacuum world

States??
Initial state??
Actions??
Goal test??
Path cost??

Example: vacuum world

States?? Two locations, each with or without dirt: 2 × 2² = 8 states.
Initial state?? Any state can be the initial state.
Actions?? {Left, Right, Suck}
Goal test?? Check whether all squares are clean.
Path cost?? Number of actions to reach the goal.

Example: 8-puzzle

States??
Initial state??
Actions??
Goal test??
Path cost??

Example: 8-puzzle

States?? Integer location of each tile.
Initial state?? Any state can be the initial state.
Actions?? {Left, Right, Up, Down} (moves of the blank)
Goal test?? Check whether the goal configuration is reached.
Path cost?? Number of actions to reach the goal.

Example: 8-puzzle

[Figure: a sample initial state and the goal state]
Example: 8-puzzle
Size of the state space = 9!/2 = 181,440

8-puzzle:  181,440 states       → 0.18 sec
15-puzzle: ~0.65 × 10^12 states → 6 days
24-puzzle: ~0.5 × 10^25 states  → 12 billion years

(at 10 million states/sec)
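As a quick check of the 8-puzzle count (not from the slides): half of the 9! arrangements of the 9 positions are reachable.

```python
import math

# 9 positions (8 tiles + the blank); only half of the permutations are reachable.
print(math.factorial(9) // 2)   # 181440
```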

Example: 8-queens

Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal.

A solution

Not a solution

Example: 8-queens problem

Incremental formulation vs. complete-state formulation


States??
Initial state??
Actions??
Goal test??
Path cost??

Example: 8-queens
Formulation #1:
States: any arrangement of
0 to 8 queens on the board
Initial state: 0 queens on the
board
Actions: add a
queen in any square
Goal test: 8 queens on the
board, none attacked
Path cost: none

64^8 states with 8 queens

Example: 8-queens
Formulation #2:
States: any arrangement of k = 0 to 8 queens in the k leftmost columns with none attacked
Initial state: 0 queens on the board
Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen
Goal test: 8 queens on the board

2,067 states
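A sketch of the formulation-#2 successor function, assuming a state is represented as a tuple of queen rows for the k leftmost columns (the representation and helper names are my own, not from the slides):

```python
def attacks(r1, c1, r2, c2):
    """True if queens at (r1, c1) and (r2, c2) attack each other."""
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)   # same row or same diagonal

def successors(state):
    """state = tuple of queen rows, one per filled leftmost column.
    Add a queen to the next column so that no existing queen attacks it."""
    col = len(state)
    if col == 8:
        return []
    return [state + (row,) for row in range(8)
            if not any(attacks(r, c, row, col) for c, r in enumerate(state))]

def goal_test(state):
    return len(state) == 8            # 8 non-attacking queens placed

# e.g. successors(()) yields the 8 one-queen states for the leftmost column
```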

Real-world Problems
Route finding
Touring problems
VLSI layout
Robot Navigation
Automatic assembly sequencing
Drug design
Internet searching

Example: robot assembly

States??
Initial state??
Actions??
Goal test??
Path cost??

Example: robot assembly

States?? Real-valued coordinates of the robot joint angles and of the parts of the object to be assembled.
Initial state?? Any arm position and object configuration.
Actions?? Continuous motions of the robot joints.
Goal test?? Complete assembly (without the robot).
Path cost?? Time to execute.

Basic search algorithms


How do we find the solutions of the previous problems?

Search the state space (remember that the complexity of the space depends on the state representation).
Here: search through explicit tree generation
  ROOT = initial state
  Nodes and leaves are generated through the successor function.

In general, search generates a graph (the same state can be reached through multiple paths).

Simple Tree Search Algorithm

function TREE-SEARCH(problem, strategy) return a solution or failure
  Initialize the search tree to the initial state of the problem
  do
    if no candidates for expansion then return failure
    choose a leaf node for expansion according to strategy
    if the node contains a goal state then return the solution
    else expand the node and add the resulting nodes to the search tree
  enddo

Exercise #1: Getting an Intro
Aka: the art of schmoozing

Take home points

Difference between State Space and Search Tree

Search of State Space

[Animation: successive expansions of the search tree superimposed on the state space]

Take home points

Difference between State Space and Search Tree
Blind Search
Learn names,

State space vs. search tree

A state is a (representation of) a physical configuration.

A node is a data structure belonging to a search tree.
A node has a parent and children, and includes path cost, depth, …
Here node = <state, parent-node, action, path-cost, depth>

FRINGE = the set of generated nodes that are not yet expanded.

Tree search algorithm

function TREE-SEARCH(problem, fringe) return a solution or failure
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node ← REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds
      then return SOLUTION(node)
    fringe ← INSERT-ALL(EXPAND(node, problem), fringe)

Tree search algorithm (2)

function EXPAND(node, problem) return a set of nodes
  successors ← the empty set
  for each <action, result> in SUCCESSOR-FN[problem](STATE[node]) do
    s ← a new NODE
    STATE[s] ← result
    PARENT-NODE[s] ← node
    ACTION[s] ← action
    PATH-COST[s] ← PATH-COST[node] + STEP-COST(node, action, s)
    DEPTH[s] ← DEPTH[node] + 1
    add s to successors
  return successors
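The two pieces of pseudocode translate almost line for line into Python. Below is a hedged sketch (the names are mine, not a prescribed API); it assumes a problem object with initial, successors, goal_test, and step_cost, shaped like the earlier RouteProblem sketch:

```python
class Node:
    """node = <state, parent-node, action, path-cost, depth>"""
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = (parent.path_cost if parent else 0) + step_cost
        self.depth = (parent.depth + 1) if parent else 0

    def solution(self):
        """Sequence of actions from the root to this node."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

def expand(node, problem):
    # EXPAND: one child node per <action, result> pair of the successor function.
    return [Node(result, node, action,
                 problem.step_cost(node.state, action, result))
            for action, result in problem.successors(node.state)]

def tree_search(problem, remove_first):
    """remove_first(fringe) implements the strategy (FIFO, LIFO, ...)."""
    fringe = [Node(problem.initial)]
    while fringe:
        node = remove_first(fringe)
        if problem.goal_test(node.state):
            return node.solution()
        fringe.extend(expand(node, problem))
    return None    # failure
```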

Search Strategies
A strategy is defined by picking the order of node expansion.

Performance measures:
Completeness: does it always find a solution if one exists?
Time complexity: number of nodes generated/expanded
Space complexity: maximum number of nodes in memory
Optimality: does it always find a least-cost solution?

Time and space complexity are measured in terms of
b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)

Uninformed search strategies
(a.k.a. blind search) = use only the information available in the problem definition.

When a strategy can determine whether one non-goal state is better than another → informed search.

Categories defined by the expansion algorithm:
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
Bidirectional search

Breadth-First Strategy
Expand shallowest unexpanded node
Implementation: fringe is a FIFO queue
New nodes are inserted at the end of the queue

[Animation over a small tree with nodes numbered 1-9; the fringe evolves as each shallowest node is expanded:]
FRINGE = (1)
FRINGE = (2, 3)
FRINGE = (3, 4, 5)
FRINGE = (4, 5, 6, 7)
FRINGE = (5, 6, 7, 8)
FRINGE = (6, 7, 8)
FRINGE = (7, 8, 9)
FRINGE = (8, 9)
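Continuing the earlier tree-search sketch (and assuming the Node and expand helpers defined there), breadth-first search is simply the FIFO choice of fringe; a deque keeps removal from the front cheap:

```python
from collections import deque

def breadth_first_tree_search(problem):
    # FIFO fringe: expand the shallowest unexpanded node first.
    fringe = deque([Node(problem.initial)])
    while fringe:
        node = fringe.popleft()                # oldest (shallowest) node
        if problem.goal_test(node.state):
            return node.solution()
        fringe.extend(expand(node, problem))   # new nodes go to the back
    return None

# e.g. breadth_first_tree_search(RouteProblem('Arad', 'Bucharest'))
# -> ['go Sibiu', 'go Fagaras', 'go Bucharest'] for the road-map subset above
```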

Breadth-first search:
evaluation
Completeness:

Does it always find a solution if one exists?

YES
if the shallowest goal node is at some finite depth d
Condition: b is finite (the maximum number of successors of any node is finite)

Breadth-first search:
evaluation
Completeness:

YES (if b is finite)

Time complexity:
Assume a state space where every state has b successors.
The root has b successors, each node at the next level again has b successors (b^2 in total), …
Assume the solution is at depth d.
Worst case: expand all but the last node at depth d.
Total number of nodes generated:
1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1))

Breadth-first search:
evaluation
Completeness:

YES (if b is finite)

Time complexity:
Total number of nodes generated:
1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1))

Space complexity: O(b^(d+1))

Breadth-first search:
evaluation
Completeness:

YES (if b is finite)

Time complexity:
Total number of nodes generated:
1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1))

Space complexity: O(b^(d+1))
Optimality:
Does it always find the least-cost solution?
In general YES, unless actions have different costs.

Breadth-first search:
evaluation
Lessons:

Memory requirements are a bigger problem than execution time.
Exponential-complexity search problems cannot be solved by uninformed search methods for any but the smallest instances.

DEPTH   NODES     TIME           MEMORY
2       1,100     0.11 seconds   1 megabyte
4       111,100   11 seconds     106 megabytes
6       10^7      19 minutes     10 gigabytes
8       10^9      31 hours       1 terabyte
10      10^11     129 days       101 terabytes
12      10^13     35 years       10 petabytes
14      10^15     3,523 years    1 exabyte

Assumptions: b = 10; 10,000 nodes/sec; 1000 bytes/node

Uniform-cost search
Extension of BF-search:
Expand the node with the lowest path cost.

Implementation: fringe = queue ordered by path cost.

UC-search is the same as BF-search when all step costs are equal.

Uniform-cost search
Completeness:
YES, if every step cost ≥ ε (some small positive constant)

Time complexity:
Let C* be the cost of the optimal solution and assume that every action costs at least ε.
Worst case: O(b^(C*/ε))

Space complexity:
Same as the time complexity.

Optimality:
YES, if complete (nodes are expanded in order of increasing path cost).
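A sketch of uniform-cost search on top of the same hypothetical Node/expand helpers, using a binary heap ordered by path cost; as the slide notes, the goal test is applied when a node is removed from the fringe, which is what makes the strategy optimal:

```python
import heapq
from itertools import count

def uniform_cost_tree_search(problem):
    counter = count()                            # tie-breaker for equal costs
    fringe = [(0, next(counter), Node(problem.initial))]
    while fringe:
        cost, _, node = heapq.heappop(fringe)    # lowest path cost first
        if problem.goal_test(node.state):        # test on expansion, not generation
            return node.solution()
        for child in expand(node, problem):
            heapq.heappush(fringe, (child.path_cost, next(counter), child))
    return None
```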

Depth-First Strategy
Expand deepest unexpanded node
Implementation: fringe is a LIFO queue (= stack)

[Animation over a small tree with nodes numbered 1-5; the deepest node is expanded first, backtracking when a branch is exhausted:]
FRINGE = (1)
FRINGE = (2, 3)
FRINGE = (4, 5, 3)
[the remaining animation steps show the fringe as the search goes deeper and then backtracks]
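Depth-first search changes only the fringe discipline: last in, first out. A sketch reusing the earlier helpers (on a state space with loops, such as the road-map example, this tree-search version may never terminate, which is exactly the completeness issue evaluated next):

```python
def depth_first_tree_search(problem):
    # LIFO fringe (a stack): expand the deepest unexpanded node first.
    fringe = [Node(problem.initial)]
    while fringe:
        node = fringe.pop()                     # most recently added node
        if problem.goal_test(node.state):
            return node.solution()
        fringe.extend(expand(node, problem))
    return None
```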

Depth-first search: evaluation
Completeness:
Does it always find a solution if one exists?
NO, unless the search space is finite and no loops are possible.

Depth-first search: evaluation
Completeness:
NO, unless the search space is finite.

Time complexity: O(b^m)
Terrible if m is much larger than d (the depth of the optimal solution).
But if there are many solutions, it can be faster than BFS.

Depth-first search: evaluation
Completeness:
NO, unless the search space is finite.

Time complexity: O(b^m)
Space complexity: O(bm + 1)

Backtracking search uses even less memory:
one successor instead of all b.

Depth-first search: evaluation
Completeness:
NO, unless the search space is finite.

Time complexity: O(b^m)
Space complexity: O(bm + 1)
Optimality: No

Depth-Limited Strategy
Depth-first with depth cutoff k (the maximal depth below which nodes are not expanded)
Three possible outcomes:
Solution
Failure (no solution)
Cutoff (no solution within the cutoff)

Solves the infinite-path problem.
If k < d then incompleteness results.
If k > d then it is not optimal.
Time complexity: O(b^k)
Space complexity: O(bk)

Iterative Deepening Strategy
Repeat for k = 0, 1, 2, …:
Perform depth-first search with depth cutoff k

Complete
Optimal if step cost = 1
Time complexity:
(d+1)·1 + d·b + (d−1)·b^2 + … + 1·b^d = O(b^d)
Space complexity: O(bd)
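A recursive sketch of depth-limited search plus the iterative-deepening driver on top of it, again using the hypothetical Node/expand helpers; the string 'cutoff' distinguishes the third outcome listed for the depth-limited strategy:

```python
def depth_limited_search(problem, limit):
    """Returns a solution (list of actions), None (failure), or 'cutoff'."""
    def recurse(node, depth_left):
        if problem.goal_test(node.state):
            return node.solution()
        if depth_left == 0:
            return 'cutoff'                      # no solution within the cutoff
        cutoff_occurred = False
        for child in expand(node, problem):
            result = recurse(child, depth_left - 1)
            if result == 'cutoff':
                cutoff_occurred = True
            elif result is not None:
                return result
        return 'cutoff' if cutoff_occurred else None

    return recurse(Node(problem.initial), limit)

def iterative_deepening_search(problem, max_depth=50):
    # Repeat depth-limited search with cutoff k = 0, 1, 2, ...
    for k in range(max_depth + 1):
        result = depth_limited_search(problem, k)
        if result != 'cutoff':
            return result
    return None
```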

Comparison of Strategies
Breadth-first is complete and
optimal, but has high space
complexity
Depth-first is space efficient, but
neither complete nor optimal
Iterative deepening combines
benefits of DFS and BFS and is
asymptotically optimal

Bidirectional Strategy
2 fringe queues: FRINGE1 and FRINGE2

Time and space complexity = O(b^(d/2)) << O(b^d)

The predecessors of each node must be efficiently computable.

Summary of algorithms

Criterion   Breadth-First  Uniform-cost  Depth-First  Depth-limited  Iterative deepening  Bidirectional search
Complete?   YES*           YES*          NO           YES, if l ≥ d  YES                  YES*
Time        b^(d+1)        b^(C*/ε)      b^m          b^l            b^d                  b^(d/2)
Space       b^(d+1)        b^(C*/ε)      b^m          b^l            b^d                  b^(d/2)
Optimal?    YES*           YES*          NO           NO             YES                  YES

Repeated states
Failure to detect repeated states can turn a solvable problem into an unsolvable one.

Avoiding Repeated States

Requires comparing state descriptions

Breadth-first strategy:
Keep track of all generated states
If the state of a new node already exists, then discard the node

Avoiding Repeated States

Depth-first strategy:

Solution 1:
Keep track of all states associated with nodes in the current tree
If the state of a new node already exists, then discard the node
Avoids loops

Solution 2:
Keep track of all states generated so far
If the state of a new node has already been generated, then discard the node
Space complexity becomes that of breadth-first search

Graph search algorithm

Closed list stores all expanded nodes

function GRAPH-SEARCH(problem, fringe) return a solution or failure
  closed ← an empty set
  fringe ← INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node ← REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds
      then return SOLUTION(node)
    if STATE[node] is not in closed then
      add STATE[node] to closed
      fringe ← INSERT-ALL(EXPAND(node, problem), fringe)
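The same idea in Python, built on the earlier sketch: a closed set of expanded states ensures each state is expanded at most once (states must be hashable for this to work):

```python
from collections import deque

def breadth_first_graph_search(problem):
    closed = set()                              # states already expanded
    fringe = deque([Node(problem.initial)])     # FIFO fringe, as before
    while fringe:
        node = fringe.popleft()
        if problem.goal_test(node.state):
            return node.solution()
        if node.state not in closed:            # skip repeated states
            closed.add(node.state)
            fringe.extend(expand(node, problem))
    return None
```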

Graph search, evaluation

Optimality:
GRAPH-SEARCH discards newly discovered paths to states it has already expanded.
This may result in a sub-optimal solution.
YET: it remains optimal for uniform-cost search, and for BF-search with constant step costs.

Time and space complexity:
Proportional to the size of the state space (which may be much smaller than O(b^d)).

DF- and ID-search with a closed list no longer have linear space requirements, since all expanded states are stored in the closed list!!

Summary
Problem formulation: state space, initial state, successor function, goal test, path cost
Search tree ↔ state space
Evaluation of strategies: completeness, optimality, time and space complexity
Uninformed search strategies: breadth-first, depth-first, and variants
Avoiding repeated states
