
Uninformed Search


Solving problems by searching:

Uninformed Search
University of Zanjan
2
Outline

 Search Problems

 Uninformed Search Methods


 Depth-First Search
 Breadth-First Search
 Uniform-Cost Search

3
Problem-Solving Agents
 Problem Formulation: process of deciding what actions
and states to consider
 States of the world
 Actions as transitions between states

 Goal Formulation: process of deciding what the next goal to be sought will be

 Agent must find out how to act now and in the future to
reach a goal state
 Search: process of looking for a solution (a sequence of actions that reaches the goal starting from the initial state)

4
Problem-Solving Agents
 A goal-based agent adopts a goal and aims at satisfying it
(as a simple version of an intelligent agent maximizing a performance measure)

 “How does an intelligent system formulate its problem as a search problem”
 Goal formulation: specifying a goal (or a set of goals) that agent
must reach
 Problem formulation: abstraction (removing detail)
 Retaining validity and ensuring that the abstract actions are easy to
perform

5
Vacuum world state space graph

2 × 2^2 = 8 states

 States? dirt locations & robot location


 Actions? Left, Right, Suck
 Goal test? no dirt at all locations
 Path cost? one per action
6
Example: 8-puzzle

9!/2 = 181,440 states

 States? locations of eight tiles and blank in 9 squares


 Actions? move blank left, right, up, down (within the board)
 Goal test? e.g., the above goal state
 Path cost? one per move
7
Note: finding an optimal solution for the n-Puzzle family is NP-complete.
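To make this formulation concrete, here is a minimal Python sketch of a successor function for the 8-puzzle, assuming a state is a 9-tuple listing the tiles row by row with 0 for the blank (this representation and the function name are illustrative assumptions, not from the slides):

import itertools  # not required; shown only if you want to enumerate lazily

def successors(state):
    """Yield (action, next_state) pairs for an 8-puzzle state.

    state: tuple of 9 ints in row-major order, 0 marks the blank."""
    moves = {"up": -3, "down": +3, "left": -1, "right": +1}
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in moves.items():
        # Stay within the board: rows 0-2 and columns 0-2.
        if action == "up" and row == 0:
            continue
        if action == "down" and row == 2:
            continue
        if action == "left" and col == 0:
            continue
        if action == "right" and col == 2:
            continue
        target = blank + delta
        tiles = list(state)
        tiles[blank], tiles[target] = tiles[target], tiles[blank]
        yield action, tuple(tiles)

# Example: with the blank in the centre, all four moves are available.
print(list(successors((1, 2, 3, 4, 0, 5, 6, 7, 8))))

Each action has cost 1, matching "one per move" above.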
Example: 8-queens problem

64 × 63 × ⋯ × 57 ≈ 1.8 × 10^14 states

 Initial State? no queens on the board


 States? any arrangement of 0-8 queens on the board is a state
 Actions? add a queen to the state (any empty square)
 Goal test? 8 queens are on the board, none attacked
 Path cost? of no interest
search cost vs. solution path cost

8
Example: 8-queens problem
(other formulation)

2,057 States

 Initial state? no queens on the board


 States? any arrangement of k queens one per column in the leftmost k
columns with no queen attacking another
 Actions? add a queen to any square in the leftmost empty column such
that it is not attacked by any other queen
 Goal test? 8 queens are on the board
 Path cost? of no interest
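A minimal sketch of this incremental formulation in Python, under the assumption that a state is a tuple of row indices, one per already-filled column from the left (names are illustrative):

def attacks(row1, col1, row2, col2):
    # Queens attack along rows and diagonals (columns differ by construction).
    return row1 == row2 or abs(row1 - row2) == abs(col1 - col2)

def actions(state):
    """Rows in the leftmost empty column not attacked by any placed queen."""
    col = len(state)
    return [row for row in range(8)
            if not any(attacks(r, c, row, col) for c, r in enumerate(state))]

def result(state, row):
    return state + (row,)

def goal_test(state):
    return len(state) == 8   # non-attacking holds by construction

# Example: from the empty board, every row of column 0 is a legal action.
print(actions(()))           # [0, 1, 2, 3, 4, 5, 6, 7]

Because attacked squares are never generated, the reachable state space shrinks to the 2,057 states mentioned above.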
9
Example: Knuth problem
 Knuth Conjecture: Starting with 4, a sequence of factorial,
square root, and floor operations will reach any desired
positive integer.

 Example: ⌊√√√√√(4!)!⌋ = 5 (take 4!, then the factorial again, then five square roots, then the floor)

 States? Positive numbers


 Initial State? 4
 Actions? Factorial (for integers only), square root, floor
 Goal test? State is the objective positive number
 Path cost? Of no interest
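A sketch of the corresponding successor function, assuming states are plain Python numbers (integers stay exact, square roots become floats); the cap on factorial arguments is a practical assumption to keep the numbers manageable, not part of the problem:

import math

def successors(x):
    """Applicable Knuth operations from state x (a positive number)."""
    result = []
    if x == int(x) and x <= 100:                      # factorial only for (small) integers
        result.append(("factorial", math.factorial(int(x))))
    result.append(("sqrt", math.sqrt(x)))
    result.append(("floor", math.floor(x)))
    return result

print(successors(4))   # [('factorial', 24), ('sqrt', 2.0), ('floor', 4)]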
10
Search Problems Are Models

11
Search and Models
 Search operates over models of the world
 The agent doesn’t actually try all the plans out in the real
world!
 Planning is all “in simulation”
 Your search is only as good as your models…

12
Search Problems
 A search problem consists of:

 A state space

 A successor function (with actions and costs), e.g., (“N”, 1.0), (“E”, 1.0)

 A start state and a goal test

 A solution is a sequence of actions (a plan) which transforms the start state to a goal state

13
What’s in a State Space?

The world state includes every last detail of the environment

A search state keeps only the details needed for planning (abstraction)

 Problem: Pathing
  States: (x,y) location
  Actions: NSEW
  Successor: update location only
  Goal test: is (x,y)=END

 Problem: Eat-All-Dots
  States: {(x,y), dot booleans}
  Actions: NSEW
  Successor: update location and possibly a dot boolean
  Goal test: dots all false

14
State Space Sizes?
 World state:
 Agent positions: 120
 Food count: 30
 Ghost positions: 12
 Agent facing: NSEW

 How many
  World states? 120 × 2^30 × 12^2 × 4
  States for pathing? 120
  States for eat-all-dots? 120 × 2^30
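These counts are easy to verify; a quick sketch of the arithmetic (numbers as given on the slide):

world_states   = 120 * 2**30 * 12**2 * 4   # positions × food configs × two ghosts × facing
pathing_states = 120
eat_all_dots   = 120 * 2**30
print(world_states, pathing_states, eat_all_dots)
# 74217034874880 120 128849018880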

15
Quiz: Safe Passage

 Problem: eat all dots while keeping the ghosts perma-scared


 What does the state space have to specify?
 (agent position, dot booleans, power pellet booleans, remaining scared time)

16
State Space
 State space: set of all reachable states from initial state
 Initial state, actions, and transition model together define it

 It forms a directed graph


 Nodes: states
 Links: actions

 Constructing this graph on demand

17
State Space Graphs

 State space graph: A mathematical representation of a search problem
  Nodes are (abstracted) world configurations
  Arcs represent successors (action results)
  The goal test is a set of goal nodes (maybe only one)

 In a state space graph, each state occurs only once!

 We can rarely build this full graph in memory (it’s too big), but it’s a useful idea

18
State Space Graphs and Search Trees

20
Search Trees

[Figure: a search tree. The root is the start state (“this is now”); its children at depth 1, reached by actions “N” and “E” with cost 1.0 each, are the possible futures.]

 A search tree:
 A “what if” tree of plans and their outcomes
 The start state is the root node
 Children correspond to successors
 Nodes show states, but correspond to PLANS that achieve those states
 Nodes contain the problem state, parent, path length, depth, and cost
 For most problems, we can never actually build the whole tree

21
State Space Graphs vs. Search Trees

Each NODE in the search tree is an entire PATH in the state space graph.

[Figure: the example state space graph (states S, a, b, c, d, e, f, h, p, q, r, G) side by side with the search tree rooted at S that it induces.]

We construct both on demand, and we construct as little as possible.

22
Tree search algorithm
 Basic idea
 offline, simulated exploration of state space by generating successors of
already-explored states

function TREE-SEARCH( problem) returns a solution, or failure


initialize the frontier using the initial state of problem
loop do
if the frontier is empty then return failure
choose a leaf node and remove it from the frontier
if the node contains a goal state then return the corresponding solution
expand the chosen node, adding the resulting nodes to the frontier
Frontier: all leaf nodes available for expansion at any given point

Different data structures (e.g., FIFO, LIFO) for the frontier can cause different orders of node expansion and thus produce different search algorithms.
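As a rough illustration, here is a minimal Python rendering of TREE-SEARCH, assuming a problem object that exposes initial_state, is_goal(s), and successors(s) yielding (action, next_state) pairs (these names are assumptions, not a fixed API):

from collections import deque

def tree_search(problem, use_fifo=True):
    """Generic tree search; the frontier discipline decides the algorithm.

    use_fifo=True pops the oldest leaf (BFS-like), otherwise the newest (DFS-like)."""
    # Each frontier entry is (state, plan), where plan is the action sequence so far.
    frontier = deque([(problem.initial_state, [])])
    while frontier:
        state, plan = frontier.popleft() if use_fifo else frontier.pop()
        if problem.is_goal(state):
            return plan                       # solution: a sequence of actions
        for action, next_state in problem.successors(state):
            frontier.append((next_state, plan + [action]))
    return None                               # failure: frontier exhausted

Switching use_fifo is exactly the "different data structures for the frontier" point above.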

23
Example: Romania
 On holiday in Romania; currently in Arad.
 Flight leaves tomorrow from Bucharest
Map of Romania
 Initial state
 currently in Arad

 Formulate goal
 be in Bucharest

 Formulate problem
 states: various cities
 actions: drive between cities

 Solution
 sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
24
Tree search example

25
Tree search example

26
Tree search example

27
Searching with a Search Tree

 Search:
 Expand out potential plans (tree nodes)
 Maintain a frontier of partial plans under consideration
 Try to expand as few tree nodes as possible

28
Tree Search

29
Quiz: State Space Graphs vs. Search Trees

Consider this 4-state graph: How big is its search tree (from S)?

[Figure: a 4-state graph with start S and goal G.]

Important: Lots of repeated structure in the search tree!

30
Search Example: Romania

31
Graph Search
 Redundant paths in tree search: more than one way to get from
one state to another
 may be due to a bad problem definition or the essence of the problem
 can cause a tractable problem to become intractable

function GRAPH-SEARCH( problem) returns a solution, or failure


initialize the frontier using the initial state of problem
loop do
if the frontier is empty then return failure
choose a leaf node and remove it from the frontier
if the node contains a goal state then return the corresponding solution
add the node to the explored set
expand the chosen node, adding the resulting nodes to the frontier
only if not in the frontier or explored set
explored set: remembers every expanded node
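A minimal Python sketch of GRAPH-SEARCH along the same lines (same assumed problem interface as the tree-search sketch above; states are assumed hashable so they can go in a set):

from collections import deque

def graph_search(problem, use_fifo=True):
    """Tree search plus an explored set, so each state is expanded at most once."""
    frontier = deque([(problem.initial_state, [])])
    in_frontier = {problem.initial_state}
    explored = set()
    while frontier:
        state, plan = frontier.popleft() if use_fifo else frontier.pop()
        in_frontier.discard(state)
        if problem.is_goal(state):
            return plan
        explored.add(state)
        for action, next_state in problem.successors(state):
            # Add only if not already in the frontier or the explored set.
            if next_state not in explored and next_state not in in_frontier:
                frontier.append((next_state, plan + [action]))
                in_frontier.add(next_state)
    return None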

32
Graph Search
 Example: rectangular grid

explored
frontier

33
Search for 8-puzzle Problem

Start Goal

34 Taken from: http://iis.kaist.ac.kr/es/


Implementation: states vs. nodes
 A state is a (representation of) a physical configuration

 A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(n), and depth
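For illustration, such a node could be rendered as a Python dataclass; a sketch whose field names follow the slide, with a path() helper added for reconstructing the plan:

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the (abstracted) world configuration
    parent: Optional["Node"] = None  # node this one was generated from
    action: Any = None               # action applied to the parent
    path_cost: float = 0.0           # g(n): cost from the root to this node
    depth: int = 0                   # number of actions from the root

    def path(self):
        """Actions from the root to this node (the plan it represents)."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))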

35
General Tree Search
 Important ideas:
 frontier
 Expansion
 Exploration strategy

 Main question: which frontier nodes to explore?

36
Uninformed (blind) search strategies
 No additional information beyond the problem definition
 Breadth-First Search (BFS)
 Uniform-Cost Search (UCS)
 Depth-First Search (DFS)
 Depth-Limited Search (DLS)
 Iterative Deepening Search (IDS)

37
Example: Tree Search

[Figure: the example state space graph with states S, a, b, c, d, e, f, h, p, q, r and goal G.]

38
Breadth-First Search

39
Breadth-First Search

Strategy: expand the shallowest node first

Implementation: the frontier is a FIFO queue

[Figure: on the example graph, BFS builds the search tree in tiers: S; then d, e, p; then b, c, e, h, r, q; and so on until G is reached.]
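A minimal BFS sketch with a FIFO frontier and a set of reached states (graph-search flavour; same assumed problem interface as the earlier sketches):

from collections import deque

def breadth_first_search(problem):
    """Expand the shallowest node first: the frontier is a FIFO queue."""
    start = problem.initial_state
    if problem.is_goal(start):
        return []
    frontier = deque([(start, [])])
    reached = {start}                        # states already generated
    while frontier:
        state, plan = frontier.popleft()     # oldest (shallowest) node first
        for action, next_state in problem.successors(state):
            if next_state not in reached:
                if problem.is_goal(next_state):
                    return plan + [action]   # BFS can test the goal at generation time
                reached.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None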

40
Search Algorithm Properties

 Complete: Guaranteed to find a solution if one exists?
 Optimal: Guaranteed to find the least-cost path?
 Time complexity?
 Space complexity?

 Cartoon of search tree (1 node at the root, then b nodes, b^2 nodes, …, b^m nodes at the deepest of m tiers):
  b is the branching factor
  m is the maximum depth
  d is the depth of the shallowest goal
  solutions exist at various depths

41
Breadth-First Search (BFS) Properties

 What nodes does BFS expand?
  Processes all nodes above the shallowest solution
  Let the depth of the shallowest solution be d (so BFS searches d tiers: 1 node, b nodes, b^2 nodes, …, b^d nodes)

 How much space does the frontier take?

 Is it complete?

 Is it optimal?

42
Properties of breadth-first search
 Complete?
 Yes (for finite 𝑏 and 𝑑)

 Time
  b + b^2 + b^3 + ⋯ + b^d = O(b^d) total number of generated nodes
  goal test has been applied to each node when it is generated

 Space
  explored set: O(b^(d−1)), frontier: O(b^d), so O(b^d) overall


 Tree search does not save much space over graph search, while it may cost much more time

 Optimal?
 Yes, if path cost is a non-decreasing function of d
 e.g. all actions having the same cost

43
Properties of breadth-first search
 Space complexity is a bigger problem than time complexity
 Time is also prohibitive
 Exponential-complexity search problems cannot be solved by
uninformed methods (only the smallest instances)

Assuming 1 million nodes/second and 1 KB/node, with b = 10:

d    Time         Memory
6    1.1 seconds  1 gigabyte
8    2 minutes    103 gigabytes
10   3 hours      10 terabytes
12   13 days      1 petabyte
14   3.5 years    99 petabytes
16   350 years    10 exabytes
44
Depth-First Search

45
Depth-First Search

Strategy: expand the deepest node first

Implementation: the frontier is a LIFO stack

[Figure: on the example graph, DFS dives down the leftmost branch of the search tree, backtracking only when a branch is exhausted.]
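A minimal DFS sketch with a LIFO frontier (tree-search version with no cycle checking, so it matches the completeness discussion that follows; same assumed problem interface):

def depth_first_search(problem):
    """Expand the deepest node first: the frontier is a LIFO stack."""
    frontier = [(problem.initial_state, [])]
    while frontier:
        state, plan = frontier.pop()            # newest (deepest) node first
        if problem.is_goal(state):
            return plan
        for action, next_state in problem.successors(state):
            frontier.append((next_state, plan + [action]))
    return None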

46
Search Algorithm Properties

47
Depth-First Search (DFS) Properties

 What nodes does DFS expand?
  Some left prefix of the tree; it could process the whole tree!
  If m is finite, takes time O(b^m)

 How much space does the frontier take?
  Only siblings on the path to the root, so O(bm)

 Is it complete?
  m could be infinite, so only if we prevent cycles (more later)

 Is it optimal?
  No, it finds the “leftmost” solution, regardless of depth or cost

48
Properties of DFS
 Complete?
 Not complete (repeated states & redundant paths)

 Time
  O(b^m): terrible if m is much larger than d
  In the tree-search version, m can be much larger than the size of the state space

 Space
  O(bm), i.e., linear space complexity for tree search
  This is why depth-first tree search serves as the basis of many areas of AI
  A recursive version, called backtracking search, can be implemented in O(m) space

 Optimal?
 No
DFS: tree-search version

49
Video of Demo Maze Water DFS/BFS (part 1)

50
Video of Demo Maze Water DFS/BFS (part 2)

51
Quiz: DFS vs BFS

52
Quiz: DFS vs BFS

 When will BFS outperform DFS?

 When will DFS outperform BFS?

53
Depth Limited Search
 Depth-first search with depth limit 𝑙 (nodes at depth 𝑙 have no successors)
 Solves the infinite-path problem
 In some problems (e.g., route finding), using knowledge of problem to specify 𝑙

 Complete?
  If l ≥ d, it is complete

 Time
  O(b^l)

 Space
  O(bl)

 Optimal?
 No
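A recursive sketch of depth-limited search, distinguishing failure (no solution anywhere) from cutoff (the depth limit was hit), with the same assumed problem interface as the earlier sketches:

CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(problem, limit):
    """DFS that treats nodes at depth `limit` as having no successors."""
    def recurse(state, plan, depth):
        if problem.is_goal(state):
            return plan
        if depth == limit:
            return CUTOFF
        cutoff_seen = False
        for action, next_state in problem.successors(state):
            result = recurse(next_state, plan + [action], depth + 1)
            if result == CUTOFF:
                cutoff_seen = True
            elif result != FAILURE:
                return result
        return CUTOFF if cutoff_seen else FAILURE
    return recurse(problem.initial_state, [], 0)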

54
Iterative Deepening

 Idea: get DFS’s space advantage with BFS’s time / shallow-solution advantages

 Run a DFS with depth limit 1. If no solution…
 Run a DFS with depth limit 2. If no solution…
 Run a DFS with depth limit 3. …..

 Isn’t that wastefully redundant?
  Generally most work happens in the lowest level searched, so not so bad!
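Iterative deepening is then just repeated depth-limited search with a growing limit; a sketch that reuses the depth_limited_search helper (and its CUTOFF/FAILURE markers) from the sketch above:

from itertools import count

def iterative_deepening_search(problem):
    """Run depth-limited search with limits 0, 1, 2, ... until a solution appears."""
    for limit in count(0):
        result = depth_limited_search(problem, limit)
        if result != CUTOFF:
            # Either a plan was found, or the whole (finite) tree was searched.
            return None if result == FAILURE else result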

55
IDS: Example l =0

56
IDS: Example l =1

57
IDS: Example l =2

58
IDS: Example l =3

59
Iterative Deepening Search (IDS)

 Combines benefits of DFS & BFS


 DFS: low memory requirement
 BFS: completeness & also optimality for special path cost functions

 Not so wasteful (most of the nodes are in the bottom level)

60
Properties of iterative deepening search
 Complete?
 Yes (for finite 𝑏 and 𝑑)

 Time
 d × b^1 + (d − 1) × b^2 + ⋯ + 2 × b^(d−1) + 1 × b^d = O(b^d)

 Space
 𝑂(𝑏𝑑)

 Optimal?
 Yes, if path cost is a non-decreasing function of the node depth

 IDS is the preferred method when the search space is large and the depth of the solution is unknown

61
Iterative deepening search
 Number of nodes generated to depth d:
  N_IDS = d × b^1 + (d − 1) × b^2 + … + 2 × b^(d−1) + 1 × b^d = O(b^d)

 For b = 10, d = 5, the numbers of generated nodes are:
  N_BFS = 10 + 100 + 1,000 + 10,000 + 100,000 = 111,110
  N_IDS = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
  Overhead of IDS = (123,450 − 111,110)/111,110 ≈ 11%
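A quick check of these numbers (b = 10, d = 5):

b, d = 10, 5
n_bfs = sum(b**i for i in range(1, d + 1))
n_ids = sum((d - i + 1) * b**i for i in range(1, d + 1))
print(n_bfs, n_ids, round(100 * (n_ids - n_bfs) / n_bfs))   # 111110 123450 11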

62
Cost-Sensitive Search

[Figure: the example graph again, now with a step cost attached to each action (e.g., START→d costs 3, START→e costs 9, START→p costs 1).]

BFS finds the shortest path in terms of the number of actions.
It does not find the least-cost path. We will now cover a similar algorithm which does find the least-cost path.

63
Uniform Cost Search

64
Uniform Cost Search

Strategy: expand the cheapest node first

Implementation: the frontier is a priority queue (priority: cumulative cost)

[Figure: on the example graph with step costs, UCS expands nodes in order of cumulative cost g(n): S (0), then p (1), d (3), b (4), e (5), and so on; cost contours replace BFS’s depth tiers.]

65
Uniform-Cost Search (UCS)
 Expand node n (in the frontier) with the lowest path cost g(n)
  An extension of BFS that is appropriate for any step-cost function

 Implementation: a priority queue (ordered by path cost) for the frontier

 Equivalent to breadth-first if all step costs are equal


 Two differences
 Goal test is applied when a node is selected for expansion
 A test is added when a better path is found to a node currently on the frontier

80 + 97 + 101 < 99 + 211 (in the Romania example: Sibiu to Bucharest via Rimnicu Vilcea and Pitesti beats the route via Fagaras)
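A minimal UCS sketch using Python's heapq as the priority queue, showing the late goal test and the "better path to a frontier node" update mentioned above; here successors(s) is assumed to yield (action, next_state, step_cost) triples (an assumption, not a fixed API):

import heapq
from itertools import count

def uniform_cost_search(problem):
    """Expand the frontier node with the lowest path cost g(n)."""
    start = problem.initial_state
    tie = count()                              # tie-breaker so the heap never compares states
    frontier = [(0.0, next(tie), start, [])]   # (g, tie, state, plan), ordered by g
    best_g = {start: 0.0}                      # cheapest known cost to each state
    explored = set()
    while frontier:
        g, _, state, plan = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                           # stale entry: a cheaper path was found later
        if problem.is_goal(state):             # goal test on expansion, not on generation
            return plan, g
        explored.add(state)
        for action, next_state, step_cost in problem.successors(state):
            new_g = g + step_cost
            if next_state not in explored and new_g < best_g.get(next_state, float("inf")):
                best_g[next_state] = new_g
                heapq.heappush(frontier, (new_g, next(tie), next_state, plan + [action]))
    return None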

66
Uniform Cost Search (UCS) Properties
 What nodes does UCS expand?
  Processes all nodes with cost less than the cheapest solution!
  If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε
  Takes time O(b^(C*/ε)) (exponential in the effective depth); the search proceeds through roughly C*/ε cost “tiers” (contours c1, c2, c3, …)

 How much space does the frontier take?
  Has roughly the last tier, so O(b^(C*/ε))

 Is it complete?
  Assuming the best solution has a finite cost and the minimum arc cost is positive, yes!

 Is it optimal?
  Yes!

67
Uniform-cost search (proof of optimality)
 Lemma: If UCS selects a node 𝑛 for expansion, the optimal
solution to that node has been found.

Proof by contradiction: otherwise another frontier node n′ must exist on the optimal path from the initial node to n (using the graph separation property). Moreover, by the definition of path cost (due to non-negative step costs, paths never get shorter as nodes are added), we have g(n′) ≤ g(n), and thus n′ would have been selected first.

⇒ Nodes are expanded in order of their optimal path cost.

68
Properties of uniform-cost search
 Complete?
 Yes, if step cost ≥ 𝜀 > 0 (to avoid infinite sequence of zero-cost
actions)

 Time
  Number of nodes with g ≤ cost of the optimal solution: O(b^(1+C*/ε)), where C* is the optimal solution cost
  O(b^(d+1)) when all step costs are equal

 Space
  Number of nodes with g ≤ cost of the optimal solution: O(b^(1+C*/ε))

 Optimal?
  Yes – nodes are expanded in increasing order of g(n)

 Difficulty: many long paths of actions may exist with cost ≤ C*


69
Uniform Cost Issues

 Remember: UCS explores increasing cost contours (c1, c2, c3, …)

 The good: UCS is complete and optimal!

 The bad:
 Explores options in every “direction”
 No information about goal location

Start Goal
 We’ll fix that soon!

70
The One Queue

 All these search algorithms are the same except for frontier strategies
 Conceptually, all frontiers are priority
queues (i.e. collections of nodes with
attached priorities)
 Practically, for DFS and BFS, you can
avoid the log(n) overhead from an
actual priority queue, by using stacks
and queues
 Can even code one implementation
that takes a variable queuing object
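A sketch of that "one implementation" idea: a single generic search whose frontier is conceptually a priority queue, with the priority function supplied as a parameter (the problem interface and successors(s) yielding (action, next_state, step_cost) are assumptions carried over from the earlier sketches):

import heapq

def generic_search(problem, priority):
    """priority(g, depth, tie) decides the strategy; smaller values are expanded first."""
    tie = 0
    frontier = [(priority(0.0, 0, tie), 0.0, 0, tie, problem.initial_state, [])]
    while frontier:
        _, g, depth, _, state, plan = heapq.heappop(frontier)
        if problem.is_goal(state):
            return plan
        for action, next_state, step_cost in problem.successors(state):
            tie += 1                      # unique counter: heap never compares states
            new_g, new_depth = g + step_cost, depth + 1
            heapq.heappush(frontier,
                           (priority(new_g, new_depth, tie), new_g, new_depth, tie,
                            next_state, plan + [action]))
    return None

# Usage (illustrative):
#   bfs = lambda g, depth, tie: depth     # shallowest first
#   dfs = lambda g, depth, tie: -tie      # most recently generated first
#   ucs = lambda g, depth, tie: g         # cheapest first
#   plan = generic_search(problem, ucs)

As the slide notes, in practice DFS and BFS would use a plain stack or queue to avoid the heap's log(n) overhead; the heap version is only meant to show the shared structure.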

71
Bidirectional search
 Simultaneous forward and backward search (hoping that they
meet in the middle)
 Idea: b^(d/2) + b^(d/2) is much less than b^d
 “Do the frontiers of two searches intersect?” instead of goal test
 First solution may not be optimal

 Implementation
 Hash table for the frontier of one of the two searches (to check intersection quickly)
 Space requirement: most significant weakness
 Computing predecessors?
 May be difficult
 List of goals? a new dummy goal
 Abstract goal (checkmate)?!
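A minimal sketch of bidirectional breadth-first search for illustration, assuming the problem also exposes a goal_state and a predecessors(s) function with the same (action, state) interface as successors (computing predecessors is exactly the difficulty noted above); it returns only the meeting state, since reconstructing the full path would additionally need parent pointers on both sides:

from collections import deque

def bidirectional_bfs(problem):
    """Alternate BFS from the start and backward BFS from the goal;
    stop as soon as the two frontiers intersect."""
    start, goal = problem.initial_state, problem.goal_state
    if start == goal:
        return start
    front_f, front_b = deque([start]), deque([goal])
    seen_f, seen_b = {start}, {goal}
    while front_f and front_b:
        # Expand one full layer of the smaller frontier (keeps the work balanced).
        if len(front_f) <= len(front_b):
            frontier, seen, other_seen, expand = front_f, seen_f, seen_b, problem.successors
        else:
            frontier, seen, other_seen, expand = front_b, seen_b, seen_f, problem.predecessors
        for _ in range(len(frontier)):
            state = frontier.popleft()
            for _, next_state in expand(state):
                if next_state in other_seen:
                    return next_state          # the two searches have met
                if next_state not in seen:
                    seen.add(next_state)
                    frontier.append(next_state)
    return None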

72
Summary of algorithms (tree search)

Criterion     BFS        UCS                DFS      DLS      IDS      Bidirectional
Complete?     Yes[a]     Yes[a,b]           No       No       Yes[a]   Yes[a,d]
Time          O(b^d)     O(b^(1+C*/ε))      O(b^m)   O(b^l)   O(b^d)   O(b^(d/2))
Space         O(b^d)     O(b^(1+C*/ε))      O(bm)    O(bl)    O(bd)    O(b^(d/2))
Optimal?      Yes[c]     Yes                No       No       Yes[c]   Yes[c,d]

[a] Complete if b is finite
[b] Complete if step cost ≥ ε > 0
[c] Optimal if step costs are equal
[d] If both directions use BFS

73
