
Lecture 3: Solving Problems by Searching


Solving Problems by Searching

Lecture-3

Abu Saleh Musa Miah


Outline
• Problem-solving agents
• Problem types
• Problem formulation
• Example problems
• Basic search algorithms
 Breadth-first search
 Depth-first search
 Uniform cost search
 Depth-first iterative deepening
 Example problems revisited
Problem Solving Agent

 One kind of goal-based agent is called a problem-solving agent


 Goal-based agents that use more advanced structured
representations are usually called planning agents.
 Imagine an agent in the city of Arad, Romania, enjoying a touring
holiday.
 Problem formulation is the process of deciding what actions
and states to consider, given a goal
Problem Solving Agent

 To build a system to solve a particular problem, we need to do four things:
1. Define the problem precisely: specify the initial situation(s) and the final situations that constitute acceptable solutions.
2. Analyze the problem. A few very important features can have
an immense impact on the appropriateness of various possible
techniques for solving the problem
3. Isolate & represent the task knowledge that is necessary to
solve the problem
4. Choose the best problem-solving technique(s) & apply it
(them) to the particular problem
Basic concepts
• State: a finite representation of the world that you want to explore at a given time.
A problem can be defined formally by four components
 An initial state is the description of the starting configuration of the agent/problem at the beginning.
 An action or an Operator: a function that transforms a state into
another (also called rule, transition, successor function, production,
action).
 Goal state: desired end state (can be several)/ test to determine if the
goal has been reached.
 Path Cost: The cost of a plan is referred to as the path cost
The path cost is a positive number, and a common path cost may be
the sum of the costs of the steps in the path.
Search Problem
The search problem is to find a sequence of actions
which transforms the agent from the initial state to a
goal state g∈G. A search problem is represented by a
4-tuple {S, s0, A, G}.
S: set of states
s0 ∈ S : initial state
A: S → S, actions that transform one state into another
G : goal, a set of states. G ⊆ S
Search Problem
 The sequence of actions is called a solution plan.
 Solution path is a path from the initial state to a goal
state.
 P = {a0, a1, …, aN}, which leads to traversing the states {s0, s1, …, sN+1}, with sN+1 ∈ G.
 A sequence of states is called a path.
 The cost of a path is a positive number.
 In many cases the path cost is computed by taking the
sum of the costs of each action.
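For illustration only (not part of the original slides), the 4-tuple {S, s0, A, G} can be written down as a small Python structure; the names below are assumptions, not a standard API:

# A minimal sketch of the 4-tuple {S, s0, A, G}; field names are illustrative.
from dataclasses import dataclass
from typing import Callable, Hashable, Iterable, Tuple

@dataclass
class SearchProblem:
    initial_state: Hashable                                           # s0 in S
    successors: Callable[[Hashable], Iterable[Tuple[str, Hashable]]]  # A: state -> (action, next state) pairs
    is_goal: Callable[[Hashable], bool]                               # membership test for G, a subset of S
    step_cost: Callable[[Hashable, str, Hashable], float]             # c(x, a, y) >= 0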
Production Systems
• Since search forms the core of many intelligent processes, it is useful to structure AI programs in a way that facilitates describing & performing the search process.
• A production system consists of:
- A set of rules, each consisting of a left side (a pattern) that determines the
applicability of the rule & a right side that describes the operation to be
performed if the rule is applied
- One or more knowledge/databases that contain whatever information is
appropriate for the particular task. Some parts of the database may be
permanent, while other parts of it may pertain only to the solution of the
current problem. The information in these databases may be structured in
any appropriate way.
- A control strategy that specifies the order in which the rules will be compared
to the database & a way of resolving the conflicts that arise when several
rules match at once
- A rule applier
Problem formulation / Searching process
The formulation of the problem of getting to Bucharest in terms of the initial state, actions, transition model, goal test, and path cost seems reasonable, but it is still a model (an abstract mathematical description), not the real thing.
The generic searching process can be very simply described in terms
of the following steps:
Do until a solution is found or the state space is
exhausted.
1. Check the current state
2. Execute allowable actions to find the successor states.
3. Pick one of the new states.
4. Check if the new state is a solution state
If it is not, the new state becomes the current state and the
process is repeated
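As a sketch only, the do-until loop above can be rendered in Python roughly as follows; problem is assumed to be a SearchProblem like the one sketched earlier, and the order in which new states are picked is left unspecified (here it happens to behave depth-first):

# A hypothetical rendering of the generic searching loop described above.
def generic_search(problem):
    frontier = [problem.initial_state]                    # states waiting to be examined
    seen = {problem.initial_state}
    while frontier:                                       # until the state space is exhausted
        state = frontier.pop()                            # 1. take the current state
        if problem.is_goal(state):                        # 4. check if it is a solution state
            return state
        for _, next_state in problem.successors(state):   # 2. find the successor states
            if next_state not in seen:                    # 3. new states become candidates
                seen.add(next_state)
                frontier.append(next_state)
    return None                                           # state space exhausted, no solution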
Problem-solving agents
• Restricted form of general agent:

Note: this is offline problem solving; the solution is executed "eyes closed."
Online problem solving involves acting without complete knowledge.
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest

Formulate goal:
• be in Bucharest
Formulate problem:
• states: various cities
• actions: drive between cities

Find solution:
• sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Example: Romania
Problem types
• Deterministic, fully observable ⇒ single-state problem
– Agent knows exactly which state it will be in; solution is a sequence
• Non-observable ⇒ sensorless problem (conformant problem)
– Agent may have no idea where it is; solution is a sequence
• Nondeterministic and/or partially observable ⇒ contingency problem
– percepts provide new information about current state
– solution is a contingent plan or a policy
– often interleave search, execution
• Unknown state space ⇒ exploration problem ("online")
Example: vacuum world
• Single-state, start in #5. Solution??
[Right; Suck]
• Conformant, start in {1; 2; 3; 4; 5; 6; 7; 8}
e.g., Right goes to {2; 4; 6; 8}. Solution??
[Right; Suck; Left; Suck]
• Contingency, start in #5
Murphy's Law: Suck can dirty a clean carpet
Local sensing: dirt, location only. Solution??
[Right; if dirt then Suck]
Single-state problem formulation
• A problem is defined by four items:
initial state e.g., "at Arad"
successor function S(x) = set of action-state pairs
• e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
goal test, can be
• explicit, e.g., x = "at Bucharest"
• implicit, e.g., NoDirt(x)
path cost (additive)
• e.g., sum of distances, number of actions executed, etc.
• c(x, a, y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions
• leading from the initial state to a goal state
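A hedged sketch of this formulation in Python, using a fragment of the usual Romania road map; the distances are the familiar textbook values and are shown for illustration only:

# A fragment of the Romania example as a successor function and step cost.
road_map = {
    'Arad':           {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Sibiu':          {'Arad': 140, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Fagaras':        {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97},
    'Pitesti':        {'Rimnicu Vilcea': 97, 'Bucharest': 101},
}

def successors(city):
    # S(x): set of (action, next state) pairs, e.g. S(Arad) contains ('go Zerind', 'Zerind')
    return [(f'go {nxt}', nxt) for nxt in road_map.get(city, {})]

def step_cost(x, action, y):
    # c(x, a, y): here simply the road distance between adjacent cities
    return road_map[x][y]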
Selecting a state space
• The real world is absurdly complex ⇒ the state space must be abstracted for problem solving
• (Abstract) state = set of real states
• (Abstract) action = complex combination of real actions
e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
• For guaranteed reliability, any real state "in Arad" must get to some real state "in Zerind"
• (Abstract) solution =
set of real paths that are solutions in the real world
• Each abstract action should be “easier” than the
original problem
Vacuum world state space graph
[Figure: the eight vacuum-world states, numbered 1–8]

• states??: integer dirt and robot locations (ignore dirt amounts etc.)
• actions??: Left, Right, Suck, NoOp
• goal test??: no dirt
• path cost??: 1 per action (0 for NoOp)
Example: The 8-puzzle

• states??: integer locations of tiles (ignore intermediate positions)
• actions??: move blank left, right, up, down (ignore unjamming etc.)
• Transition model: given a state and an action, returns the resulting state
• goal test??: = goal state (given)
• path cost??: 1 per move
• [Note: optimal solution of n-Puzzle family is NP-hard]
8-puzzle

 This family of sliding-block puzzles is NP-complete, so one does not expect to find methods significantly better in the worst case than the search algorithms described in this chapter and the next.
 The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved.
 The 15-puzzle (on a 4×4 board) has around 1.3 trillion states, and
random instances can be solved optimally in a few milliseconds by
the best search algorithms.
 The 24-puzzle (on a 5 × 5 board) has around 10^25 states, and random instances take several hours to solve optimally.
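Two of the counts quoted above can be checked quickly (half of all tile permutations are reachable for the n-puzzle family); this snippet is only a sanity check, not part of the original slides:

# Reachable-state counts for the 8-puzzle and the 24-puzzle.
from math import factorial

print(factorial(9) // 2)     # 181440 reachable states for the 8-puzzle
print(factorial(25) // 2)    # about 7.8e24, i.e. on the order of 10^25 for the 24-puzzle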
Problem Definition - Example, 8 puzzle

A small portion of the state space of 8-puzzle is shown below. Note that
we do not need to generate all the states before the search begins. The
states can be generated when required.
Figure 8.2: Breadth-First Search of the Eight-Puzzle ((c) 2000-2002 SNU CSE Biointelligence)
8-queens problem
 Place eight queens on a chessboard such that no
queen attacks any other.

 A queen attacks any piece in the same row, column, or diagonal. Figure 3.5 shows an attempted solution that fails: the queen in the rightmost column is attacked by the queen at the top left.

 There are two main kinds of formulation.

 An incremental formulation involves operators that augment the state description, starting with an empty state; for the 8-queens problem, this means that each action adds a queen to the state.

 A complete-state formulation starts with all 8 queens on the board and moves them around.
8-queens problem
 The first incremental formulation one might try is the following:

 States: Any arrangement of 0 to 8 queens on the board is a state.

 Initial state: No queens on the board.

 Actions: Add a queen to any empty square.

 Transition model: Returns the board with a queen added to the specified
square.
 Goal test: 8 queens are on the board, none attacked.

In this formulation, we have 64 · 63 · · · 57 ≈ 1.8 × 10^14 possible sequences to investigate. A better formulation would prohibit placing a queen in any square that is already attacked:
 States: All possible arrangements of n queens (0 ≤ n ≤ 8), one per column in
the leftmost n columns, with no queen attacking another.
 Actions: Add a queen to any square in the leftmost empty column such that it is
not attacked by any other queen.
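A small, illustrative sketch of this improved incremental formulation in Python, where a state is a tuple of queen rows, one entry per filled column (the names and 0-based board indexing are assumptions):

# Improved incremental 8-queens formulation: one queen per column, no attacks.
def attacked(state, row):
    col = len(state)                          # column where the new queen would go
    for c, r in enumerate(state):
        if r == row or abs(r - row) == abs(c - col):
            return True                       # same row or same diagonal
    return False

def actions(state):
    # add a queen to any non-attacked square in the leftmost empty column
    return [row for row in range(8) if not attacked(state, row)]

def goal_test(state):
    return len(state) == 8                    # 8 queens placed; none attack, by construction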
Example: robotic assembly

• states??: real-valued coordinates of robot joint angles and of the parts of the object to be assembled
• actions??: continuous motions of robot joints
• goal test??: complete assembly with no robot included!
• path cost??: time to execute
Tree search algorithms
• Basic idea:
• offline, simulated exploration of state space by
generating successors of already-explored
states (i.e., expanding states)
Tree search example
Implementation: states vs. nodes
• A state is a (representation of) a physical configuration
• A node is a data structure constituting part of a search tree
includes state, parent node, action, path cost g(x), depth
States do not have parents, children, depth, or path cost!

• The EXPAND function creates new nodes, filling in the various fields &
using the SUCCESSORFN of the problem to create the corresponding
states.
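A minimal sketch of such a node structure and an EXPAND-style function in Python; the field and function names are illustrative, not a fixed API, and the successor/step-cost conventions follow the earlier SearchProblem sketch:

# A node holds state, parent, action, path cost g(x), and depth.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost                        # g(x)
        self.depth = 0 if parent is None else parent.depth + 1

def expand(node, problem):
    # create child nodes using the problem's successor function and step cost
    return [Node(s, node, a, node.path_cost + problem.step_cost(node.state, a, s))
            for a, s in problem.successors(node.state)]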
Implementation: general tree search
Search strategies
• A search strategy is defined by picking the order of node expansion
• Strategies are evaluated along the following dimensions:
Completeness--does it always find a solution if one exists?
Time complexity--number of nodes generated/expanded
Space complexity--maximum number of nodes in memory
Optimality--does it always find a least-cost solution?
Time and space complexity are measured in terms of
• b--maximum branching factor of the search tree
• d--depth of the least-cost solution
• m--maximum depth of the state space (may be ∞)
State space problem formulation: searching algorithms for a goal-based agent

Searching algorithms

Uninformed search:
1. Breadth-first search algorithm
2. Depth-first search algorithm

Informed search:
1. Greedy search algorithm
2. A* algorithm
3. Iterative deepening A* (IDA*)
4. Memory-bounded A* (SMA*)
Uninformed Search/Blind Search Control Strategy
 Uninformed search strategies use only the information
available in the problem definition

 Example:
 Breadth First Search(BFS),
 Depth First Search(DFS),
 Depth Limited Search (DLS)
 Iterative Deepening Search(IDS)
Breadth-first search
 Breadth-first search is a simple strategy in which the root node is expanded first,
then all the successors of the root node are expanded next, then their successors,
and so on.
 In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next level are expanded.
Breadth-first search
• Expand shallowest unexpanded node
• Implementation: fringe is a FIFO queue, i.e., new successors go at the end
[Figure: expansion steps 1–4 of breadth-first search on a small example tree]


BFS: an Example
 A breadth-first search (BFS) explores nodes nearest the root before exploring nodes further away
 For example, after searching A, then B, then C, the search proceeds with D, E, F, G
 Nodes are explored in the order A B C D E F G H I J K L M N O P Q
 J will be found before N
[Figure: example tree with root A; children B, C; then D–G, H–K, and L–Q]

How to do breadth-first searching
Put the root node on a queue;
while (queue is not empty) {
remove a node from the queue;
if (node is a goal node) return success;
put all children of node onto the queue;
}
return failure;
Just before starting to explore level n, the queue holds
all the nodes at level n-1
In a typical tree, the number of nodes at each level
increases exponentially with the depth
Memory requirements may be infeasible
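A runnable Python version of the queue-based loop above; it assumes a successors(state) function returning (action, child) pairs, as in the earlier Romania sketch, and keeps whole paths so that the solution route can be returned:

# Breadth-first search: the fringe is a FIFO queue, goal test on removal.
from collections import deque

def bfs(start, is_goal, successors):
    frontier = deque([[start]])              # FIFO queue of paths, not just states
    explored = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path                      # shallowest solution found first
        for _, child in successors(state):
            if child not in explored:
                explored.add(child)
                frontier.append(path + [child])
    return None                              # failure: queue exhausted

For example, bfs('Arad', lambda c: c == 'Bucharest', successors) would return the route with the fewest steps from Arad to Bucharest on the sketched map.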

Breadth First Search
Homework for the BFS algorithm
Properties of breadth-first search
• Complete?? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1)), i.e., exponential in d
• Space? O(b^(d+1)) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step); not optimal in general

 Space is the big problem; can easily generate nodes at 100 MB/sec, so 24 hrs = 8640 GB.
Pros of BFS
 BFS is a systematic search strategy- all nodes at level n are
considered before going to n+1 th level.
If there is a solution, then BFS is guaranteed to find it.
Furthermore, if there are multiple solutions, then a minimal solution
(minimum # of steps) will be found.
This is guaranteed by the fact that longer paths are never explored until
all shorter ones have already been examined
This contrasts with DFS, which may find a long path to a solution in one part of the tree when a shorter one exists in some other, unexplored part of the tree.

Disadvantages of BFS:
1. All nodes at every level must be generated, so even unwanted nodes have to be remembered; this wastes memory.
2. Time and space complexity are both exponential, which is the main hurdle.
Uniform-cost search
• Expand least-cost unexpanded node
• Implementation:
• fringe = queue ordered by path cost, lowest first
 Equivalent to breadth-first if step costs all equal
• Complete? Yes, if step cost ≥ ε
• Time? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution
• Space? # of nodes with g ≤ cost of optimal solution, O(b^⌈C*/ε⌉)
• Optimal? Yes – nodes expanded in increasing order of g(n)
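A sketch of uniform-cost search in Python, with the fringe as a priority queue ordered by path cost g; successors and step_cost are assumed as in the earlier Romania sketch:

# Uniform-cost search: expand the least-cost node first.
import heapq

def uniform_cost_search(start, is_goal, successors, step_cost):
    frontier = [(0, start, [start])]             # (g, state, path); lowest g popped first
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path                       # nodes are expanded in increasing order of g
        for action, child in successors(state):
            new_g = g + step_cost(state, action, child)
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g, child, path + [child]))
    return None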
Depth-first search
• Expand deepest unexpanded node
• Implementation: fringe = LIFO stack, i.e., put successors at the front
Depth-first search
DFS Example
 A depth-first search (DFS) explores a path all the way to a leaf before backtracking and exploring another path
 For example, after searching A, then B, then D, the search backtracks and tries another path from B
 Nodes are explored in the order A B D E H L M N I O P C F G J K Q
 N will be found before J
[Figure: the same example tree, root A with children B and C]

How to do depth-first searching
 Put the root node on a stack;
while (stack is not empty) {
remove a node from the stack;
if (node is a goal node) return success;
put all children of node onto the stack;
}
return failure;
At each step, the stack contains some nodes from each
of a number of levels
The size of stack that is required depends on the
branching factor b
While searching level n, the stack contains approximately
b*n nodes
When this method succeeds, it doesn’t give the path
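A stack-based Python version of the loop above; unlike the bare pseudocode it keeps the path, so the solution path is returned when the search succeeds. As before, successors(state) is assumed to return (action, child) pairs:

# Depth-first search: the fringe is a LIFO stack.
def dfs(start, is_goal, successors):
    stack = [[start]]                        # LIFO: last pushed, first expanded
    while stack:
        path = stack.pop()
        state = path[-1]
        if is_goal(state):
            return path
        for _, child in successors(state):
            if child not in path:            # avoid cycles along the current path
                stack.append(path + [child])
    return None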
Depth-first
Homework for the DFS algorithm
Advantages of DFS:
1. Memory requirements in DFS are less compared to BFS
as only nodes on the current path are stored.
2. DFS may find a solution without examining much of the search space at all.

Disadvantages of DFS:
1. DFS is not guaranteed to find a solution even when one exists.
The search can go deeper and deeper into the search space and thus get lost; this is referred to as a blind alley.
Properties of depth-first search
• Complete? No: fails in infinite-depth spaces,
spaces with loops
– Modify to avoid repeated states along path ⇒ complete in finite spaces
• Time? O(b^m): terrible if m is much larger than d
– but if solutions are dense, may be much faster than
breadth-first
 Space? O(bm), i.e., linear space!
 Optimal? No
Pros of DFS
• DFS requires less memory since only the nodes on the current
path are stored
• This contrasts with BFS, where all of the tree that has so far been generated must be stored
• By chance, DFS may find a solution without examining much of the search space at all
• Contrast with BFS, in which all parts of the tree must be
examined to level n before any nodes on level n+1 can be
examined.
• This is particularly significant if many acceptable solutions exist,
• DFS can stop when one of them is found
Comparative analysis (b = 3, d = 2)

FIFO (breadth-first): max list length = b^d = 9
LIFO (depth-first): max list length = 1 + d(b − 1) = 5
Depth-limited searching
Depth-first searches may be performed with a
depth limit:
 boolean limitedDFS(Node node, int limit, int depth) {
     if (depth > limit) return false;           // failure: depth limit reached
     if (node is a goal node) return true;      // success
     for each child of node {
         if (limitedDFS(child, limit, depth + 1))
             return true;
     }
     return false;                              // failure
 }
 Since this method is basically DFS, if it succeeds then the path
to a goal node is in the stack

Iterative deepening search

• Iterative deepening is DFS to a fixed depth in the tree being searched.
• If no solution is found up to this depth, the depth to be searched is increased and the whole 'bounded' depth-first search is begun again.
• It works by setting a depth of search (say, depth 1) and doing depth-first search to that depth.
• If a solution is found, the process stops; otherwise, increase the depth by, say, 1 and repeat until a solution is found.
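A compact Python sketch of iterative deepening as described above: a recursive depth-limited DFS wrapped in a loop that increases the limit. successors(state) is assumed as in the earlier sketches, and max_depth is an arbitrary safety cap:

# Iterative deepening search = repeated depth-limited DFS with growing limit.
def depth_limited(path, is_goal, successors, limit):
    state = path[-1]
    if is_goal(state):
        return path
    if limit == 0:
        return None                          # depth limit reached
    for _, child in successors(state):
        if child not in path:                # avoid cycles along the current path
            result = depth_limited(path + [child], is_goal, successors, limit - 1)
            if result is not None:
                return result
    return None

def iterative_deepening(start, is_goal, successors, max_depth=50):
    for limit in range(max_depth + 1):       # depth 0, 1, 2, ... until a solution appears
        result = depth_limited([start], is_goal, successors, limit)
        if result is not None:
            return result
    return None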
Iterative deepening search l =0/1/2
Iterative deepening search l =3
Properties of iterative deepening search
• Complete? Yes
• Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)
• Space? O(bd), i.e., linear in d
• Optimal? Yes, if step cost = 1
Can be modified to explore a uniform-cost tree
Number of nodes generated in a depth-limited search to depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d

Number of nodes generated in an iterative deepening search to depth d with branching factor b:
N_IDS = (d+1)b^0 + d·b^1 + (d−1)b^2 + … + 3b^(d−2) + 2b^(d−1) + 1·b^d

For b = 10, d = 5:
N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Overhead = (123,456 − 111,111)/111,111 = 11%

 IDS does better because other nodes at depth d are not expanded
 BFS can be modified to apply the goal test when a node is generated
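The b = 10, d = 5 numbers above can be reproduced with a couple of lines of Python:

# Node-generation counts for depth-limited vs. iterative deepening search.
b, d = 10, 5
n_dls = sum(b**i for i in range(d + 1))                   # 111,111
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))     # 123,456
print(n_dls, n_ids, (n_ids - n_dls) / n_dls)              # overhead of about 11%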
Iterative deepening search
Worst case: goal at depth d
DFS: 11 nodes
BFS: 4 nodes
IDS: 4 nodes
Iterative deepening is a popular method of
search. Why?
• DFS can be implemented to be much cheaper than BFS in terms of memory usage, but it is not guaranteed to find a solution even when one exists.
• On the other hand, BFS is guaranteed to terminate if there is a winning state to be found, and it will always find the 'quickest' solution (in terms of how many steps need to be taken from the root node). However, it is a very expensive method in terms of memory usage.
• -Iterative deepening is liked because it is an effective compromise
between the two other methods of search.
• It is a form of DFS with a lower bound on how deep the search can go.
• Iterative deepening terminates if there is a solution.
• It can produce the same solution that BFS would produce but does not
require the same memory usage (as for BFS).
Summary of algorithms
Repeated states
• Failure to detect repeated states can turn a
linear problem into an exponential one!
Graph search
Summary
Problem formulation usually requires abstracting away
real-world details to define a state space that can feasibly
be explored
Variety of uninformed search strategies
Iterative deepening search uses only linear space and not
much more time than other uninformed algorithms
• Graph search can be exponentially more efficient than
tree search
Confession
It is possible that some sentences or some information were included in these slides without citing the exact references. I am sorry for violating rules of intellectual property; when I have a bit more time, I will try my best to avoid such things.
These slides are only for students, in order to give them very basic concepts of the subject, not for experts.
Since I am not an expert, these slides could contain wrong or inconsistent information; I am sorry for that. Students are requested to check the references and books, or to talk to experts.
