

Finding Optimal Path

By: Dr. Anjali Diwan


Brute Force

► The brute force algorithm tries out all the possibilities until a satisfactory
solution is found.
► Brute force search is the most common search algorithm as it does not require
any domain knowledge.
► All that is required is a state description, legal operators, the initial state
and the description of a goal state.
► It does not improve the performance and completely relies on the computing
power to try out possible combinations.
► The brute force algorithm searches all the positions in the text between 0 and
n-m, whether an occurrence of the pattern starts there or not.
► After each attempt, it shifts the pattern to the right by exactly 1 position.
The time complexity of this algorithm is O(m*n).
► If we are searching for a pattern of m characters in a text of n characters, then it
will take up to n*m comparisons (see the sketch after this list).

One example is an attempt to break a 5-digit numeric password; brute
force may take up to 10^5 attempts to crack the code.
► A brute force approach is an approach that finds all the possible solutions to
find a satisfactory solution to a given problem.
► A brute-force algorithm may follow one of two approaches: optimizing (find the best solution) or satisficing (find any solution that is good enough).
► Brute force algorithms often require exponential time. Various heuristics and
optimizations can be used:
► Heuristic: a rule of thumb that helps you decide which possibilities to
look at first.
► Optimization: certain possibilities are eliminated without exploring all of them.
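As a concrete illustration of the pattern-search bullets above, here is a minimal Python sketch of the brute-force (naive) string search; the example text and pattern are made up for demonstration.

def brute_force_search(text, pattern):
    # Try every alignment of the pattern against the text: O(n*m) comparisons.
    n, m = len(text), len(pattern)
    for start in range(n - m + 1):            # positions 0 .. n-m
        if text[start:start + m] == pattern:  # compare the m characters at this shift
            return start                      # first occurrence found
    return -1                                 # pattern does not occur in the text

print(brute_force_search("optimal path finding", "path"))   # prints 8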
Example
► Brute force search considers each and every state of a
tree, and each state is represented in the form of a node.
From the starting position we have two
choices, i.e., state A and state B: we can generate either
state A or state B. In the case of state B, we have two
further states, i.e., state E and state F.
► In the case of brute force search, each state is
considered one by one. As we can observe in the
tree, the brute force search takes 12 steps to find
the solution.
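A minimal sketch of this exhaustive search over a state tree is given below. The tree used here is a small hypothetical stand-in (not the exact tree from the slide), but it shows how every state is generated and checked one by one.

# The tree below is a small hypothetical stand-in for the tree in the slide.
tree = {
    "S": ["A", "B"],            # from the start S we can generate state A or state B
    "A": ["C", "D"],
    "B": ["E", "F"],
    "C": [], "D": [], "E": [], "F": [],
}

def exhaustive_search(node, goal, visited=None):
    # Consider each and every state one by one until the goal is reached.
    if visited is None:
        visited = []
    visited.append(node)
    if node == goal:
        return visited              # every state visited so far, ending at the goal
    for child in tree[node]:
        result = exhaustive_search(child, goal, visited)
        if result is not None:
            return result
    return None

print(exhaustive_search("S", "F"))  # ['S', 'A', 'C', 'D', 'B', 'E', 'F']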
Advantages and Disadvantages of a
brute-force algorithm
► Advantages of the brute-force algorithm:
► This algorithm finds all the possible solutions, and it also guarantees that it finds the
correct solution to a problem.
► This type of algorithm is applicable to a wide range of domains.
► It is mainly used for solving simpler and small problems.
► It can be considered a comparison benchmark to solve a simple problem and does not
require any particular domain knowledge.
► Disadvantages of a brute-force algorithm:
► It is an inefficient algorithm as it requires solving each and every state.
► It is a very slow algorithm to find the correct solution as it solves each state without
considering whether the solution is feasible or not.
► The brute force algorithm is neither constructive nor creative as compared to other
algorithms.
Basic Concept of Branch and Bound

► Branch and bound algorithms are used to find the optimal solution for
combinatorial, discrete, and general mathematical optimization problems. In
general, given an NP-Hard problem, a branch and bound algorithm explores the
entire search space of possible solutions and provides an optimal solution.
► It is used for solving optimization problems and minimization problems.
► It is similar to backtracking since it also uses the state space tree.

The branch and bound algorithm has four steps:

Step 1: Traverse the root node.

Step 2: Traverse any neighbour of the root node that maintains the least
distance from the root node.

Step 3: Traverse any neighbour of that neighbour of the root node that
maintains the least distance from the root node.

Step 4: This process continues until we reach the goal node.
Algorithm:
Step 1: PUSH the root node into the stack.

Step 2: If stack is empty, then stop and return failure.

Step 3: If the top node of the stack is a goal node, then stop and return success.

Step 4: Else POP the node from the stack. Process it and find all its successors.
Find the path cost through each successor (its predecessors plus the successor
itself) and then PUSH the successor that belongs to the minimum or shortest path.

Step 5: Go to step 2.

Step 6: Exit.

STEP 1: Consider node A as the root node. Find its successors C, F and B and
calculate the distance of each from the root:
B = 0+5
F = 0+9
C = 0+7
Here B has the least distance, so B is pushed and is now on top of the stack.

STEP 2: As B is on top of the stack, calculate the neighbours of B:
D = 0+5+4
E = 0+5+6
The least distance is to D from B, so D will be on top of the stack.

STEP 3: As the top of the stack is D, calculate the neighbours of D:
C = 0+5+4+8
F = 0+5+4+3
The least distance is to F from D, and F is our goal node. So stop and return
success.
Hence the searching path will be A-B-D-F.
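The walk-through above can be expressed as a short Python sketch. The edge costs (A-B = 5, A-C = 7, A-F = 9, B-D = 4, B-E = 6, D-C = 8, D-F = 3) are read off the sums in the example; following the slide's procedure, only the successor on the cheapest path is pushed at each step.

# Edge costs read off the sums in the worked example above.
graph = {
    "A": {"B": 5, "C": 7, "F": 9},
    "B": {"D": 4, "E": 6},
    "D": {"C": 8, "F": 3},
    "C": {}, "E": {}, "F": {},
}

def branch_and_bound(start, goal):
    # Each stack entry holds (path so far, cost of that path).
    stack = [([start], 0)]
    while stack:
        path, cost = stack.pop()
        node = path[-1]
        if node == goal:                 # top of the stack is the goal: success
            return path, cost
        successors = [(path + [s], cost + w) for s, w in graph[node].items()]
        if successors:
            # Following the walk-through, push only the successor on the shortest path.
            stack.append(min(successors, key=lambda item: item[1]))
    return None                          # stack is empty: failure

print(branch_and_bound("A", "F"))        # (['A', 'B', 'D', 'F'], 12)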
Algorithm: Best first search

1. Start with OPEN containing just the initial state.

2. Until a goal is found or there are no nodes left on OPEN, do:
1. Pick the best node on OPEN.
2. Generate its successors.
3. For each successor do:
1. If it has not been generated before, evaluate it, add it to OPEN and record its parent.
2. If it has been generated before, change the parent if this new path is better than the
previous one. In that case, update the cost of getting to this node and to any successors
that this node may already have.
Best first search

Here H(x) is an estimation of the cost (e.g. the distance to the goal).

• Step 1: Put the initial node x0 and its heuristic value H(x0) to the
open list.
• Step 2: Take a node x from the top of the open list. If the open list is
empty, stop with failure. If x is the target node, stop with success.
• Step 3: Expand x and get a set S of child nodes. Add x to the closed
list.
• Step 4: For each x’ in S but not in the closed list, estimate its
heuristic value H. If x’ is not in the open list, put x’ along with the
edge (x,x’) and H into the open list; otherwise, if H is smaller than
the old value H(x’), update x’ with the new edge and the new
heuristic value.
• Step 5: Sort the open list according to the heuristic values of the
nodes, and return to Step 2.
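A minimal Python sketch of this best first (greedy) search follows. The graph adjacency list and heuristic table are hypothetical inputs; for simplicity the sketch omits the update of an already-open node described in Step 4, since the heuristic value of a node does not change here.

import heapq

def best_first_search(graph, h, start, goal):
    # Greedy best-first search: always expand the open node with the smallest H value.
    open_list = [(h[start], start)]     # priority queue ordered by heuristic value
    parents = {start: None}
    closed = set()
    while open_list:
        _, x = heapq.heappop(open_list)
        if x == goal:                   # reconstruct the path via recorded parents
            path = []
            while x is not None:
                path.append(x)
                x = parents[x]
            return path[::-1]
        closed.add(x)
        for child in graph[x]:
            if child not in closed and child not in parents:
                parents[child] = x      # record the parent of a newly generated node
                heapq.heappush(open_list, (h[child], child))
    return None                         # open list empty: failure

# Hypothetical graph and heuristic values for illustration
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(best_first_search(graph, h, "A", "D"))   # ['A', 'C', 'D']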
Example 2.6 p. 25

[Figure: 3x3 maze with the heuristic value of each cell — (0,2)=2 (1,2)=1 (2,2)=0; (0,1)=3 (1,1)=2 (2,1)=1; (0,0)=4 (1,0)=3 (2,0)=2.]

Steps   Open List                        Closed List
0       {(0,0),4}                        --
1       {(1,0),3} {(0,1),3}              (0,0)
2       {(2,0),2} {(1,1),2} {(0,1),3}    (0,0) (1,0)
3       {(1,1),2} {(0,1),3}              (0,0) (1,0) (2,0)
4       {(2,1),1} {(1,2),1} {(0,1),3}    (0,0) (1,0) (2,0) (1,1)
5       {(2,2),0} {(1,2),1} {(0,1),3}    (0,0) (1,0) (2,0) (1,1) (2,1)
6       (2,2) = target node              (0,0) (1,0) (2,0) (1,1) (2,1)
The A* Algorithm
• We cannot guarantee the optimal solution using the
“best-first search”. This is because the true cost from the
initial node to the current node is ignored.
• The A* algorithm can solve this problem (A=admissible).
• In the A* algorithm, the cost of a node is evaluated using
both the estimated cost and the true cost as follows:

f(𝑥) = g(𝑥) + h(𝑥)


It has been proved that the A* algorithm can obtain
the best solution provided that the estimated
cost h(x) never exceeds (is conservative with respect to)
the best possible value h*(x).
A* algorithm has 3
parameters:
► g : the cost of moving from the initial cell to the current cell. Basically, it is
the sum of the costs of all the moves made since leaving the first cell.
► h : also known as the heuristic value, it is the estimated cost of moving from
the current cell to the final cell. The actual cost cannot be calculated until the
final cell is reached; hence, h is an estimate. We must make sure that
the cost is never overestimated.
► f : it is the sum of g and h. So, f = g + h

► The algorithm makes its decisions by taking the f-value into account. The
algorithm selects the cell with the smallest f-value and moves to that cell. This
process continues until the algorithm reaches its goal cell.
Example
► The initial node is A and the goal node is E.
The A* Algorithm
• Step 1: Put the initial node x0 and its cost F(x0)=H(x0) to
the open list.
• Step 2: Get a node x from the top of the open list. If the
open list is empty, stop with failure. If x is the target node,
stop with success.
• Step 3: Expand x to get a set S of child nodes. Put x to
the closed list.
The A* Algorithm
• Step 4: For each x’ in S, find its cost
F(x’) = F(x) + d(x, x’) + [H(x’) − H(x)]
– If x’ is in the closed list but the new cost is smaller than the old
one, move x’ to the open list and update the edge (x,x’) and the
cost.
– Else, if x’ is in the open list, but the new cost is smaller than the
old one, update the edge (x,x’) and the cost.
– Else (if x’ is not in the open list nor in the closed list), put x’ along
with the edge (x,x’) and the cost F to the open list.
• Step 5: Sort the open list according to the costs of the
nodes, and return to Step 2.
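The steps above can be condensed into a short Python sketch using a priority queue; the weighted graph and heuristic values below are made-up illustrations, not the graph from the figure.

import heapq

def a_star(graph, h, start, goal):
    # A* search: order the open list by f(x) = g(x) + h(x).
    open_list = [(h[start], 0, start, [start])]      # entries: (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, x, path = heapq.heappop(open_list)
        if x == goal:
            return path, g                           # path and its true cost
        for child, d in graph[x].items():            # d is the edge cost d(x, x')
            new_g = g + d
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g                # better path to child found
                heapq.heappush(open_list, (new_g + h[child], new_g, child, path + [child]))
    return None

# Hypothetical weighted graph and admissible heuristic for illustration
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(a_star(graph, h, "A", "D"))   # (['A', 'B', 'C', 'D'], 4)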
Example 2.7 p. 27

[Figure: two 3x3 maze grids — the left grid shows the true edge costs and the right grid shows the estimated cost (heuristic value) of each cell.]
Steps   Open List               Closed List
0       {(0,0),4}               --
1       {(0,1),5} {(1,0),8}     {(0,0),4}
2       {(0,2),6} {(1,0),8}     {(0,0),4} {(0,1),5}
3       {(1,2),6} {(1,0),8}     {(0,0),4} {(0,1),5} {(0,2),6}
4       {(1,0),8} {(1,1),8}     {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6}
5       {(1,1),8} {(2,0),8}     {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6} {(1,0),8}
6       {(2,0),8} {(2,1),10}    {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6} {(1,0),8} {(1,1),8}
7       {(2,1),10}              {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6} {(1,0),8} {(1,1),8} {(2,0),8}
8       {(2,2),10}              {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6} {(1,0),8} {(1,1),8} {(2,0),8} {(2,1),10}
9       (2,2) = target node     {(0,0),4} {(0,1),5} {(0,2),6} {(1,2),6} {(1,0),8} {(1,1),8} {(2,0),8} {(2,1),10}
What is an A* Algorithm?

• It is a searching algorithm that is used to find the shortest path between an
initial and a final point.
• It searches for shorter paths first, thus making it an optimal and complete
algorithm. An optimal algorithm will find the least-cost outcome for a problem,
while a complete algorithm finds all the possible outcomes of a problem.
• A* uses weighted graphs in its implementation. A weighted graph uses numbers
to represent the cost of taking each path or course of action.
Property of the A* Algorithm

• The A* algorithm can obtain the best solution


because it considers both the cost calculated up
to now and the estimated future cost.
• A sufficient condition for obtaining the best
solution is that the estimated future cost does not
exceed the best possible value.
• If this condition is not satisfied, we may not be
able to get the best solution. Algorithms of this
kind are called A algorithms.
Comparison between A* algorithms
• Suppose that A1 and A2 are two A* algorithms. If for every
node x we have H1(x) > H2(x) (with both heuristics still admissible),
we say A1 is more efficient.
• That is, if H(x) is closer to the best value H*(x), the
algorithm is better.
• If H(x) = H*(x), we can get the solution immediately
(because we know which way to go from any “current”
node).
• On the other hand, if H(x) = 0 for all x (no estimation, most
conservative), the A* algorithm becomes the uniform
cost search algorithm.
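To see the last point concretely, here is a sketch of uniform cost search, i.e. A* with H(x) = 0 everywhere, so nodes are ordered purely by the true cost accumulated so far; the weighted graph is a hypothetical illustration.

import heapq

def uniform_cost_search(graph, start, goal):
    # Equivalent to A* with H(x) = 0: nodes are ordered purely by true path cost g.
    open_list = [(0, start, [start])]
    best_g = {start: 0}
    while open_list:
        g, x, path = heapq.heappop(open_list)
        if x == goal:
            return path, g
        for child, d in graph[x].items():
            if g + d < best_g.get(child, float("inf")):
                best_g[child] = g + d
                heapq.heappush(open_list, (g + d, child, path + [child]))
    return None

# Hypothetical weighted graph for illustration
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
print(uniform_cost_search(graph, "A", "D"))   # (['A', 'B', 'C', 'D'], 4)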
Some heuristics for finding H(x)

• For any given node x, we need an estimation function to find H(x).


• For the maze problem, for example, H(x) can be estimated by using the
Manhattan distance between the current node and the target node. This distance
is usually smaller than the true cost (and this is good) because some edges
may not exist in practice (see the questions given below).
• For more complex problems, we may need a method (e.g. neural network)
to learn the estimation function from experiences or observed data.
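For the maze case, the Manhattan-distance estimate is a one-line function; the coordinates below match the cell labels used in Example 2.6.

def manhattan(cell, target):
    # Manhattan distance between two grid cells (x, y).
    (x, y), (tx, ty) = cell, target
    return abs(x - tx) + abs(y - ty)

print(manhattan((0, 0), (2, 2)))   # 4, matching H((0,0)) in Example 2.6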

• If there is no edge between two nodes x and x’, what


is the cost d(x,x’)?
• What will happen if the weights are smaller than 1?
Generalization of Search

• Generally speaking, search is a problem for finding the


best solution x* from a given domain D.
• Here, D is a set containing all “states” or candidate
solutions.
• Each state is often represented as an n-dimensional state
vector or feature vector.
• To see if a state x is the best (or the desired) one or not,
we need to define an objective function (e.g. cost
function) f(x). The problem can be formulated by

min f(x), for x in D
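A tiny sketch of this formulation: the domain D and the objective f below are arbitrary stand-ins, and the search is done by brute-force enumeration of D.

# The domain D and the objective f below are arbitrary stand-ins for illustration.
D = [(x1, x2) for x1 in range(5) for x2 in range(5)]   # all candidate state vectors
f = lambda x: (x[0] - 3) ** 2 + (x[1] - 1) ** 2        # cost function to be minimised

x_star = min(D, key=f)             # exhaustive search over the whole domain
print(x_star, f(x_star))           # (3, 1) 0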


IDA-Star(IDA*) Algorithm

• Iterative deepening A* (IDA*) is a graph traversal and path search algorithm


that can find the shortest path between a designated start node and any
member of a set of goal nodes in a weighted graph.

• It is a variant of iterative deepening depth-first search.


• Since it is a depth-first search algorithm, its memory usage is lower than in A*.

• IDA* is a memory-constrained version of A*.

• It has the optimality characteristics of A* for finding the shortest path, but it
uses less memory than A*.
Pros and Cons
► Pros:
• It will always find the optimal solution, provided that one exists and that, if a
heuristic is supplied, it is admissible.
• A heuristic is not necessary; it is only used to speed up the process.
• Various heuristics can be integrated into the algorithm without changing the
basic code.
• The cost of each move can be tweaked in the algorithm as easily as the
heuristic.
• It uses a lot less memory, which increases only linearly with depth, as it does not
store all visited nodes: it forgets them after reaching a certain depth and starts
over again.

► Cons:
• It doesn’t keep track of visited nodes and thus explores already explored nodes
again.
• It is slower due to repeating the exploration of explored nodes.
• It requires more processing power and time than A*.
Steps of Algorithm

1. For each child of the current node


2. If it is the target node, return
3. If the distance so far plus the heuristic exceeds the current threshold, return
this exceeding f-value (a candidate for the next threshold)
4. Set the current node to this node and go back to 1.
5. After having gone through all children, go to the next child of the
parent (the next sibling)
6. After having gone through all children of the start node, increase the
threshold to the smallest of the exceeding thresholds.
7. If we have reached all leaf (bottom) nodes, the goal node doesn’t exist.
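These steps correspond to the following Python sketch of IDA*; the weighted graph and heuristic values are hypothetical, not the graph from the example that follows.

import math

def ida_star(graph, h, start, goal):
    # Iterative deepening A*: repeated depth-first searches bounded by f = g + h.

    def search(path, g, bound):
        node = path[-1]
        f = g + h[node]
        if f > bound:
            return f                          # threshold breached: report the f-value
        if node == goal:
            return "FOUND"
        minimum = math.inf                    # smallest f-value that exceeded the bound
        for child, d in graph[node].items():
            if child not in path:             # avoid cycles along the current path
                path.append(child)
                t = search(path, g + d, bound)
                if t == "FOUND":
                    return "FOUND"
                minimum = min(minimum, t)
                path.pop()
        return minimum

    bound = h[start]
    path = [start]
    while True:
        t = search(path, 0, bound)
        if t == "FOUND":
            return path                       # goal found within the current threshold
        if t == math.inf:
            return None                       # all leaves reached: the goal is unreachable
        bound = t                             # raise the threshold to the smallest excess

# Hypothetical weighted graph and heuristic values for illustration
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(ida_star(graph, h, "A", "D"))           # ['A', 'B', 'C', 'D']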
Example

• The steps the algorithm performs on this graph, if given node 0 as a
starting point and node 6 as the goal, are, in order:
• Iteration with threshold: 6.32
• Visiting Node 0
• Visiting Node 1
• Breached threshold with heuristic:
8.66
• Visiting Node 2
• Breached threshold with heuristic:
7.00
• Iteration with threshold: 7.00
• Visiting Node 0
• Visiting Node 1
• Breached threshold with heuristic:
8.66
• Visiting Node 2
• Visiting Node 5
• Breached threshold with heuristic: 8.83
• Iteration with threshold: 8.66
• Visiting Node 0
• Visiting Node 1
• Visiting Node 3
• Breached threshold with heuristic: 12.32
• Visiting Node 4
• Breached threshold with heuristic: 8.83
• Visiting Node 2
• Visiting Node 5
• Breached threshold with heuristic: 8.83
• Iteration with threshold: 8.83
• Visiting Node 0
• Visiting Node 1
• Visiting Node 3
• Breached threshold with heuristic: 12.32
• Visiting Node 4
• Visiting Node 2
• Visiting Node 5
• Visiting Node 6
• Found the node we’re looking for!
► Final lowest distance from node 0 to node 6: 9
► Runtime of the Algorithm
► The runtime complexity of Iterative Deepening A Star is in principle the
same as that of Iterative Deepening DFS. In practice, though, if we choose a
good heuristic, many of the paths can be eliminated before they are
explored, making for a significant time improvement.

► Space of the Algorithm


► The space complexity of Iterative Deepening A Star is the amount of
storage needed for the tree or graph. O(|N|), |N| = number of Nodes in
the tree or graph, which can be replaced with b^d for trees, where b is
the branching factor and d is the depth. Additionally, whatever space
the heuristic requires.
AO* Search: (And-Or) Graph

• The depth first search and breadth first search given earlier for OR trees or
graphs can be easily adapted to AND-OR graphs.

• The main difference lies in the way termination conditions are determined,
since all goals following an AND node must be realized,
whereas a single goal node following an OR node will do. For this purpose
we use the AO* algorithm.

► Like the A* algorithm, here we will use two arrays and one heuristic function.

► OPEN: It contains the nodes that have been traversed but have not yet been marked
solvable or unsolvable.
► CLOSE: It contains the nodes that have already been processed.
Algorithm:
► Step 1: Place the starting node into OPEN.

► Step 2: Compute the most promising solution tree, say T0.

► Step 3: Select a node n that is both on OPEN and a member of T0. Remove it from OPEN and
place it in CLOSE.

► Step 4: If n is the terminal goal node, then label n as solved and label all the ancestors of
n as solved. If the starting node is marked as solved, then return success and exit.

► Step 5: If n is not a solvable node, then mark n as unsolvable. If the starting node is marked as
unsolvable, then return failure and exit.

► Step 6: Expand n. Find all its successors, find their h(n) values, and push them into OPEN.

► Step 7: Return to Step 2.

► Step 8: Exit.
► Advantages:

► It is an optimal algorithm.

► It traverses according to the ordering of nodes. It can be used for both OR and
AND graphs.

► Disadvantages:

► Sometimes, for unsolvable nodes, it can’t find the optimal path. Its complexity
is higher than that of other algorithms.
