
Module 1: Searching (2024)


Problem, Problem Spaces and Search
Problem Solving: Introduction
Problem
• A problem is an issue that arises in a system and for which a solution is needed.
Problem Searching
• In general, searching refers to finding the information one needs.
• Searching is the most commonly used technique for problem solving in artificial intelligence.
• A searching algorithm helps us find a solution to a particular problem.
Problem Formulation
•Initial state
•Action
•Transition model
•Path cost
•Goal state
Road map of Romania
• Initial state: the state the agent starts in. For example, the initial state for our agent in Romania might be described as In(Arad).
• Actions: a description of the possible actions available to the agent, e.g. {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
• Transition model: RESULT(s, a) returns the state that results from doing action a in state s, e.g. RESULT(In(Arad), Go(Zerind)) = In(Zerind).
• Path cost: assigns a numeric cost to each path; in the present case, the distance in km.
• Goal test: determines whether a given state is a goal state.
Vacuum World Problem
8 puzzle problem
8 – queen problem
Search Problem
• A search problem consists of:
• A state space: the set of all possible states you can be in.
• A start state: the state from which the search begins.
• A goal test: a function that looks at the current state and returns whether or not it is the goal state.
• Rules: describe the actions available.
A Water Jug Problem
• We are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring marks on it. There is a pump which can be used to fill the jugs with water. How can we get exactly 2 gallons of water into the 4-gallon jug?
Solution:
The state space for this problem can be described as the set of ordered pairs of integers (X, Y) such that X = 0, 1, 2, 3 or 4 and Y = 0, 1, 2 or 3; X is the number of gallons of water in the 4-gallon jug and Y is the quantity of water in the 3-gallon jug.
The start state is (0, 0) and the goal state is (2, n) for any value of n, as the problem does not specify how many gallons need to be left in the 3-gallon jug.
Water Jug Problem
As in chess playing, the operators are represented as rules whose left sides are matched against the current state and whose right sides describe the new state that results from applying the rule.
Water Jug Problem
To describe the operators completely, here are some assumptions not mentioned in the problem statement:
1. We can fill a jug from the pump.
2. We can pour water out of a jug onto the ground.
3. We can pour water from one jug into the other.
4. No other measuring devices are available.
Process: a rule whose left side matches the current state is chosen, the appropriate change to the state is made as described in the corresponding right side, and the resulting state is checked to see whether it corresponds to a goal state. The loop continues as long as the goal has not been reached. The speed with which the problem is solved depends upon the mechanism (the control structure) used to select the next operation.
Solution
There are several sequences of operators that will solve the problem; two such sequences are shown. A brief search sketch for finding one such sequence follows.
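As an illustration of the rule-matching process above, here is a minimal sketch (not from the original slides) that encodes the pouring operators as a successor function and uses breadth-first search to find one operator sequence reaching the goal (2, n). Names such as successors and solve_water_jug are illustrative assumptions.

from collections import deque

CAP_X, CAP_Y = 4, 3  # jug capacities in gallons

def successors(state):
    """Apply every operator (fill, empty, pour) to the state (x, y)."""
    x, y = state
    return {
        (CAP_X, y),                                        # fill the 4-gallon jug
        (x, CAP_Y),                                        # fill the 3-gallon jug
        (0, y),                                            # empty the 4-gallon jug
        (x, 0),                                            # empty the 3-gallon jug
        (min(x + y, CAP_X), y - (min(x + y, CAP_X) - x)),  # pour 3-gallon into 4-gallon
        (x - (min(x + y, CAP_Y) - y), min(x + y, CAP_Y)),  # pour 4-gallon into 3-gallon
    }

def solve_water_jug():
    """Breadth-first search from (0, 0) to any state with 2 gallons in the 4-gallon jug."""
    frontier = deque([[(0, 0)]])
    visited = {(0, 0)}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if x == 2:
            return path  # sequence of states reaching the goal
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(solve_water_jug())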
The Missionaries and Cannibals Problem
• Three missionaries and three cannibals find themselves on one side of a river. They would like to get to the other side, but the missionaries are not sure what else the cannibals have agreed to. So the missionaries want to manage the trip across the river in such a way that the number of missionaries on either side of the river is never less than the number of cannibals on the same side. The only boat available holds only two people at a time. How can everyone get across the river without the missionaries risking being eaten?
Solution
• The state for this problem can be defined as
• {(i, j) | i = 0, 1, 2, 3; j = 0, 1, 2, 3}
• where i represents the number of missionaries on one side of the river and j represents the number of cannibals on the same side. The initial state is (3, 3), that is, three missionaries and three cannibals on one side of the river (bank 1), with (0, 0) on the other side (bank 2). The goal state is to get (3, 3) at bank 2 and (0, 0) at bank 1.
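A minimal sketch (assumed, not from the slides) of the safety constraint and a breadth-first search over states (m, c, boat), where m and c count missionaries and cannibals on bank 1 and boat marks which bank the boat is on. Function names are illustrative.

from collections import deque

def safe(m, c):
    """Missionaries are never outnumbered on either bank (bank 2 holds 3 - m, 3 - c)."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve_missionaries():
    start, goal = (3, 3, 1), (0, 0, 0)                # boat = 1 means the boat is at bank 1
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # (missionaries, cannibals) per crossing
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        m, c, b = path[-1]
        if (m, c, b) == goal:
            return path
        sign = -1 if b == 1 else 1                    # crossing removes people from the boat's bank
        for dm, dc in moves:
            nm, nc, nb = m + sign * dm, c + sign * dc, 1 - b
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and (nm, nc, nb) not in visited:
                visited.add((nm, nc, nb))
                frontier.append(path + [(nm, nc, nb)])
    return None

print(solve_missionaries())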
8 Puzzle Problem
• The 8-puzzle consists of eight numbered, movable tiles set in a 3x3 frame. One cell of the frame is always empty, thus making it possible to move an adjacent numbered tile into the empty cell. Such a puzzle is illustrated in the following diagram.
Travelling Salesman Problem
• A traveller needs to visit all the cities on a list, where the distances between all the cities are known and each city should be visited just once. What is the shortest possible route such that he visits each city exactly once and returns to the origin city?

No. of cities | Possible routes
1             | (trivial)
2             | 1-2-1
3             | 1-2-3-1, 1-3-2-1
4             | 1-2-3-4-1, 1-2-4-3-1, 1-3-2-4-1, 1-3-4-2-1, 1-4-2-3-1, 1-4-3-2-1

• The number of routes is proportional to the factorial of (number of cities - 1); for 10 cities this is 9!.
• This is a combinatorial explosion: the number of routes increases so rapidly that there is no practical exhaustive solution for a realistic number of cities.
TSP
To summarize, any problem can be solved by the following series of steps:
1. Define a state space which contains all the possible configurations of
the relevant objects and even some impossible ones.
2. Specify one or more states within that space which would describe
possible situations from which the problem solving process may start.
These states are called the initial states.
3. Specify one or more states which would be acceptable as solutions to
the problem. These states are called goal states.
4. Specify a set of rules which describe the actions (operators) available
and a control strategy to decide the order of application of these rules.
Search Strategies
• We need to define the problem statement first and then generate the solution keeping the conditions in mind.
• Some of the most popular problems solved with the help of AI search are: chess, travelling salesman, water jug, Tower of Hanoi, and the N-queen problem.
Search Strategies
• Uninformed search: no additional information about states beyond that provided in the problem definition. All it can do is generate successors and distinguish a goal state from a non-goal state. These search strategies are distinguished by the order in which nodes are expanded.
• Informed search: strategies that know whether one non-goal state is "more promising" than another are called informed search or heuristic search.
Control Strategies
• If more than one rule has its left side matching the current state, we need a control strategy to decide which to apply.
• If the control strategy is not systematic, we may explore a useless sequence of operators several times before finding a solution.
• For the water jug problem we may construct a tree with the initial state as the root. Generate the offspring of the root by applying each applicable rule to the initial state.
• Continue this process until some rule produces a goal state.
• This process is called breadth-first search.
Breadth First Search

Breadth-first search is a simple strategy in which the root node is expanded first, then all the
successors of the root node are expanded next, then their successors, and so on.
FIFO queue
The goal test is applied to each node when it is generated rather than when it is selected for expansion
Breadth First Search
• In this strategy, the root node is expanded first, then all the nodes generated by the root node are expanded next, then their successors, and so on.
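A minimal Python sketch of the breadth-first strategy described above, using a FIFO queue and applying the goal test when a node is generated. The graph, successor function, and names are illustrative assumptions.

from collections import deque

def breadth_first_search(start, goal, successors):
    """Expand shallowest nodes first; goal-test nodes as they are generated."""
    if start == goal:
        return [start]
    frontier = deque([[start]])          # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        for child in successors(path[-1]):
            if child in explored:
                continue
            if child == goal:            # test on generation, not on expansion
                return path + [child]
            explored.add(child)
            frontier.append(path + [child])
    return None

# Example usage on a small hypothetical graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(breadth_first_search("A", "E", lambda n: graph[n]))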
BFS for 8 puzzle problem
Uniform-Cost Search
• Expands the node with the lowest path cost among the available nodes.
• Differences from BFS:
• First, the goal test is applied to a node when it is selected for expansion rather than when it is generated.
• Second, a test is added in case a better path is found to a node currently on the frontier.
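A sketch of uniform-cost search with a priority queue ordered by path cost, applying the goal test at expansion time and keeping the cheaper path when a better route to a frontier node is found. The graph and names are illustrative assumptions.

import heapq

def uniform_cost_search(start, goal, successors):
    """successors(n) yields (child, step_cost) pairs; expands the lowest-cost node first."""
    frontier = [(0, start, [start])]     # entries are (path_cost, node, path)
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                 # goal test on expansion, not on generation
            return cost, path
        if cost > best_cost.get(node, float("inf")):
            continue                     # a cheaper path to this node was already found
        for child, step in successors(node):
            new_cost = cost + step
            if new_cost < best_cost.get(child, float("inf")):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, child, path + [child]))
    return None

# Example usage on a small weighted graph
graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)], "D": []}
print(uniform_cost_search("A", "D", lambda n: graph[n]))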
Depth-first search
• DFS always expands the deepest node in the current frontier of the search tree.
• The search proceeds to the deepest level, where the nodes have no successors.
• As nodes are expanded they are dropped from the frontier, so the search then backs up to the next deepest node that still has unexplored successors.
• LIFO (stack).
• The most recently generated node is chosen for expansion.
Depth-First Search
• We pursue a single branch of the tree until it yields a solution or until a decision to terminate the path is made.
• It makes sense to terminate a path if it reaches a dead end or produces a previously visited state.
• Backtracking: the current state from which alternate moves are available will be revisited and a new state will be created; this is called chronological backtracking.
Algorithm
Push the root node onto the stack
While (stack not empty)
    Pop a node
    If it is a goal node, return success
    Push all the children of the node onto the stack
Return failure
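The same algorithm as a runnable Python sketch (an illustrative iterative version with an explicit stack and a visited set to avoid loops; names are assumptions).

def depth_first_search(root, is_goal, children):
    """Iterative DFS: expand the most recently generated node first."""
    stack = [root]
    visited = set()
    while stack:
        node = stack.pop()               # LIFO: deepest node first
        if is_goal(node):
            return node
        if node in visited:
            continue
        visited.add(node)
        stack.extend(children(node))     # children are pushed, so the last child is explored first
    return None                          # failure: no goal found

# Example usage on a small hypothetical graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(depth_first_search("A", lambda n: n == "E", lambda n: graph.get(n, [])))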
Performance of BFS, Uniform-Cost, and DFS
• Strategies are evaluated along the following criteria:
– Completeness: does it always find a solution if one exists?
– Optimality: does it always find a least-cost solution?
– Time complexity: number of nodes generated
– Space complexity: maximum number of nodes in memory
• Time and space complexity are measured in terms of
– b: maximum branching factor of the search tree
– d: depth of the optimal solution
– m: maximum length of any path in the state space (may be infinite)
Properties of breadth-first search
• Complete?
Yes (if the branching factor b is finite)
• Optimal?
Yes, if cost = 1 per step
• Time?
Number of nodes in a b-ary tree of depth d: O(b^d)
(d is the depth of the optimal solution)
• Space?
O(b^d)
• Space is the bigger problem (more than time)
Properties of depth-first search
• Complete?
No: fails in infinite-depth spaces and in spaces with loops
Modify to avoid repeated states along the path
→ complete in finite spaces
• Optimal?
No: returns the first solution it finds
• Time?
Could be the time to reach a solution at maximum depth m: O(b^m)
Terrible if m is much larger than d
But if there are many solutions, it may be much faster than BFS
• Space?
O(bm), i.e. linear in the maximum depth m (only the current path and its siblings are stored)
Iterative deepening search
• Use DFS as a subroutine
1. Check the root
2. Do a DFS searching for a path of length 1
3. If there is no path of length 1, do a DFS searching for a path of
length 2
4. If there is no path of length 2, do a DFS searching for a path of
length 3…
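A sketch of iterative deepening built on a depth-limited DFS subroutine, increasing the limit until a solution is found; 'cutoff' distinguishes hitting the depth limit from genuine failure. Names and the example graph are illustrative assumptions.

def depth_limited_search(node, is_goal, children, limit):
    """DFS down to a fixed depth; returns a path, 'cutoff', or None (failure)."""
    if is_goal(node):
        return [node]
    if limit == 0:
        return "cutoff"
    cutoff_occurred = False
    for child in children(node):
        result = depth_limited_search(child, is_goal, children, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [node] + result
    return "cutoff" if cutoff_occurred else None

def iterative_deepening_search(root, is_goal, children, max_depth=50):
    """Run depth-limited search with limits 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(root, is_goal, children, limit)
        if result != "cutoff":
            return result
    return None

# Example usage on a small hypothetical graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["F"], "F": []}
print(iterative_deepening_search("A", lambda n: n == "F", lambda n: graph.get(n, [])))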
Properties of iterative deepening search
• Complete?
Yes
• Optimal?
Yes, if step cost = 1
• Time?
(d+1)b^0 + d·b^1 + (d-1)b^2 + … + b^d = O(b^d)
• Space?
O(bd)
Depth Limited Search
• DFS is performed up to a predefined depth limit.
• It can terminate with two kinds of failure:
• The standard failure value indicates that there is no solution.
• The cutoff value indicates that there is no solution within the depth limit (see the depth-limited sketch above).
BFS vs DFS

BFS                                         | DFS
BFS traverses the tree level-wise           | DFS traverses the tree depth-wise
Uses a queue                                | Uses a stack
No backtracking is required                 | Backtracking is implemented in DFS
BFS never gets trapped in an infinite loop  | DFS can get trapped in infinite loops
Travelling Salesman Problem – Need for Heuristics:
Informed Search
• A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities.
• For N cities the number of possible paths is (N-1)!.
• Heuristic search: no longer guarantees to find the best answer, but will find a very good answer.
Definitions:
Newell, Shaw, and Simon (1958):
✔ "A process that MAY solve a given problem, but offers no guarantees of doing so, is called a heuristic for that problem."
Feigenbaum and Feldman (1963):
✔ "A heuristic (heuristic rule, heuristic method) is a rule of thumb, strategy, trick, simplification, or any other kind of device which drastically limits search for solutions in large problem spaces. Heuristics do not guarantee optimal solutions; in fact, they do not guarantee any solution at all. At the most what can be said for a useful heuristic is that it offers solutions which are good enough most of the time."
General Purpose
Applying it to the travelling salesman problem, we get the following procedure (a sketch follows below):
1. Arbitrarily select a starting city.
2. To select the next city, look at all cities not yet visited and select the one closest to the current city. Go to it.
3. Repeat step 2 until all cities have been visited.
With this heuristic the TSP can be executed in time proportional to N^2, instead of (N-1)!.
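A minimal sketch of the nearest-neighbour procedure above on a hypothetical symmetric distance table (city names and distances are made up for illustration).

def nearest_neighbour_tour(cities, dist, start):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited city, then return home."""
    tour, unvisited = [start], set(cities) - {start}
    current = start
    while unvisited:
        current = min(unvisited, key=lambda c: dist[current][c])  # closest unvisited city
        tour.append(current)
        unvisited.remove(current)
    tour.append(start)                    # close the round trip
    return tour, sum(dist[a][b] for a, b in zip(tour, tour[1:]))

# Example usage with an illustrative distance table
dist = {
    "P": {"P": 0, "Q": 10, "R": 15, "S": 20},
    "Q": {"P": 10, "Q": 0, "R": 35, "S": 25},
    "R": {"P": 15, "Q": 35, "R": 0, "S": 30},
    "S": {"P": 20, "Q": 25, "R": 30, "S": 0},
}
print(nearest_neighbour_tour(list(dist), dist, "P"))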
Best First Search
• The search starts from the root node.
• The node to be expanded next is selected on the basis of an evaluation function f(n).
• The node having the lowest value of f(n) is selected first.
• A low value of f(n) indicates that the goal is nearest from this node.
• It is implemented using a priority queue: highest priority is given to the node with the lowest f(n) value.
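A sketch of the general best-first scheme above: a priority queue ordered by an evaluation function f(n). With f(n) = h(n) this becomes the greedy best-first search discussed below. The graph and heuristic values here are illustrative assumptions, not the graph from the slides.

import heapq
from itertools import count

def best_first_search(start, is_goal, successors, f):
    """Always expand the node with the lowest f(n) from the priority queue."""
    tie = count()                                   # tie-breaker so heapq never compares paths
    frontier = [(f(start), next(tie), [start])]
    visited = set()
    while frontier:
        _, _, path = heapq.heappop(frontier)
        node = path[-1]
        if is_goal(node):
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in successors(node):
            if child not in visited:
                heapq.heappush(frontier, (f(child), next(tie), path + [child]))
    return None

# Greedy best-first on an illustrative graph: f(n) = h(n)
graph = {"A": ["B", "C"], "B": ["D"], "C": ["G"], "G": ["H"], "D": [], "H": []}
h = {"A": 10, "B": 6, "C": 4, "D": 5, "G": 2, "H": 0}
print(best_first_search("A", lambda n: n == "H", lambda n: graph[n], lambda n: h[n]))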
Informed Search
Heuristic
• It is a guided search.
• A method that might not always find the best solution, but guarantees a good solution in reasonable time.
• Helpful for problems that would otherwise take an impractically long time.
• It sacrifices completeness but increases efficiency.
• Heuristic function:
• A heuristic function is a way to inform the search about the direction to a goal.
• It provides an informed way to guess which neighbour of a node will lead to a goal.
• It estimates the cost of the cheapest path from node n to a goal node.
• It guides the search process along the most profitable path among all the available paths.
Greedy Best First Search
• Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n).
• For route-finding problems in Romania we use the straight-line-distance heuristic h_SLD.
• If the goal is Bucharest, we need to know the straight-line distances to Bucharest, e.g. h_SLD(In(Arad)) = 366.
• Notice that the values of h_SLD cannot be computed from the problem description itself (they have to be provided).
Example
◻ Consider the graph below with the given heuristic values.
◻ Here, A is the start node and H is the goal node.
◻ What is the path traversal of greedy best-first search?
◻ Answer: A → C → G → H
A* Algorithm
Consider the graph below with the given heuristic values.
Here, A is the start node and H is the goal node.
Then what is the path traversal of A* search?
Answer: A → C → G → H
Admissible Heuristic
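A heuristic is admissible if it never overestimates the true cost to the goal, and with an admissible heuristic A* returns an optimal solution. Below is a minimal A* sketch ordering the queue by f(n) = g(n) + h(n); the weighted graph, heuristic values, and function names are illustrative assumptions, not the graph from the slides.

import heapq
from itertools import count

def a_star_search(start, goal, successors, h):
    """successors(n) yields (child, step_cost); expands the node with lowest g(n) + h(n)."""
    tie = count()
    frontier = [(h(start), next(tie), 0, [start])]   # entries are (f, tie, g, path)
    best_g = {start: 0}
    while frontier:
        f, _, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return g, path
        for child, step in successors(node):
            new_g = g + step
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + h(child), next(tie), new_g, path + [child]))
    return None

# Example usage on an illustrative weighted graph with an admissible heuristic
graph = {"A": [("B", 2), ("C", 1)], "B": [("D", 3)], "C": [("G", 2)], "G": [("H", 2)], "D": [], "H": []}
h = {"A": 5, "B": 6, "C": 4, "D": 7, "G": 2, "H": 0}
print(a_star_search("A", "H", lambda n: graph[n], lambda n: h[n]))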
AO* Algorithm
◻ Best-first search is what the AO* algorithm does.
◻ The AO* method divides any given difficult problem into a smaller group of problems that are then resolved using the AND-OR graph concept.
◻ AND-OR graphs are specialized graphs that are used for problems that can be divided into smaller sub-problems.
◻ The AND side of the graph represents a set of tasks that must all be completed to achieve the main goal, while the OR side of the graph represents alternative methods for accomplishing the same main goal.
Example of a simple AND-OR graph.
Working of the AO* algorithm:
The evaluation function in AO* looks like this:
f(n) = g(n) + h(n)
f(n) = actual cost + estimated cost
where
f(n) = the estimated total cost of the traversal through node n,
g(n) = the cost from the initial node to the current node,
h(n) = the estimated cost from the current node to the goal state.
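A small sketch of how the cost of each outgoing arc can be computed in an AND-OR graph, matching the worked example below: an OR arc costs edge + h(child), while an AND arc sums edge + h(child) over all the children it connects. The data structures and names are assumptions for illustration; heuristic values are the ones used in the example.

def arc_cost(children, h, edge_cost=1):
    """Cost of one AND/OR arc: sum of (edge + heuristic) over the children it connects."""
    return sum(edge_cost + h[c] for c in children)

def best_arc(arcs, h, edge_cost=1):
    """Pick the cheapest arc among a node's choices; each arc is a tuple of children."""
    return min(arcs, key=lambda children: arc_cost(children, h, edge_cost))

# Values from the worked example below: h(G) = 3, h(H) = h(I) = 0
h = {"G": 3, "H": 0, "I": 0}
print(arc_cost(("G",), h))                 # f(C -> G)   = 1 + 3 = 4
print(arc_cost(("H", "I"), h))             # f(C -> H+I) = 1 + 0 + 1 + 0 = 2
print(best_arc([("G",), ("H", "I")], h))   # the AND arc H+I is the cheaper choice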

Difference between the A* algorithm and the AO* algorithm
◻ The A* algorithm and the AO* algorithm both work on best-first search.
◻ They are both informed searches and work on given heuristic values.
◻ A* always gives the optimal solution, but AO* does not guarantee an optimal solution.
◻ Once AO* finds a solution it does not explore all possible paths, whereas A* explores all paths.
◻ When compared to the A* algorithm, the AO* algorithm uses less memory.
◻ Unlike the A* algorithm, the AO* algorithm cannot go into an endless loop.
Example
Example: Step 1
• In the example above, the value given below each node is its heuristic value, i.e. h(n).
• Each edge length is taken as 1, i.e. the step cost for g(n).
• Node values are computed with the evaluation function f(n) = g(n) + h(n).
Example: Step 2
According to the answer of Step 1, explore node B.
Here the values of E and F are calculated as follows:
f(B⇢E) = g(E) + h(E) = 1 + 7 = 8
f(B⇢F) = g(F) + h(F) = 1 + 9 = 10
• The path B⇢E is chosen because it has the minimum cost, i.e. f(B⇢E).
• Because B's heuristic value is different from its actual cost, the heuristic is updated and the minimum-cost path is selected.
Example: Step 2
• The minimum value in this situation is 8.
• Therefore, the heuristic for A must be updated due to the change in B's heuristic.
• So we need to calculate it again:
f(A⇢B) = g(B) + updated h(B) = 1 + 8 = 9
• We have updated all values in the above tree.
Example: Step 3
• By comparing f(A⇢B) and f(A⇢C+D), f(A⇢C+D) is seen to be smaller, i.e. 8 < 9.
• Now explore f(A⇢C+D).
• So the current node is C:
f(C⇢G) = g(G) + h(G) = 1 + 3 = 4
f(C⇢H+I) = g(H) + h(H) + g(I) + h(I) = 1 + 0 + 1 + 0 = 2
• Here we have added H and I together because they are connected by an AND arc.
Example: Step 3
• f(C⇢H+I) is selected as the path with the lowest cost, and the heuristic is left unchanged because it matches the actual cost.
• Paths H and I are solved because the heuristic for those paths is 0.
• Path A⇢D still needs to be calculated because D is part of an AND arc.
Example: Step 3
f(D⇢J) = g(J) + h(J) = 1 + 0 = 1
so the heuristic of node D needs to be updated to 1.
f(A⇢C+D) = g(C) + h(C) + g(D) + h(D) = 1 + 2 + 1 + 1 = 5
The path f(A⇢C+D) is now solved, and this tree has become a solved tree.
AO* Performance
◻ The AO* algorithm is not optimal because it stops as soon as it finds a solution and does not explore all the paths.
◻ However, AO* is complete, meaning it finds a solution if there is one, and does not fall into an infinite loop.
◻ Moreover, the AND feature in this algorithm reduces the demand for memory.
◻ The strength of this algorithm comes from its divide-and-conquer strategy.
Real-Life Applications of the AO* algorithm:
◻ Vehicle Routing Problem:
◻ The vehicle routing problem is determining the shortest routes for a fleet of vehicles to visit a set of customers and return to the depot, while minimizing the total distance traveled and the total time taken. The AO* algorithm can be used to find the optimal routes that satisfy both objectives.
◻ Portfolio Optimization:
◻ Portfolio optimization is choosing a set of investments that maximize returns while minimizing risks. The AO* algorithm can be used to find the optimal portfolio that satisfies both objectives, such as maximizing the expected return and minimizing the standard deviation.
Hill Climbing
1. Evaluate the initial state; if it is a goal state, then return it and quit.
2. Loop until a solution is found or there are no new operators left:
3. Select and apply a new operator.
4. Evaluate the new state:
a. If it is a goal state, then quit.
b. If it is better than the current state, then make it the new current state.
c. If it is not better than the current state, then go to step 2.
(A sketch of this loop follows below.)
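A minimal sketch of the hill-climbing idea above, written as the steepest-ascent variant (always move to the best neighbour); the neighbour generator, value function, and names are illustrative assumptions.

import random

def hill_climbing(initial, neighbours, value, is_goal=lambda s: False):
    """Greedy local search: move to a better neighbour until none exists (or a goal is found)."""
    current = initial
    while True:
        if is_goal(current):
            return current
        candidates = neighbours(current)
        if not candidates:
            return current                      # no new operators left
        best = max(candidates, key=value)
        if value(best) <= value(current):
            return current                      # local maximum, plateau, or ridge: stop
        current = best                          # the better state becomes the new current state

# Example usage: maximize f(x) = -(x - 7)^2 over the integers, stepping by +/- 1
value = lambda x: -(x - 7) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climbing(random.randint(0, 20), neighbours, value))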
Hill Climbing Algorithm
• Problems:
– Local maxima: search halts prematurely
– Plateaus: search conducts a random walk
– Ridges: search oscillates with slow progress (resulting in a set of local maxima)
Adversarial Search
• So far we have studied search strategies that are associated with a single agent aiming to find a solution, often expressed as a sequence of actions.
• But there might be situations where more than one agent is searching for a solution in the same search space; this usually occurs in game playing.
• An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the other agents and plays against them.
• Each agent needs to consider the actions of the other agents and the effect of those actions on its own performance.
Formalization of the problem:
• Initial state: specifies how the game is set up at the start.
• Player(s): specifies which player has the move in a state.
• Actions(s): returns the set of legal moves in a state.
• Result(s, a): the transition model, which specifies the result of a move in a state.
• Terminal-Test(s): true if the game is over, and false otherwise. States where the game has ended are called terminal states.
• Utility(s, p): a utility function gives the final numeric value for a game that ends in terminal state s for player p. It is also called a payoff function. For chess, the outcomes are a win, loss, or draw, with payoff values +1, 0, or 1/2. For tic-tac-toe, the utility values are +1, -1, and 0.
Game tree for tic-tac-toe
◻ From the initial state, MAX has nine possible moves. Play alternates between MAX placing an X and MIN placing an O until we reach leaf nodes corresponding to terminal states such that one player has three in a row or all the squares are filled.
Issue with the game tree
For tic-tac-toe the game tree is relatively small: fewer than 9! = 362,880 terminal nodes. But for chess there are over 10^40 nodes, so the game tree is best thought of as a theoretical construct that we cannot realize in the physical world.
Min-Max Algorithm
● The mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory.
● It provides an optimal move for the player, assuming that the opponent is also playing optimally.
● The mini-max algorithm uses recursion to search through the game tree.
● The min-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax decision for the current state.
Min-Max Algorithm
1. In this algorithm two players play the game; one is called MAX and the other is called MIN.
2. Both players fight it out so that the opponent gets the minimum benefit while they get the maximum benefit.
3. Both players of the game are opponents of each other, where MAX will select the maximized value and MIN will select the minimized value.
4. The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
5. The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds. (A minimax sketch follows below.)
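A recursive minimax sketch over a game tree given as nested lists of terminal utilities (the representation and helper names are illustrative; the leaf values are the ones used in the worked tree below, where D = {-1, 4}, E = {2, 6}, F = {-3, -5}, G = {0, 7}).

def minimax(node, maximizing):
    """Return the minimax value of a game-tree node; leaves are numeric utilities."""
    if not isinstance(node, list):            # terminal state: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Worked tree from the slides: A -> B, C; B -> D, E; C -> F, G; leaves are utilities
D, E, F, G = [-1, 4], [2, 6], [-3, -5], [0, 7]
B, C = [D, E], [F, G]
A = [B, C]
print(minimax(A, maximizing=True))            # 4: max(min(4, 6), min(-3, 7)) = max(4, -3)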
Working of the Min-Max Algorithm:
● The working of the minimax algorithm can be easily described using an example. Below we have taken an example of a game tree representing a two-player game.
● In this example there are two players, one called Maximizer and the other called Minimizer.
● Maximizer will try to get the maximum possible score, and Minimizer will try to get the minimum possible score.
● This algorithm applies DFS, so in this game tree we have to go all the way down to the leaves to reach the terminal nodes.
● At the terminal nodes the terminal values are given, so we compare those values and backtrack up the tree until the initial state is reached. Following are the main steps involved in solving the two-player game tree:
Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the tree diagram below, let A be the initial state of the tree. Suppose the maximizer takes the first turn, which has a worst-case initial value of -infinity, and the minimizer takes the next turn, which has a worst-case initial value of +infinity.
Min-Max Algorithm:
Step 2: Now we first find the utility values for the Maximizer. Its initial value is -∞, so we compare each terminal-state value with the Maximizer's initial value and determine the higher node values; it finds the maximum among them all.
● For node D: max(-1, -∞) => max(-1, 4) = 4
● For node E: max(2, -∞) => max(2, 6) = 6
● For node F: max(-3, -∞) => max(-3, -5) = -3
● For node G: max(0, -∞) => max(0, 7) = 7
Min-Max Algorithm:
Step 3: In the next step it is the minimizer's turn, so it compares all node values with +∞ and finds the third-layer node values.
● For node B = min(4, 6) = 4
● For node C = min(-3, 7) = -3
Limitation of the minimax algorithm:
The main drawback of the minimax algorithm is that it gets really slow for complex games such as chess, Go, etc. These types of games have a huge branching factor, and the player has lots of choices to decide among.
This limitation of the minimax algorithm can be improved upon with alpha-beta pruning, which is discussed in a later topic.
Step 4: Now it is the Maximizer's turn, and it will again choose the maximum of all node values and find the maximum value for the root node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will be more than 4 layers.

● For node A: max(4, -3) = 4

• The algorithm first recurses down to the bottom-left leaf nodes of the tree and uses the UTILITY function on them to discover that their values are 3, 12, and 8, respectively.
Properties of Min-Max
• Complete: the min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
• Optimal: the min-max algorithm is optimal if both opponents play optimally.
• Time complexity: as it performs DFS over the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
• Space complexity: the space complexity of the min-max algorithm is also similar to DFS, which is O(bm).

Alpha-Beta Pruning
• The problem with minimax search is that the number of game states it has to examine is exponential in the depth of the tree.
• Unfortunately, we cannot eliminate the exponent, but it turns out we can effectively cut it in half.
• The trick is that it is possible to compute the correct minimax decision without looking at every node in the game tree.
• That is, we can borrow the idea of pruning.
• When applied to a standard minimax tree, alpha-beta pruning returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.
Alpha-Beta Pruning
• MINIMAX(root) = max(min(3, 12, 8), min(2, x, y), min(14, 5, 2))
               = max(3, min(2, x, y), 2)
               = max(3, z, 2)   where z = min(2, x, y) ≤ 2
               = 3
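A sketch of minimax with alpha-beta pruning on the same kind of nested-list game tree used in the minimax sketch above. The tree reproduces the leaf values from the computation above; 4 and 6 stand in for the unexamined x and y (their values never matter because that branch is pruned).

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax value with alpha-beta pruning; leaves are numeric utilities."""
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                      # beta cutoff: MIN will never allow this branch
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                          # alpha cutoff: MAX already has a better option
    return value

# Tree from the computation above; the 4 and 6 in the middle branch get pruned
root = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(root, maximizing=True))    # 3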
Problem Characteristics
Is the problem decomposable? Break the problem into smaller problems, each of which can be solved by using specific rules.
• This lets us solve large problems easily.
Can solution steps be ignored or undone?
1. Ignorable (e.g. theorem proving)
2. Recoverable (e.g. 8-puzzle)
3. Irrecoverable (e.g. chess)
Is the universe predictable?
• Certain outcome (open loop) vs. uncertain outcome (closed loop).
• Open loop: the result of the action can be predicted perfectly. Planning can be used to generate a sequence of operators that is guaranteed to lead to a solution.
• Closed loop: planning has only a good probability of leading to a solution. The number of solution paths that need to be explored increases exponentially with the number of points at which the outcome cannot be predicted.
• The hardest type of problem to solve is the irrecoverable, uncertain-outcome one.

Problem Characteristics
• Is a Good Solution Absolute or Relative
• Is the solution a state or a path
• What is the Role of Knowledge
• Does the Task require Interaction with a Person
• Problem Classification
Production System Characteristics
• Monotonic production system: one in which the application of a rule never prevents the later application of another rule that could also have been applied at the time the first rule was selected.
• Non-monotonic production system: one in which this property does not hold.
• Partially commutative production system: one in which, if a sequence of rules transforms state x into state y, then any allowable permutation of those rules also transforms state x into state y.
• Commutative production system: one that is both monotonic and partially commutative.
Issues in the Design of Search Programs
• Every search process can be viewed as a traversal of a tree structure; the search process must find a path that connects an initial state with one or more final states.
• The tree can be constructed from the rules that define the allowable moves in the problem space.
• But the tree is too large to be explored fully, so it is represented implicitly in the rules, and only those parts that we decide to explore are generated explicitly.
Issues in design
• The direction in which to conduct the search (forward vs. backward reasoning).
• How to select applicable rules (matching).
• How to represent each node of the search process (the knowledge representation problem).
