Module 3
Implementation:
• Order the nodes in the fringe in increasing order of cost.
• Special cases:
greedy best-first search
A* search
Greedy Best First Search 1/2
• In BFS and DFS, when we are at a node, we can consider any of the
adjacent nodes as the next node.
• So both BFS and DFS blindly explore paths without considering any
cost function.
• With the help of best-first search, at each step, we can choose the
most promising node.
• We start from source "S" and search for goal "I" using given costs.
Example 2/6
• PriorityQueue initially contains S
• Remove S from the PriorityQueue and add the unvisited neighbors of S
to the PriorityQueue
• PriorityQueue now contains {A, C, B} (C is placed before B because C
has a lower cost)
Example 3/6
• After expanding A, the PriorityQueue contains {C, B, E, D}
Example 4/6
• After expanding C, the PriorityQueue contains {B, H, E, D}
Example 5/6
• After expanding B, the PriorityQueue contains {H, E, D, F, G}
Example 6/6
• Expansion continues until the goal I is removed from the PriorityQueue
• Disadvantages:
It can behave as an unguided depth-first search in the worst case scenario.
• Note: Performance of the algorithm depends on how well the cost or evaluation
function is designed.
Best-first Search Algorithm (Greedy
Search):
• Greedy best-first search algorithm always selects the path
which appears best at that moment.
• It combines aspects of depth-first search and breadth-first search.
• It uses a heuristic function to guide the search; best-first search
allows us to take advantage of both algorithms.
• With the help of best-first search, at each step, we can choose
the most promising node.
• In the best-first search algorithm, we expand the node which is
closest to the goal node, where closeness is estimated by a
heuristic function, i.e. f(n) = h(n), where h(n) = estimated cost
from node n to the goal.
The greedy best-first algorithm is implemented using a priority queue.
Best first search algorithm:
Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, Stop and return failure.
Step 3: Remove the node n from the OPEN list which has the lowest
value of h(n), and place it in the CLOSED list.
Step 4: Expand the node n, and generate the successors of node n.
Step 5: Check each successor of node n to find whether any of them
is a goal node. If any successor is a goal node, then return
success and terminate the search; else proceed to Step 6.
Step 6: For each successor node, the algorithm computes the evaluation
function f(n), and then checks whether the node is already in either
the OPEN or the CLOSED list. If the node is in neither list, add it to
the OPEN list.
Step 7: Return to Step 2.
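A minimal Python sketch of these steps, using the heapq module as the priority queue; the graph, the heuristic table h, and the start/goal names are hypothetical inputs chosen for illustration:

```python
import heapq

def greedy_best_first_search(graph, h, start, goal):
    """Greedy best-first search: always expand the node with the lowest h(n).

    graph: dict mapping a node to an iterable of neighbor nodes
    h:     dict mapping a node to its estimated cost to the goal
    """
    open_list = [(h[start], start)]      # OPEN list as a priority queue keyed on h(n)
    closed = set()                       # CLOSED list
    parent = {start: None}               # remembers how each node was reached

    while open_list:
        _, n = heapq.heappop(open_list)  # Step 3: node with lowest h(n)
        if n == goal:                    # goal test; reconstruct the path
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
        closed.add(n)
        for succ in graph[n]:            # Step 4: generate successors
            if succ not in closed and succ not in parent:
                parent[succ] = n         # Step 6: add unseen nodes to OPEN
                heapq.heappush(open_list, (h[succ], succ))
    return None                          # Step 2: OPEN list empty, failure
```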
Advantages:
• Best-first search can switch between BFS and DFS, thus gaining the
advantages of both algorithms.
• This algorithm can be more efficient than BFS and DFS.
Disadvantages:
• It can behave as an unguided depth-first search in the worst
case scenario.
• It can get stuck in a loop as DFS.
• This algorithm is not optimal.
• Example: Consider the search problem below, which we will traverse
using greedy best-first search. At each iteration, a node is expanded
using the evaluation function f(n) = h(n), whose values are given in
the accompanying table.
Stochastic Hill Climbing:
• Stochastic hill climbing does not examine all of its neighbors
before moving. Rather, this search algorithm selects one
neighbor node at random and decides whether to move to it or to
examine another state.
Algorithm
• Evaluate the initial state. If it is a goal state then stop and return
success. Otherwise, make the initial state the current state.
• Repeat these steps until a solution is found or the current state
does not change.
• Select an operator that has not yet been applied to the current state.
• Apply the successor function to the current state and generate all the
neighbor states.
• Among the generated neighbor states that are better than the
current state, choose one at random (or based on some probability
function).
• If the chosen state is the goal state, then return success; else make
it the current state and repeat from step 2.
• Exit from the function.
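A minimal Python sketch of this loop; the `initial` state, the `neighbors` successor function, and the `value` objective function are hypothetical caller-supplied inputs:

```python
import random

def stochastic_hill_climbing(initial, neighbors, value, max_steps=1000):
    """Stochastic hill climbing: rather than examining every neighbor for
    the best move, choose at random from among the neighbors that improve
    on the current state."""
    current = initial
    for _ in range(max_steps):
        uphill = [s for s in neighbors(current) if value(s) > value(current)]
        if not uphill:              # no improving neighbor: local maximum
            return current
        current = random.choice(uphill)
    return current
```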
State Space diagram for Hill Climbing
• The state-space diagram is a graphical representation of the set
of states our search algorithm can reach versus the value of our
objective function (the function we wish to maximize).
• X-axis: denotes the state space, i.e., the states or configurations
our algorithm may reach.
• Y-axis: denotes the values of the objective function corresponding
to a particular state.
• The best solution will be a state where the objective function has
its maximum value (the global maximum).
• Let's consider a simple one-dimensional hill climbing problem.
Imagine you are standing on a mountainous terrain, and your
goal is to reach the highest point. You can only move
horizontally (along the x-axis), and your objective is to find the
peak.
• Here's a simple representation of the terrain as a function:
• f(x) = −(x − 3)² + 9
• Let's say you start at x = 0, where f(x) = 0. The algorithm would
work as follows:
1. Evaluate the current position by computing f(x).
2. Determine the neighboring points (small steps in the positive or
negative x-direction).
3. Move to the neighbor with the highest f(x) value.
4. Repeat steps 1-3 until you reach a peak or a point where all
neighboring points have lower f(x) values.
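A short Python sketch of this procedure on the terrain above (the step size is an arbitrary choice for illustration):

```python
def f(x):
    # the terrain from the slide: f(x) = -(x - 3)**2 + 9, peak at x = 3
    return -(x - 3) ** 2 + 9

def hill_climb_1d(x=0.0, step=0.1):
    """1-D hill climbing: move to the better neighbor until neither
    neighbor improves on the current position."""
    while True:
        best = max([x - step, x + step], key=f)  # steps 2-3: best neighbor
        if f(best) <= f(x):                      # step 4: peak reached
            return x, f(x)
        x = best

print(hill_climb_1d())   # converges near (3.0, 9.0)
```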
More Examples
• 2D Hill Climbing
• Travelling Salesman Problem: The Traveling Salesman Problem involves
finding the shortest possible route that visits a set of cities and
returns to the original city. The objective is to minimize the total
distance traveled. The hill climbing algorithm can be applied to
explore different routes and improve the solution iteratively (see
the sketch after this list).
• Knapsack Problem: The Knapsack Problem is a classic
optimization problem where you have a set of items, each with a
weight and a value, and you want to determine the combination of
items to include in a knapsack to maximize the total value without
exceeding a given weight limit. Hill climbing can be used to explore
different combinations of items and improve the solution.
• Chess Game (Local Search): In chess, hill climbing can be
applied to improve the evaluation function used by a chess-
playing algorithm. The algorithm can make moves and evaluate
the resulting board positions, iteratively selecting moves that lead
to better positions.
• Network Routing: In network routing, the goal is to find the
optimal path for data packets to travel from a source to a
destination. Hill climbing can be applied to explore different
routes and improve the efficiency of data transmission.
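As an illustration of the Travelling Salesman item above, here is a sketch of hill climbing with a 2-opt neighborhood (reverse one segment of the tour); the random city coordinates are made up for the demo:

```python
import math
import random

def tour_length(tour, pts):
    # total length of the closed tour, returning to the starting city
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def hill_climb_tsp(pts, iters=10000):
    """Hill climbing for TSP: repeatedly propose a 2-opt move (reverse a
    segment of the tour) and keep it only if it shortens the tour."""
    tour = list(range(len(pts)))
    random.shuffle(tour)
    best = tour_length(tour, pts)
    for _ in range(iters):
        i, j = sorted(random.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        d = tour_length(cand, pts)
        if d < best:               # accept only improving moves
            tour, best = cand, d
    return tour, best

cities = [(random.random(), random.random()) for _ in range(20)]
print(hill_climb_tsp(cities))
```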
Logical Agents: Knowledge-Based Agents
• In the context of artificial intelligence, a knowledge-based agent
is an intelligent agent that uses knowledge to make decisions
and solve problems.
• These agents are designed to operate on a knowledge base,
which is a collection of information that the agent can use to
reason and make informed decisions.
• The knowledge base typically includes facts, rules, and other
pieces of information relevant to the agent's domain.
• Logical agents, a subset of knowledge-based agents, use
logical reasoning to draw conclusions and make decisions
based on the information available in their knowledge base.
Here are some key components and characteristics of logical
agents:
1. Knowledge Base:
• The knowledge base is a central component of a logical agent. It
contains statements or propositions about the world; these
statements can be facts, rules, or a combination of both.
• Facts represent information that is assumed to be true in the agent's
domain. For example, "Socrates is a human" could be a fact in a
knowledge base.
• Rules are statements that define relationships between facts.
For instance, a rule might state, "If someone is human, then
that person is mortal."
2. Inference Engine:
• The inference engine is responsible for drawing conclusions
from the information in the knowledge base. It applies logical
reasoning rules and deduction to infer new information.
• Common inference methods include modus ponens (applying a
rule to draw a conclusion) and resolution (resolving conflicting
information).
3. Logical Representation:
• Logical agents often use formal languages to represent
knowledge in a structured and unambiguous way. Propositional
logic and first-order logic are commonly employed for this
purpose.
• Propositional logic deals with propositions (statements that are
either true or false) and logical operators (such as AND, OR,
and NOT).
• First-order logic extends propositional logic by introducing
variables, quantifiers (such as ∀ for "for all" and ∃ for "there
exists"), and predicates.
4. Knowledge Acquisition:
• Logical agents may have mechanisms for acquiring new
knowledge from the environment. This could involve learning
from observations, interacting with users, or incorporating
information from external sources.
5. Example: Expert Systems:
• Expert systems are a practical application of logical agents.
They are designed to mimic the decision-making capabilities of
a human expert in a specific domain.
• Expert systems use a knowledge base of facts and rules, and
their inference engine simulates the logical reasoning process
to provide advice or solutions in the given domain.
Let's illustrate the usage of logical agents with a simple example
involving a knowledge base, rules, and logical inference.
Consider a logical agent designed to make decisions about
whether to play tennis based on weather conditions. The
knowledge base includes facts and rules related to playing
tennis.
Knowledge Base:
• Fact: Sunny
• Fact: Warm
• Rule: If Sunny and Warm, then PlayTennis
Here, "Sunny" and "Warm" are considered facts, and the rule
states that if it's sunny and warm, the logical agent should play
tennis.
Inference Engine: The inference engine uses the knowledge base to
draw conclusions. Let's say the agent observes that it is both sunny
and warm:
1. Observation: Fact: Sunny, Fact: Warm
2. Inference: Rule: If Sunny and Warm, then PlayTennis (apply modus
ponens)
3. Conclusion: Action: PlayTennis
In this case, the logical agent, based on the logical rule, infers that it
should play tennis because the observed conditions match the rule's
antecedents.
Let's consider another example:
• Knowledge Base:
• Fact: Rainy
• Rule: If Rainy, then NoPlayTennis
Observation: Fact: Rainy
1. Inference: Rule: If Rainy, then NoPlayTennis (apply modus ponens)
2. Conclusion: Action: NoPlayTennis
• In this case, the agent infers that it should not play tennis because
the observed condition (rainy) matches the rule, leading to the
conclusion of not playing tennis.
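The two tennis examples can be captured in a few lines of Python; this is a naive forward-chaining sketch, with the rule format and names chosen just for this illustration:

```python
def forward_chain(facts, rules):
    """Apply modus ponens repeatedly: whenever every antecedent of a rule
    is a known fact, add the rule's conclusion as a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            if antecedents <= facts and conclusion not in facts:
                facts.add(conclusion)   # modus ponens
                changed = True
    return facts

rules = [({"Sunny", "Warm"}, "PlayTennis"),
         ({"Rainy"}, "NoPlayTennis")]
print(forward_chain({"Sunny", "Warm"}, rules))  # derives PlayTennis
print(forward_chain({"Rainy"}, rules))          # derives NoPlayTennis
```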
Logical agents
Introduction
• The central component of a knowledge-based
agent is its knowledge base, or KB.
• The knowledge base is a set of sentences.
• Each sentence is expressed in a language called
a knowledge representation language.
• There must be a way to add new sentences to the
knowledge base and to query what is known.
• The standard names for these operations are TELL and ASK.
• Both operations may involve inference, that is,
deriving new sentences from old ones.
• At the knowledge level, we need to specify only
what the agent knows and what its goals are, in
order to fix its behavior.
• For example, an automated taxi might have the
goal of taking a passenger from San Francisco
to Marin County and might know that the Golden
Gate Bridge is the only link between the two
locations.
• Then we can expect it to cross the Golden Gate
Bridge because it knows that that will achieve
its goal.
• Notice that this analysis is independent of how
the taxi works at the implementation level.
• It doesn’t matter whether its geographical
knowledge is implemented as linked lists or
pixel maps, or whether it reasons by
manipulating strings of symbols stored in
registers or by propagating noisy signals in a
network of neurons.
WUMPUS World
• The Wumpus World is a classic artificial intelligence problem
used to illustrate the concepts of knowledge representation and
reasoning. It is based on Hunt the Wumpus, an early computer
game created by Gregory Yob in 1973.
• In the Wumpus World, the agent (an AI or a player) navigates
through a grid-based environment in search of treasure while
avoiding dangers. The environment contains pits, a wumpus (a
mythical creature that can be dangerous), and gold. The agent
has a limited set of actions it can perform, such as moving to
adjacent rooms, shooting arrows to kill the wumpus, and
grabbing the gold.
Here are some key elements of the Wumpus World:
1. Rooms and Grid: The environment is represented as a grid of
rooms, and the agent can move between adjacent rooms.
2. Pits: Some rooms contain pits, and if the agent falls into a pit, it
dies.
3. Wumpus: The wumpus is present in one of the rooms. If the agent
enters the room with the wumpus without shooting it first, it gets
eaten and dies.
4. Gold: There is a room with gold. The goal of the agent is to find and
grab the gold.
5. Arrows: The agent has a limited number of arrows. It can shoot
arrows to kill the wumpus, but the arrows are finite.
6. Percept: The agent receives sensory information about its
surroundings. For example, it can sense a breeze if there is a pit
nearby or smell the wumpus if it is in an adjacent room.
7. Actions: The agent can perform actions like moving, shooting
arrows, grabbing gold, and climbing out of the cave.
Challenges
• The challenge in the Wumpus World is to design an intelligent
agent that can navigate through the environment, avoid
dangers, and successfully achieve the goal (grabbing the gold)
while maximizing its utility and minimizing risks.
• The precise definition of the task environment is
given, as by the PEAS description:
• Performance measure: +1000 for climbing out of the
cave with the gold, –1000 for falling into a pit
or being eaten by the wumpus, –1 for each action
taken and –10 for using up the arrow. The game
ends either when the agent dies or when the agent
climbs out of the cave.
• Environment: A 4 × 4 grid of rooms. The agent
always starts in the square labeled [1,1], facing
to the right. The locations of the gold and the
wumpus are chosen randomly, with a uniform
distribution, from the squares other than the
start square. In addition, each square other than
the start can be a pit, with probability 0.2.
• Actuators: The agent can move Forward, TurnLeft by
90°, or TurnRight by 90°.
• The agent dies a miserable death if it enters a
square containing a pit or a live wumpus. (It is
safe, albeit smelly, to enter a square with a dead
wumpus.)
• If an agent tries to move forward and bumps into a
wall, then the agent does not move.
• The action Grab can be used to pick up the gold if
it is in the same square as the agent.
• The action Shoot can be used to fire an arrow in a
straight line in the direction the agent is
facing.
• The arrow continues until it either hits (and
hence kills) the wumpus or hits a wall.
• The agent has only one arrow, so only the first
Shoot action has any effect. Finally, the action
Climb can be used to climb out of the cave, but
only from square [1,1].
• Sensors: The agent has five sensors, each of
which gives a single bit of information:
• In the square containing the wumpus and in the
directly (not diagonally) adjacent squares, the
agent will perceive a Stench.
• In the squares directly adjacent to a pit, the
agent will perceive a Breeze.
• In the square where the gold is, the agent will
perceive a Glitter.
• When an agent walks into a wall, it will perceive
a Bump.
• When the wumpus is killed, it emits a woeful
Scream that can be perceived anywhere in the
cave.
The percepts will be given to the agent program
in the form of a list of five symbols; for
example, if there is a stench and a breeze, but
no glitter, bump, or scream, the agent program
will get [Stench, Breeze, None, None, None].
• We use an informal knowledge representation
language consisting of writing down symbols in a
grid (as in Figures 7.3 and 7.4).
• The agent’s initial knowledge base contains the
rules of the environment, as described previously;
in particular, it knows that it is in [1,1] and
that [1,1] is a safe square; we denote that with
an “A” and “OK,” respectively, in square [1,1].
• The first percept is [None, None, None, None,
None], from which the agent can conclude that its
neighboring squares, [1,2] and [2,1], are free of
dangers—they are OK. Figure 7.3(a) shows the
agent’s state of knowledge at this point.
• A cautious agent will move only into a square that
it knows to be OK. Let us suppose the agent
decides to move forward to [2,1]. The agent
perceives a breeze (denoted by “B”) in [2,1], so
there must be a pit in a neighboring square. The
pit cannot be in [1,1], by the rules of the game,
so there must be a pit in [2,2] or [3,1] or both.
• The agent perceives a stench in [1,2], resulting in
the state of knowledge shown in Figure 7.4(a). The
stench in [1,2] means that there must be a wumpus
nearby.
• But the wumpus cannot be in [1,1], by the rules of
the game, and it cannot be in [2,2] (or the agent
would have detected a stench when it was in [2,1]).
Therefore, the agent can infer that the wumpus is in
[1,3]. The notation W! indicates this inference.
Moreover, the lack of a breeze in [1,2] implies that
there is no pit in [2,2]. Yet the agent has already
inferred that there must be a pit in either [2,2] or
[3,1], so this means it must be in [3,1]. This is a
fairly difficult inference, because it combines
knowledge gained at different times in different
places and relies on the lack of a percept to make
one crucial step
• The agent has now proved to itself that there is
neither a pit nor a wumpus in [2,2], so it is OK to
move there.
LOGIC
• knowledge bases consist of sentences.
• These sentences are expressed according to the
syntax of the representation language, which
specifies all the sentences that are well formed.
• The notion of syntax is clear enough in
ordinary arithmetic: “x + y = 4” is a well-
formed sentence, whereas “x4y+ =” is not.
• A logic must also define the semantics or
meaning of sentences. The semantics defines the
truth of each sentence with respect to each
possible world.
• For example, the semantics for arithmetic
specifies that the sentence “x + y = 4” is true
in a world where x is 2 and y is 2, but false
in a world where x is 1 and y is 1.
• In standard logics, every sentence must be
either true or false in each possible world—
there is no “in between.”
• When we need to be precise, we use the term model
in place of “possible world.”
• Whereas possible worlds might be thought of as
(potentially) real environments that the agent
might or might not be in, models are mathematical
abstractions, each of which simply fixes the truth
or falsehood of every relevant sentence.
• Informally, we may think of a possible world as,
for example, having x men and y women sitting at a
table playing bridge, and the sentence x + y = 4
is true when there are four people in total.
• Formally, the possible models are just all
possible assignments of real numbers to the
variables x and y.
• Each such assignment fixes the truth of any
sentence of arithmetic whose variables are x and y.
• This involves the relation of logical entailment
between sentences—the idea that a sentence follows
logically from another sentence.
• In mathematical notation, we write α |= β to mean
that the sentence α entails the sentence β.
• The formal definition of entailment is this: α |=
β if and only if, in every model in which α is
true, β is also true.
• If we let M(α) denote the set of all models of α,
we can write α |= β if and only if M(α) ⊆ M(β).
• The relation of entailment is familiar from
arithmetic; we are happy with the idea that the
sentence x = 0 entails the sentence xy = 0.
Obviously, in any model where x is zero, it is the
case that xy is zero (regardless of the value of
y).
• We can apply the same kind of analysis to the
wumpus-world reasoning example given in the
preceding section.
• Consider the situation in Figure 7.3(b): the
agent has detected nothing in [1,1] and a
breeze in [2,1].
• These percepts, combined with the agent’s
knowledge of the rules of the wumpus world,
constitute the KB. The agent is interested
(among other things) in whether the adjacent
squares [1,2], [2,2], and [3,1] contain pits.
• Each of the three squares might or might not
contain a pit, so (for the purposes of this
example) there are 2³ = 8 possible models.
• The KB can be thought of as a set of sentences or as a
single sentence that asserts all the individual sentences.
• The KB is false in models that contradict what the agent
knows— for example, the KB is false in any model in which
[1,2] contains a pit, because there is no breeze in [1,1].
• There are in fact just three models in which the KB is true,
and these are shown surrounded by a solid line in Figure 7.5.
• Now let us consider two possible conclusions:
α1 = “There is no pit in [1,2].”
α2 = “There is no pit in [2,2].”
We have surrounded the models of α1 and α2 with dotted lines
in Figures 7.5(a) and 7.5(b), respectively.
By inspection, we see the following: in every model in which
KB is true, α1 is also true. Hence, KB |= α1: there is no pit
in [1,2].
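This entailment-by-inspection argument is exactly model checking, and it is small enough to run directly; here is a sketch that enumerates the eight models over pits in [1,2], [2,2], and [3,1]:

```python
from itertools import product

def kb(p12, p22, p31):
    # KB from the situation above: no breeze in [1,1] means no pit in
    # [1,2]; a breeze in [2,1] means a pit in [2,2] or [3,1] (or both)
    return (not p12) and (p22 or p31)

alpha1 = lambda p12, p22, p31: not p12   # "there is no pit in [1,2]"
alpha2 = lambda p12, p22, p31: not p22   # "there is no pit in [2,2]"

def entails(kb, alpha):
    # KB |= alpha iff alpha is true in every model in which KB is true
    return all(alpha(*m) for m in product([True, False], repeat=3) if kb(*m))

print(entails(kb, alpha1))   # True:  KB |= alpha1
print(entails(kb, alpha2))   # False: KB does not entail alpha2
```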
Propositional Logic
• We now present a powerful logic called propositional logic.
• The syntax of propositional logic defines the
allowable sentences.
• The atomic sentences consist of a single
proposition symbol.
• Each such symbol stands for a proposition that
can be true or false. We use symbols that start
with an uppercase letter and may contain other
letters or subscripts, for example: P, Q, R,
W1,3 and North.
• The names are arbitrary but are often chosen to
have some mnemonic value; we use W1,3 to stand
for the proposition that the wumpus is in [1,3].
• There are two proposition symbols with fixed
meanings:
• True is the always-true proposition and False
is the always-false proposition.
• Complex sentences are constructed from simpler
sentences, using parentheses and logical
connectives. There are five connectives in
common use: ¬ (not), ∧ (and), ∨ (or),
⇒ (implies), and ⇔ (if and only if).
A simple knowledge base
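Following the usual wumpus-world development, a simple knowledge base for the pit problem above can be written with symbols Px,y (there is a pit in [x,y]) and Bx,y ([x,y] is breezy):
• R1: ¬P1,1 (there is no pit in [1,1])
• R2: B1,1 ⇔ (P1,2 ∨ P2,1) (a square is breezy if and only if a neighboring square has a pit)
• R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
• R4: ¬B1,1 (percept: no breeze in [1,1])
• R5: B2,1 (percept: breeze in [2,1])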
Propositional Theorem Proving