
KNOWLEDGE REPRESENTATION

Humans are good at understanding, reasoning about, and interpreting knowledge. Using what they know, they perform various actions in the real world. How machines can do the same is the subject of knowledge representation and reasoning. Hence we can describe knowledge representation as follows:
 Knowledge representation and reasoning (KR, KRR) is the part of artificial intelligence concerned with how AI agents think and how thinking contributes to their intelligent behaviour.
 It is responsible for representing information about the real world so that a computer can understand it and utilize this knowledge to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.
 It also describes how we can represent knowledge in artificial intelligence. Knowledge representation is not just storing data in some database; it also enables an intelligent machine to learn from that knowledge and experience so that it can behave intelligently like a human.
 An AI system represents various kinds of knowledge, such as objects, events, and performance.
Knowledge: Knowledge is awareness or familiarity gained through experience of facts, data, and situations.
TYPES OF KNOWLEDGE:
1. Declarative Knowledge:
 Declarative knowledge is knowledge about something.
 It includes concepts, facts, and objects.
 It is also called descriptive knowledge and is expressed in declarative sentences.
 It is simpler than procedural knowledge.
2. Procedural Knowledge:
 It is also known as imperative knowledge.
 Procedural knowledge is a type of knowledge which is responsible for knowing how
to do something.
 It can be directly applied to any task.
 It includes rules, strategies, procedures, agendas, etc.
 Procedural knowledge depends on the task on which it can be applied.
3. Meta-knowledge:
 Knowledge about the other types of knowledge is called Meta-knowledge.
4. Heuristic knowledge:
 Heuristic knowledge represents the knowledge of experts in a field or subject.
 Heuristic knowledge consists of rules of thumb based on previous experience and awareness of approaches that usually work well but are not guaranteed.
5. Structural knowledge:
 Structural knowledge is basic knowledge used in problem-solving.
 It describes relationships between various concepts such as kind of, part of, and
grouping of something.
 It describes the relationship that exists between concepts or objects.

AI knowledge cycle:
An Artificial intelligence system has the following components for displaying intelligent
behavior:

1. Perception
2. Learning
3. Knowledge Representation and Reasoning
4. Planning
5. Execution

 The components above show how an AI system can interact with the real world and which components help it to exhibit intelligence.
 An AI system has a perception component through which it retrieves information from its environment. This can be visual, audio, or another form of sensory input.
 The learning component is responsible for learning from the data captured by the perception component.
 In the complete cycle, the main components are knowledge representation and reasoning. These two components are involved in making the machine show human-like intelligence. They are distinct from each other but also closely coupled.
 Planning and execution depend on the analysis performed by knowledge representation and reasoning.

Approaches to knowledge representation:


There are mainly four approaches to knowledge representation, which are given below:

1. Simple relational knowledge:


 It is the simplest way of storing facts and uses the relational method.
 Each fact about a set of objects is set out systematically in columns.
 This approach is common in database systems, where the relationship between different entities is represented.
 This approach offers little opportunity for inference.

Example:

Player     Weight    Age
Player1    65        23
Player2    58        18
Player3    75        24
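
The same facts can be kept in a relational structure in code. A minimal Python sketch, where each fact is a row (tuple) in a relation; the column meanings and units are assumptions for illustration:

    players = [
        # (name, weight_kg, age) -- columns mirror the table above
        ("Player1", 65, 23),
        ("Player2", 58, 18),
        ("Player3", 75, 24),
    ]

    # Simple lookups are possible, but such flat facts support little inference.
    heavy_players = [name for name, weight, age in players if weight > 60]
    print(heavy_players)   # ['Player1', 'Player3']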

2. Inheritable knowledge:
 In the inheritable knowledge approach, all data must be stored in a hierarchy of classes.
 All classes should be arranged in a generalized form or a hierarchical manner.
 In this approach, we apply the inheritance property.
 Elements inherit values from other members of a class.
 This approach contains inheritable knowledge which shows a relation between an instance and a class; this is called the instance relation.
 Every individual frame can represent a collection of attributes and their values.
 In this approach, objects and values are represented in boxed nodes.
 We use arrows which point from objects to their values.
 Example:
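A minimal Python sketch of such an inheritance hierarchy; the class names and attribute values are assumed purely for illustration:

    class Person:                  # general class at the top of the hierarchy
        is_living = True           # value inherited by every member

    class Player(Person):          # "kind of" Person
        plays_sport = True         # default value for the whole class

    class AdultPlayer(Player):     # more specific frame inheriting the values above
        age = 23
        weight = 65

    p = AdultPlayer()
    print(p.is_living, p.plays_sport, p.age)   # True True 23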

3. Inferential knowledge:
 The inferential knowledge approach represents knowledge in the form of formal logic.
 This approach can be used to derive more facts.
 It guarantees correctness.
Example: Suppose there are two statements:
Marcus is a man.
All men are mortal.
These can be represented as:
man(Marcus)
∀x man(x) → mortal(x)
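
From these two sentences a new fact follows; a sketch of the standard reasoning steps (universal instantiation followed by Modus Ponens):

    1. man(Marcus)                          (given)
    2. ∀x man(x) → mortal(x)                (given)
    3. man(Marcus) → mortal(Marcus)         (universal instantiation of 2)
    4. mortal(Marcus)                       (Modus Ponens on 1 and 3)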

4. Procedural knowledge:
 The procedural knowledge approach uses small programs and code which describe how to do specific things and how to proceed.
 In this approach, one important rule is used: the If-Then rule.
 We can easily represent heuristic or domain-specific knowledge using this approach.
 However, not all cases can be represented using this approach.
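
A minimal Python sketch of procedural (If-Then) knowledge; the rule and the threshold value are assumptions chosen purely for illustration:

    def advise(temperature_c):
        # If-Then rule encoding one piece of procedural knowledge
        if temperature_c > 38.0:
            return "High temperature: recommend a medical check-up"
        else:
            return "Temperature is in the normal range"

    print(advise(39.2))   # High temperature: recommend a medical check-up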

Knowledge-Based Agent in Artificial intelligence:


 An intelligent agent needs knowledge about the real world in order to make decisions and reason, so that it can act efficiently.
 Knowledge-based agents are agents that can maintain an internal state of knowledge, reason over that knowledge, update it after each observation, and take actions. These agents represent the world with some formal representation and act intelligently.
 Knowledge-based agents are composed of two main parts:
 Knowledge-base and
 Inference system.
A knowledge-based agent must be able to do the following:
 An agent should be able to represent states, actions, etc.
 An agent should be able to incorporate new percepts.
 An agent can update the internal representation of the world.
 An agent can deduce the internal representation of the world.
 An agent can deduce appropriate actions.
The architecture of knowledge-based agent:

 The figure referred to here shows a generalized architecture for a knowledge-based agent. The knowledge-based agent (KBA) takes input from the environment by perceiving it.
 This input is passed to the agent's inference engine, which also communicates with the KB to decide what to do according to the knowledge stored in the KB.
 The learning element of the KBA regularly updates the KB by learning new knowledge.

Knowledge base: The knowledge base is the central component of a knowledge-based agent; it is also known as the KB. It is a collection of sentences (here 'sentence' is a technical term and is not identical to a sentence in English). These sentences are expressed in a language called a knowledge representation language. The knowledge base of a KBA stores facts about the world.
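
A minimal Python sketch of the agent loop that connects the knowledge base and the inference system; the SimpleKB class and the tuple encoding of percepts and actions are assumptions for illustration (a real agent would use a genuine inference engine inside ask):

    class SimpleKB:
        """A toy knowledge base: a set of sentences with tell/ask operations."""
        def __init__(self):
            self.sentences = set()
        def tell(self, sentence):          # add a sentence to the KB
            self.sentences.add(sentence)
        def ask(self, query):              # a stub: a real KB would infer, not just look up
            return query in self.sentences

    def kb_agent(kb, percept, t):
        kb.tell(("percept", percept, t))   # tell the KB what was perceived at time t
        action = "grab" if kb.ask(("percept", "glitter", t)) else "forward"   # ask for an action
        kb.tell(("action", action, t))     # record which action was taken
        return action

    kb = SimpleKB()
    print(kb_agent(kb, "glitter", 0))      # grab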

THE WUMPUS WORLD:


The Wumpus world is a simple world used to illustrate the worth of a knowledge-based agent and to demonstrate knowledge representation. It was inspired by the video game Hunt the Wumpus.
PROBLEM STATEMENT:
 The Wumpus world is a cave with a 4×4 grid of rooms, so there are 16 rooms in total, connected to each other.
 We have a knowledge-based agent who will explore this world.
 The cave has a room with a beast called the Wumpus, which eats anyone who enters that room.
 The Wumpus can be shot by the agent, but the agent has a single arrow.
 In the Wumpus world there are some pit rooms, which are bottomless; if the agent falls into a pit, he will be stuck there forever.
 The exciting thing about this cave is that in one room there is a possibility of finding a heap of gold. So the agent's goal is to find the gold and climb out of the cave without falling into a pit or being eaten by the Wumpus.
 The agent gets a reward if he comes out with the gold, and a penalty if he is eaten by the Wumpus or falls into a pit.

PEAS description of Wumpus world:


 Performance measure:

 +1000 reward points if the agent comes out of the cave with the gold.
 -1000 points penalty for being eaten by the Wumpus or falling into the pit.
 -1 for each action, and -10 for using an arrow.
 The game ends when the agent dies or comes out of the cave.

 Environment:
 A 4×4 grid of rooms.
 The agent is initially in room [1,1], facing right.
 The locations of the Wumpus and the gold are chosen randomly, excluding the first square [1,1].
 Each square of the cave can be a pit with probability 0.2, except the first square.
 Actuators:
 Left turn,
 Right turn
 Move forward
 Grab
 Release
 Shoot.
 Sensors:
 The agent will perceive the stench if he is in the room adjacent to the Wumpus.
(Not diagonally).
 The agent will perceive breeze if he is in the room directly adjacent to the Pit.
 The agent will perceive the glitter in the room where the gold is present.
 The agent will perceive the bump if he walks into a wall.
 When the Wumpus is shot, it emits a horrible scream which can be perceived
anywhere in the cave.
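
In code, the percept at each time step is often packaged as a five-element list in the fixed order [Stench, Breeze, Glitter, Bump, Scream]; a small sketch of that encoding (the exact representation is an assumption for illustration):

    percept = ["Stench", None, None, None, None]   # stench only: the Wumpus is in an adjacent room
    percept = [None, "Breeze", None, None, None]   # breeze only: a pit is in an adjacent room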

The Wumpus world Properties:


 Partially observable: The Wumpus world is partially observable because the agent can only perceive its immediate surroundings, such as adjacent rooms.
 Deterministic: It is deterministic, as the outcome of each action is exactly determined.
 Sequential: The order is important, so it is sequential.
 Static: It is static as Wumpus and Pits are not moving.
 Discrete: The environment is discrete.
 One agent: The environment is a single agent as we have one agent only and
Wumpus is not considered as an agent.

Logic
• Logics are formal languages for representing information such that conclusions can be
drawn
• Logic has two components:
1. Syntax defines the sentences in the language
2. Semantics defines the "meaning" of sentences,
i.e. it defines the truth of a sentence in a world
• E.g., the language of arithmetic
x + 2 ≥ y is a sentence; x2 + y > is not a sentence
x + 2 ≥ y is true if and only if the number x + 2 is no less than the
number y

x + 2 ≥ y is true in a world where x = 7, y = 1


x + 2 ≥ y is false in a world where x = 0, y = 6

Logical Entailment and Inference


KB |= α
• Knowledge base KB entails sentence α if and only if α is true in all
worlds where KB is true.
E.g., the KB containing “I finished AI homework” and “I am happy”
entails “I finished the AI homework or I am happy”

Logicians typically think in terms of models, which are formally structured worlds with respect
to which truth can be evaluated.
 m is a model of a sentence α if α is true in m
 M(α) is the set of all models of α
 Possible worlds ~ models
 Possible worlds: potentially real environments
 Models: mathematical abstractions that establish the truth or falsity of every sentence
Example:
x + y = 4, where x = #men, y = #women
Possible models = all possible assignments of integers to x and y
Consider possible models for KB assuming only pits and a reduced Wumpus
world
 Situation after detecting nothing in [1,1], moving right, detecting breeze in
[2,1]

KB = all possible wumpus-worlds consistent with the observations and the “physics” of the
Wumpus world.
Consider two possible conclusions given the KB:
α1 = "[1,2] is safe"
α2 = "[2,2] is safe"
One possible inference procedure is model checking:
Start with the KB.
Check whether KB ╞ α by checking that α is true in every possible model in which KB is true.

α1= "[1,2] is safe", KB╞α1, proved by model checking

α2= "[2,2] is safe", KB ╞α2


There are some models entailed by KB where α2 is false
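
A minimal Python sketch of entailment by model checking (truth-table enumeration) for the reduced, pits-only example; encoding KB and α as Python functions over a model dictionary is an assumption made for illustration:

    from itertools import product

    def tt_entails(kb, alpha, symbols):
        """True if kb entails alpha, i.e. alpha holds in every model where kb holds."""
        for values in product([True, False], repeat=len(symbols)):
            model = dict(zip(symbols, values))
            if kb(model) and not alpha(model):
                return False            # found a model of KB in which alpha is false
        return True

    # B11 <-> (P12 or P21), together with the percept "no breeze in [1,1]".
    symbols = ["B11", "P12", "P21"]
    kb     = lambda m: (m["B11"] == (m["P12"] or m["P21"])) and not m["B11"]
    alpha1 = lambda m: not m["P12"]      # "[1,2] is safe" (no pit), in the pits-only world
    print(tt_entails(kb, alpha1, symbols))   # True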

Completeness: an inference procedure is complete if it can derive every sentence that the KB entails. Model checking by enumerating all models is sound and complete for propositional logic, because the set of models to examine is finite.

Propositional logic in Artificial intelligence


It is the simplest form of logic, in which all statements are made up of propositions. A proposition is a declarative statement which is either true or false. It is a technique of representing knowledge in logical and mathematical form.

Example:
a) It is Sunday.
b) The Sun rises from West (False proposition)
c) 3+3= 7(False proposition)
d) 5 is a prime number.
Following are some basic facts about propositional logic:
 Propositional logic is also called Boolean logic as it works on 0 and 1.
 In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol to represent a proposition, such as A, B, C, P, Q, R, etc.
 A proposition can be either true or false, but not both.
 Propositional logic consists of proposition symbols and logical connectives.
 These connectives are also called logical operators.
 The propositions and connectives are the basic elements of the propositional logic.
 Connectives can be said as a logical operator which connects two sentences.
 A proposition formula which is always true is called tautology, and it is also called a
valid sentence.
 A proposition formula which is always false is called Contradiction.

 A proposition formula which can take both true and false values (depending on the interpretation) is called a contingency.
 Statements which are questions, commands, or opinions are not propositions; for example, "Where is Rohini?", "How are you?", and "What is your name?" are not propositions.

Syntax of propositional logic:


The syntax of propositional logic defines the allowable sentences for the knowledge
representation. There are two types of Propositions:
1. Atomic Propositions
2. Compound propositions
Atomic Proposition: Atomic propositions are simple propositions. Each consists of a single proposition symbol. These are sentences which must be either true or false.
Example:
a) "2 + 2 is 4" is an atomic proposition, and it is a true fact.
b) "The Sun is cold" is also an atomic proposition, and it is a false fact.
Compound proposition: Compound propositions are constructed by combining simpler or
atomic propositions, using parenthesis and logical connectives.
Example:
a) "It is raining today, and street is wet."
b) "Sridhar is a doctor, and his clinic is in Mumbai."

Logical Connectives:
Logical connectives are used to connect two simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are mainly five connectives, which are given as follows:

1. Negation: ¬P is called the negation of P.
2. Conjunction: P ∧ Q is called a conjunction.
3. Disjunction: P ∨ Q is called a disjunction.
4. Implication: P → Q is called an implication. It is also known as an if-then rule.
5. Bi-conditional: P ⇔ Q is a bi-conditional sentence.

Following is the summarized table for Propositional Logic Connectives:
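
For reference, the five connectives can be summarized as follows (P and Q are atomic propositions):

    Connective    Name              Read as            Example
    ¬             Negation          not                ¬P
    ∧             Conjunction       and                P ∧ Q
    ∨             Disjunction       or                 P ∨ Q
    →             Implication       if-then            P → Q
    ⇔             Bi-conditional    if and only if     P ⇔ Q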

Truth Table:
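
The truth values of the five connectives for atomic propositions P and Q:

    P        Q        ¬P       P ∧ Q    P ∨ Q    P → Q    P ⇔ Q
    True     True     False    True     True     True     True
    True     False    False    False    True     False    False
    False    True     True     False    True     True     False
    False    False    True     False    False    True     True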

Truth table with three propositions:

Precedence of connectives:

Just as with arithmetic operators, there is a precedence order for propositional connectives (logical operators). This order should be followed while evaluating a propositional formula. Following is the precedence order of the operators:

Precedence           Operators
First Precedence     Parenthesis
Second Precedence    Negation
Third Precedence     Conjunction (AND)
Fourth Precedence    Disjunction (OR)
Fifth Precedence     Implication
Sixth Precedence     Bi-conditional
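
For example, under this precedence the formula ¬P ∧ Q ∨ R → S is read as (((¬P) ∧ Q) ∨ R) → S; parentheses can always be added to make the intended grouping explicit.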

Logical equivalence:
Logical equivalence is one of the features of propositional logic. Two propositions are said to be logically equivalent if and only if their columns in the truth table are identical, i.e., they have the same truth value in every model.

 Take two propositions A and B; logical equivalence is written A ⇔ B (or A ≡ B). In the truth table below, the columns for ¬A ∨ B and A → B are identical, hence ¬A ∨ B is equivalent to A → B.
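
The truth table referred to above:

    A        B        ¬A       ¬A ∨ B    A → B
    True     True     False    True      True
    True     False    False    False     False
    False    True     True     True      True
    False    False    True     True      True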

Properties of Operators:
 Commutativity:
P∧ Q= Q ∧ P, or
P ∨ Q = Q ∨ P.
 Associativity:
(P ∧ Q) ∧ R= P ∧ (Q ∧ R),
(P ∨ Q) ∨ R= P ∨ (Q ∨ R)
 Identity element:
P ∧ True = P,
P ∨ False = P.
 Distributive:
P∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R).
P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R).
 DE Morgan's Law:
¬ (P ∧ Q) = (¬P) ∨ (¬Q)
¬ (P ∨ Q) = (¬ P) ∧ (¬Q).
 Double-negation elimination:

¬ (¬P) = P.
Limitations of Propositional logic:
 We cannot represent relations like ALL, some, or none with propositional logic.
Example:
All the girls are intelligent.
Some apples are sweet.
 Propositional logic has limited expressive power.
 In propositional logic, we cannot describe statements in terms of their properties or
logical relationships.

Propositional Theorem proving:

The most well-known rule is Modus Ponens (Latin for "affirming mode"), written

    α ⇒ β,   α
    -----------
         β

which means that whenever sentences of the form α ⇒ β and α are given, the sentence β can be inferred. Another useful rule is And-Elimination, which says that any conjunct can be inferred from a conjunction; WumpusAlive, for example, can be deduced from (WumpusAhead ∧ WumpusAlive). One may readily demonstrate that Modus Ponens and And-Elimination are sound once and for all by examining the possible truth values of α and β. These rules can then be applied in any situation where they apply, yielding sound conclusions without the need to enumerate models.

The logical equivalences listed above can all be used as inference rules. The equivalence for biconditional elimination, for example, produces two inference rules: from α ⇔ β we may infer α ⇒ β, and from α ⇔ β we may infer β ⇒ α.

Some inference rules do not work in both directions in the same way. We cannot, for example, run Modus Ponens in the reverse direction to obtain α ⇒ β and α from β.

Let's look at how these equivalences and inference rules may be applied in the Wumpus environment. We begin with the knowledge base containing R1 through R5 and demonstrate how to establish ¬P1,2, i.e. that [1,2] does not contain a pit. To generate R6, we first apply biconditional elimination to R2.

After that, we apply And-Elimination to R6 to obtain R7.

Logical equivalence for contrapositives then yields R8.

With R8 and the percept R4 (i.e., ¬B1,1), we can now apply Modus Ponens to obtain R9.

Finally, we use De Morgan's rule to arrive at R10, the conclusion that neither [1,2] nor [2,1] contains a pit.
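
For reference, a sketch of the rules involved, assuming the standard reduced (pits-only) formulation of R1 through R5 used in textbook treatments of this example:

    R1:  ¬P1,1                                    (no pit in [1,1])
    R2:  B1,1 ⇔ (P1,2 ∨ P2,1)                     (breeze in [1,1] iff a pit is adjacent)
    R3:  B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
    R4:  ¬B1,1                                    (percept: no breeze in [1,1])
    R5:  B2,1                                     (percept: breeze in [2,1])

    R6:  (B1,1 ⇒ (P1,2 ∨ P2,1)) ∧ ((P1,2 ∨ P2,1) ⇒ B1,1)    (biconditional elimination on R2)
    R7:  (P1,2 ∨ P2,1) ⇒ B1,1                                (And-Elimination on R6)
    R8:  ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1)                              (contrapositive of R7)
    R9:  ¬(P1,2 ∨ P2,1)                                      (Modus Ponens on R8 and R4)
    R10: ¬P1,2 ∧ ¬P2,1                                       (De Morgan's rule on R9)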

We found this proof by hand, but any of the search techniques may be used to produce a
proof-like sequence of steps. All we have to do now is define a proof problem:

Initial State: the initial knowledge base.

Actions: the set of actions consists of all the inference rules applied to all the sentences that match the top half of the inference rule.
Consequences: the result of an action is to add the sentence in the bottom half of the inference rule to the knowledge base.
Objective: the objective is to reach a state that contains the sentence we are trying to prove.

EFFECTIVE PROPOSITIONAL MODEL CHECKING:


The problem can be cast as constraint satisfaction: many combinatorial problems in computer science can be reduced to checking the satisfiability of a propositional sentence. Two families of algorithms are used:
1. Complete backtracking search (the DPLL algorithm)
2. Local search (the WalkSAT algorithm)

1. DPLL Algorithm

function DPLL(clauses, symbols, model) returns true or false
    if every clause in clauses is true in model then return true
    if some clause in clauses is false in model then return false
    P, value ← FIND-PURE-SYMBOL(symbols, clauses, model)
    if P is non-null then return DPLL(clauses, symbols – P, model ∪ {P = value})
    P, value ← FIND-UNIT-CLAUSE(clauses, model)
    if P is non-null then return DPLL(clauses, symbols – P, model ∪ {P = value})
    P ← FIRST(symbols); rest ← REST(symbols)
    return DPLL(clauses, rest, model ∪ {P = true}) or
           DPLL(clauses, rest, model ∪ {P = false})

DPLL improves on naive enumeration in three ways visible in the pseudocode: early termination (a clause can be judged true or false before the model is complete), the pure symbol heuristic, and the unit clause heuristic. It is essentially a backtracking, depth-first search over partial models.
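
A minimal Python sketch of DPLL over clauses in conjunctive normal form; representing a clause as a set of integer literals (positive = symbol true, negative = symbol false) is an encoding assumed for illustration, and the pure-symbol step is omitted for brevity:

    def dpll(clauses, symbols, model):
        unresolved = []
        for clause in clauses:
            # Early termination: a clause may already be true or false in a partial model.
            if any(model.get(abs(l)) == (l > 0) for l in clause if abs(l) in model):
                continue                              # clause already satisfied
            remaining = [l for l in clause if abs(l) not in model]
            if not remaining:
                return False                          # clause falsified: backtrack
            unresolved.append(remaining)
        if not unresolved:
            return True                               # every clause is satisfied
        # Unit clause heuristic: a clause with one unassigned literal forces its value.
        for clause in unresolved:
            if len(clause) == 1:
                l = clause[0]
                return dpll(clauses, symbols, {**model, abs(l): l > 0})
        # Otherwise branch on the first unassigned symbol (backtracking search).
        p = next(s for s in symbols if s not in model)
        return (dpll(clauses, symbols, {**model, p: True}) or
                dpll(clauses, symbols, {**model, p: False}))

    # (P ∨ Q) ∧ (¬P ∨ Q) ∧ (¬Q ∨ R) is satisfiable, e.g. with Q = R = True.
    print(dpll([{1, 2}, {-1, 2}, {-2, 3}], [1, 2, 3], {}))   # True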

2.WALKSAT ALGORITHM:
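
WalkSAT is a local search algorithm: it starts from a random assignment of truth values and, on every iteration, picks one unsatisfied clause and flips the value of one symbol in it, choosing with probability p a random symbol from that clause and otherwise the symbol whose flip minimizes the number of unsatisfied clauses. A minimal Python sketch (clause encoding as in the DPLL sketch above; the parameter values are assumptions for illustration):

    import random

    def walksat(clauses, p=0.5, max_flips=10_000):
        symbols = {abs(l) for clause in clauses for l in clause}
        model = {s: random.choice([True, False]) for s in symbols}   # random start
        sat = lambda clause: any(model[abs(l)] == (l > 0) for l in clause)
        for _ in range(max_flips):
            unsatisfied = [c for c in clauses if not sat(c)]
            if not unsatisfied:
                return model                          # satisfying assignment found
            clause = random.choice(unsatisfied)
            if random.random() < p:
                s = abs(random.choice(list(clause)))  # random-walk flip
            else:
                def cost(sym):                        # clauses left unsatisfied after flipping sym
                    model[sym] = not model[sym]
                    bad = sum(not sat(c) for c in clauses)
                    model[sym] = not model[sym]
                    return bad
                s = min((abs(l) for l in clause), key=cost)   # greedy (min-conflicts) flip
            model[s] = not model[s]
        return None                                   # failure: give up after max_flips

    print(walksat([{1, 2}, {-1, 2}, {-2, 3}]) is not None)   # True (with high probability)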

First-Order logic:
First-order logic is another way of representing knowledge in artificial intelligence. It is an extension of propositional logic.
First-order logic is also known as predicate logic or first-order predicate logic.
First-order logic is a powerful language that expresses information about objects in a more natural way and can also express the relationships between those objects.
First-order logic assumes that the world contains:
 Objects: A, B, people, numbers, colors, wars, theories, squares, pits, the Wumpus, ...
 Relations: these can be unary relations such as red, round, or is-adjacent, or n-ary relations such as sister of, brother of, has color, comes between
 Functions: father of, best friend, third inning of, end of, ...
Like natural language, first-order logic has two main parts:
1. Syntax
2. Semantics
Syntax of First-Order logic:
The syntax of FOL determines which collections of symbols form logically valid sentences.
The basic syntactic elements of first-order logic are symbols. We write statements in short-hand notation in FOL.

Basic Elements of First-order logic:


Following are the basic elements of FOL syntax
Element        Symbols
Constant       1, 2, A, John, Mumbai, cat, ...
Variables      x, y, z, a, b, ...
Predicates     Brother, Father, >, ...
Function       sqrt, LeftLegOf, ...
Connectives    ∧, ∨, ¬, ⇒, ⇔
Equality       =
Quantifier     ∀, ∃

Atomic sentences:
Atomic sentences are the most basic sentences of first-order logic. These sentences are
formed from a predicate symbol followed by a parenthesis with a sequence of terms.
Syntax:
Predicate (term1, term2, ......, term n).
Example: Ravi and Ajay are brothers: => Brothers (Ravi, Ajay).
Chunky is a cat: => cat (Chunky).

Complex Sentences:
Complex sentences are made by combining atomic sentences using connectives.
Example: Ravi and Ajay are brothers, and Geetha and Seetha are sisters.
Brothers(Ravi, Ajay) ∧ Sisters(Geetha, Seetha).

Using First order logic:

1. Subject: Subject is the main part of the statement.


2. Predicate: A predicate can be defined as a relation, which binds two atoms together
in a statement.
Example:
Consider the statement "x is an integer." It consists of two parts: the first part, x, is the subject of the statement, and the second part, "is an integer," is known as the predicate.

Quantifiers in First-order logic:


A quantifier is a language element which generates quantification, and quantification specifies the quantity of specimens in the universe of discourse.
Quantifiers are the symbols that determine the range and scope of a variable in a logical expression. There are two types of quantifier:

1. Universal Quantifier, (for all, everyone, everything)


2. Existential quantifier, (for some, at least one).

1. Universal Quantifier:
Universal quantifier is a symbol of logical representation, which specifies that the statement
within its range is true for everything or every instance of a particular thing.
The Universal quantifier is represented by a symbol ∀, which resembles an inverted A.
Note: In universal quantifier we use implication "→".
If x is a variable, then ∀x is read as:
For all x
For each x
For every x.
Example:
All men drink coffee.
Let x be a variable that refers to a man; then the statement can be represented as:
∀x man(x) → drink(x, coffee).
It is read as: for all x, if x is a man, then x drinks coffee.

2. Existential Quantifier:
Existential quantifiers are the type of quantifiers which express that the statement within their scope is true for at least one instance of something.
It is denoted by the logical operator ∃, which resembles an inverted E. When it is used with a predicate variable, it is called an existential quantifier.

Note: In the existential quantifier we always use the AND or conjunction symbol (∧).

If x is a variable, then ∃x is read as:
There exists an 'x.'
For some 'x.'
For at least one 'x.'
Example:
Some boys are intelligent.
∃x boys(x) ∧ intelligent(x)

It is read as: there is some x such that x is a boy and x is intelligent.

Points to remember:
 The main connective for universal quantifier ∀ is implication →.
 The main connective for existential quantifier ∃ is and ∧.

Properties of Quantifiers:
In universal quantifier, ∀x∀y is similar to ∀y∀x.
In Existential quantifier, ∃x∃y is similar to ∃y∃x.
∃x∀y is not similar to ∀y∃x.

Some Examples of FOL using quantifier:

1. All birds fly.


Here the predicate is "fly(bird)."
Since all birds fly, it is represented as follows:
∀x bird(x) → fly(x).
2. Every man respects his parent.
Here the predicate is "respect(x, y)," where x = man and y = parent.
Since the statement is about every man, we use ∀, and it is represented as follows:
∀x man(x) → respects(x, parent).
3. Some boys play cricket.
Here the predicate is "play(x, y)," where x = boys and y = game. Since only some boys are involved, we use ∃, and with ∃ the main connective is ∧, so it is represented as:
∃x boys(x) ∧ play(x, cricket).

Free and Bound Variables:


Quantifiers interact with the variables that appear within them. There are two types of variables in first-order logic, which are given below:

Free Variable: A variable is said to be a free variable in a formula if it occurs outside the scope of any quantifier.
Example: ∀x ∃y [P(x, y, z)]; here z is a free variable.

Bound Variable: A variable is said to be a bound variable in a formula if it occurs within the scope of a quantifier.
Example: ∀x ∀y [A(x) ∧ B(y)]; here x and y are bound variables.

UNIFICATION AND FORWARD CHAINING:


Forward chaining is also known as forward deduction or forward reasoning when using an inference engine. Forward chaining is a form of reasoning which starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.

Properties of Forward-Chaining:
 It is a bottom-up approach, as it moves from the initial facts up toward the goal state.
 This approach is also called data-driven, as we reach the goal using the available data.

Example:
Let us say we have the following:
Fact 1: A dog is up for adoption through person A.
Fact 2: Person B is looking for a dog.
Inference rule: If a dog is up for adoption and someone is looking to adopt a dog, that person is free to adopt it.
Here, the conclusion can be reached that Person B can adopt the dog from Person A. This is how forward chaining works from facts to a decision.
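
A minimal Python sketch of this data-driven inference; the encoding of facts as tuples and the single hand-written rule are assumptions made purely for illustration:

    facts = {("up_for_adoption", "dog"), ("looking_for", "B", "dog")}

    def forward_chain(facts):
        """Rule: IF a dog is up for adoption AND someone is looking for a dog,
        THEN that person can adopt the dog."""
        derived = set(facts)
        changed = True
        while changed:                       # keep firing the rule until nothing new is added
            changed = False
            if ("up_for_adoption", "dog") in derived:
                for f in list(derived):
                    if f[0] == "looking_for" and f[2] == "dog":
                        new_fact = ("can_adopt", f[1], "dog")
                        if new_fact not in derived:
                            derived.add(new_fact)
                            changed = True
        return derived

    print(("can_adopt", "B", "dog") in forward_chain(facts))   # True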

Advantages
 Suitable for drawing multiple conclusions simultaneously
 Higher flexibility than backward chaining
 Reliable for reaching conclusions
Disadvantages
 Time-consuming, due to the need to synchronize data
 Explanations of how a fact was derived can be unclear

BACKWARD CHAINING:
Backward chaining is also known as backward deduction or backward reasoning when using an inference engine. A backward-chaining algorithm is a form of reasoning which starts with the goal and works backward, chaining through rules to find known facts that support the goal.

Properties of backward chaining:

 It is known as a top-down approach.


 Backward chaining is based on the Modus Ponens inference rule.
 In backward chaining, the goal is broken into sub-goals to prove the facts true.
 It is called a goal-driven approach, as the list of goals decides which rules are selected and used.
 The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.

 The backward-chaining method mostly uses a depth-first search strategy for proofs.

Example: Let us take the same example.


Decision/Goal: Person B adopts a dog.
Fact 1: A dog is up for adoption from Person A.
Fact 2: Person B is looking for a dog.
Inference rule: If a person is looking for a dog and a dog is up for adoption, that person can adopt it.
Here, the inference engine begins with the goal and checks whether its conditions are met. If both conditions (Fact 1 and Fact 2) hold, the stated decision is concluded.
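
A minimal Python sketch of the same decision reached goal-first; encoding each rule as a (conclusion, premises) pair is an assumption made for illustration:

    facts = {("up_for_adoption", "dog"), ("looking_for", "B", "dog")}
    rules = [
        # Person B can adopt the dog if both premises can be established.
        (("can_adopt", "B", "dog"),
         [("up_for_adoption", "dog"), ("looking_for", "B", "dog")]),
    ]

    def prove(goal):
        if goal in facts:                        # the goal is a known fact
            return True
        for conclusion, premises in rules:       # find a rule whose conclusion matches the goal
            if conclusion == goal and all(prove(p) for p in premises):
                return True                      # every sub-goal (premise) was proved
        return False

    print(prove(("can_adopt", "B", "dog")))      # True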
Advantages
 Swifter than forward chaining
 Easier process
 Efficiently drives correct solutions
Disadvantages
 Provides single answer
 Less flexibility
 Suitable only if the endpoint is known
 Difficult to execute

M. S. Ramya, Asst. Professor, SSCASCW, Tumkur
