Artificial Intelligence
Definition of knowledge
A large collection of symbols is called data.
A large collection of data is called information.
If you have a lot of information, you have knowledge.
If you have a lot of knowledge, you are intelligent.
If you are intelligent, you have wisdom.
Knowledge is defined as a piece of information that helps in decision-making.
Intelligence can be defined as the ability to draw useful inferences from the available knowledge.
Wisdom is the maturity of the mind that directs its intelligence to achieve desired goals.
Knowledge relation (hierarchy, from lowest to highest):
Symbol -> Data -> Information -> Knowledge -> Intelligence -> Wisdom
What is intelligence?
An exact definition of intelligence has proven to be extremely elusive.
Douglas Hofstadter suggests the following characteristics in a list of essential abilities for intelligence:
1. To respond to situations very flexibly.
2. To make sense out of ambiguous or contradictory messages.
3. To recognize the relative importance of different elements of a situation.
4. To find similarities between situations despite differences which may separate them.
5. To draw distinctions between situations despite similarities which may link them.
Turing Test: In 1950, Turing published an article in the journal Mind which triggered a controversial question: can a machine think?
Turing proposed an imitation game, which was later modified into the Turing test. In the imitation game the players are three humans: a male, a female and an interrogator. The interrogator, who is shielded from the other two, asks questions of both of them and, based on their typewritten answers, determines who is the female. The aim of the male is to imitate the female and deceive the interrogator, and the role of the female is to provide replies that would inform the interrogator of her true sex.
[Figure: the Turing test setup, with a machine in one of Rooms A and B, a human in the other, and the human interrogator in Room C.]
Turing proposed that if the human interrogator in Room C is not able to identify who is in Room A or
in Room B, then the machine possesses intelligence. Turing considered this is a sufficient test for
attributing thinking capacity to a machine.
As of today, the Turing test remains the ultimate test a machine must pass in order to be called intelligent.
Importance of Turing test:
It gives a standard for determining intelligence.
It also helps in eliminating any bias in favour of living organisms, because the interrogator focuses solely on the content of the answers to the questions.
Definitions of Artificial Intelligence:
There is no universal agreement among AI researchers about exactly what constitutes AI.
Various definitions of AI focus on different aspects of this branch of computer science, including intelligent behavior, symbolic processing, heuristics and pattern matching.
Some of the definitions of AI:
1. AI is the study of how to make computers do things which, at the moment, people do better.
--- Elaine Rich
2. McCarthy coined the term AI in 1956. He defined it as developing computer programs to solve complex problems by applying processes that are analogous to human reasoning processes.
This definition has two major parts: computer solutions for complex problems, and processes that are analogous to human reasoning processes.
AI is also the study of mental faculties through the use of computational models.
3. AI is the study of the computations that make it possible to perceive, reason and act.
From the perspective of this definition, AI differs from most of psychology because of its greater emphasis on computation, and AI differs from most of computer science because of its emphasis on perception, reasoning and action.
4. AI is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior.
--- Avron Barr and Edward A. Feigenbaum
5. According to Bruce G. Buchanan and Edward Shortliffe, symbolic processing is an essential characteristic of AI:
AI is the branch of computer science dealing with symbolic, non-algorithmic methods of problem solving.
6. In an encyclopedia article, Bruce G. Buchanan includes heuristics as a key element of AI:
AI is the branch of computer science that deals with ways of representing knowledge using symbols rather than numbers, and with rules of thumb, or heuristic methods, for processing information.
--- Bruce G. Buchanan, Encyclopaedia Britannica
A heuristic is a rule of thumb that helps you determine how to proceed.
7. Another definition of AI focuses on pattern-matching techniques:
In simplified terms, AI works with pattern-matching methods which attempt to describe objects, events or processes in terms of their qualitative features and logical and computational relationships.
--- Brattle Research Corporation, Artificial Intelligence and Fifth Generation Computer Technologies
Computer:
1. Non-living device.
2. Dependent; must be programmed.
3. Discrete in nature.
4. Unlimited memory size.
5. Basic unit is a RAM cell.
6. Storage devices are electronic and magnetic.
7. Faster.
8. Speed of transmission is equal to the speed of electrons and the speed of light.
9. No reasoning power.
10. Dumb and no emotions.
11. Must be programmed.
12. Volume is about 2000 watts.
13. Power is 500 watts.
14. Logic adopted is binary logic.
15. Only intention; achieved a certain degree of specification.
must use to communicate with the computer. The goal of natural language processing is to enable people and computers to communicate in a natural (human) language, such as English, rather than in a computer language.
The field of NLP is divided into two subfields:
1. Natural language understanding, which investigates methods of allowing the computer to comprehend instructions given in ordinary English, so that computers can understand people more easily.
2. Natural language generation, which strives to have computers produce ordinary English, so that people can understand computers more easily.
3. Speech recognition:
The focus of NLP is to enable computers to communicate interactively using English words and sentences that are typed on paper or displayed on a screen. However, the primary interactive method of communication used by humans is not reading and writing; it is speech.
The goal of speech recognition research is to allow computers to understand human speech, so that they can hear our voices and recognize the words we are speaking. Speech recognition research seeks to advance the goal of natural language processing by simplifying the process of interactive communication between people and computers.
4. Computer vision:
It is a simple task to attach a camera to a computer so that the computer can receive visual images. It has proven to be a far more difficult task, however, to interpret those images so that the computer can understand exactly what it is seeing.
People generally use vision as their primary means of sensing their environment; we generally see more than we hear, feel, smell or taste. The goal of computer vision research is to give computers this same facility for understanding their surroundings.
5. Robotics:
A robot is an electromechanical device that can be programmed to perform manual tasks. The Robotic Industries Association formally defines a robot as a reprogrammable, multifunctional manipulator designed to move material, parts, tools or specialized devices through variable programmed motions for the performance of a variety of tasks.
Not all of robotics is considered to be part of AI. A robot that performs only the actions it has been pre-programmed to perform is considered to be a dumb robot, possessing no real intelligence.
An intelligent robot includes some kind of sensory apparatus, such as a camera, that allows it to respond to changes in its environment, rather than just follow instructions mindlessly.
Intelligent computer-assisted instruction (ICAI):
Computer-assisted instruction (CAI) has been in use for many years, bringing the power of the computer to bear on the educational process. Now AI methods are being applied to the development of intelligent computer-assisted instruction in an attempt to create computerized tutors that shape their teaching techniques to fit the learning patterns of individual students.
Automatic programming:
In simple terms, programming is the process of telling the computer exactly what you want it to do. Developing a computer program frequently requires a great deal of time: a program must be designed, written, tested, debugged and evaluated, all as part of the program development process.
The goal of automatic programming is to create special programs that act as intelligent tools to assist programmers and expedite each phase of the programming process. The ultimate aim of automatic programming is a computer system that could develop programs by itself, in response to and in accordance with the specifications of a program developer.
AI Programming                      Conventional Programming
Symbolic                            Numeric
Heuristic search                    Algorithmic search
Solution steps not explicit         Solution steps precise
Satisfactory answers                Optimal answers
Knowledge and control separate      Knowledge and control intermingled
Imprecise                           Precise
Modifications frequent              Modifications rare
Large knowledge base                Large database
Inferential processing              Repetitive processing
Example 1: preparing coffee.
[Figure: state space for making coffee. Water is boiled to get boiling water; adding coffee powder to boiling water gives decoction, and adding milk powder gives milk; combining decoction and milk gives coffee; adding sugar gives palatable coffee.]
Example 2: A Water Jug Problem:
You are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring marks on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?
The state space for this problem can be described as the set of ordered pairs of integers (x, y) such that x = 0, 1, 2, 3 or 4 and y = 0, 1, 2 or 3; x represents the number of gallons of water in the 4-gallon jug, and y represents the number of gallons of water in the 3-gallon jug. The start state is (0, 0). The goal state is (2, n) for any value of n (since the problem does not specify how many gallons need to be in the 3-gallon jug).
Production rules for the water jug problem:

1.  (x, y)  if x < 4               -> (4, y)             fill the 4-gallon jug
2.  (x, y)  if y < 3               -> (x, 3)             fill the 3-gallon jug
3.  (x, y)  if x > 0               -> (x - d, y)         pour some water out of the 4-gallon jug
4.  (x, y)  if y > 0               -> (x, y - d)         pour some water out of the 3-gallon jug
5.  (x, y)  if x > 0               -> (0, y)             empty the 4-gallon jug on the ground
6.  (x, y)  if y > 0               -> (x, 0)             empty the 3-gallon jug on the ground
7.  (x, y)  if x + y >= 4 and y > 0 -> (4, y - (4 - x))  pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
8.  (x, y)  if x + y >= 3 and x > 0 -> (x - (3 - y), 3)  pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full
9.  (x, y)  if x + y <= 4 and y > 0 -> (x + y, 0)        pour all the water from the 3-gallon jug into the 4-gallon jug
10. (x, y)  if x + y <= 3 and x > 0 -> (0, x + y)        pour all the water from the 4-gallon jug into the 3-gallon jug
11. (0, 2)                          -> (2, 0)            pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
12. (2, y)                          -> (0, y)            empty the 2 gallons in the 4-gallon jug on the ground
One solution to the water jug problem:

Gallons in the 4-gallon jug | Gallons in the 3-gallon jug | Rule applied
0                           | 0                           | -
0                           | 3                           | 2
3                           | 0                           | 9
3                           | 3                           | 2
4                           | 2                           | 7
0                           | 2                           | 5 or 12
2                           | 0                           | 9 or 11
Another solution:

Gallons in the 4-gallon jug | Gallons in the 3-gallon jug | Rule applied
0                           | 0                           | -
4                           | 0                           | 1
1                           | 3                           | 8
1                           | 0                           | 6
0                           | 1                           | 10
4                           | 1                           | 1
2                           | 3                           | 8
2                           | 0                           | 6
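The production rules and traces above can be checked mechanically. The following Python sketch (an illustration, not part of the original notes) runs a breadth-first search over the (x, y) state space using only the deterministic rules: fill (1, 2), empty (5, 6) and pour until one jug is full or empty (7 to 10). It returns a shortest sequence of states from (0, 0) to a state with 2 gallons in the 4-gallon jug.

```python
from collections import deque

def water_jug_bfs():
    """Breadth-first search over the (x, y) state space of the water jug
    problem, using the deterministic production rules above (fill, empty,
    and pour until one jug is full or empty: rules 1, 2, 5, 6 and 7-10).
    Returns a shortest list of states from (0, 0) to a state with x == 2."""
    start = (0, 0)
    parent = {start: None}          # state -> predecessor, also marks "visited"
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == 2:                  # goal: 2 gallons in the 4-gallon jug
            path, state = [], (x, y)
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        pour_to_4 = min(4, x + y)   # rules 7 and 9: pour the 3-gallon into the 4-gallon
        pour_to_3 = min(3, x + y)   # rules 8 and 10: pour the 4-gallon into the 3-gallon
        for nxt in [(4, y), (x, 3), (0, y), (x, 0),
                    (pour_to_4, x + y - pour_to_4),
                    (x + y - pour_to_3, pour_to_3)]:
            if nxt not in parent:
                parent[nxt] = (x, y)
                queue.append(nxt)
    return None
```

Running `water_jug_bfs()` yields a 7-state path, i.e. six rule applications, matching the length of the traces above.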
Problem Reduction: In this method a complex problem is broken down, or decomposed, into a set of primitive sub-problems. Solutions for these primitive sub-problems are easily obtained. The solutions for all the sub-problems collectively give the solution for the complex problem.
Example: We want to evaluate ∫(x^2 + 3x + sin^2(x)·cos^2(x)) dx. Decompose it into three sub-problems:
∫x^2 dx = x^3/3
∫3x dx = 3x^2/2
∫sin^2(x)·cos^2(x) dx = ∫(1 - cos^2(x))·cos^2(x) dx = ∫(cos^2(x) - cos^4(x)) dx
The individual values can be combined (integrated) to get the final result.
Major components of AI: Any AI system has four major components:
1. Knowledge representation
2. Heuristic search
3. AI programming languages and tools
4. AI hardware
A physical symbol system is a machine that produces through time an evolving collection of symbol structures, by means of a set of operators: creation, modification, reproduction and destruction.
The physical symbol system hypothesis: a physical symbol system has the necessary and sufficient means to exhibit intelligence.
There appears to be no way to prove or disprove this hypothesis on logical grounds, so it must be subjected to empirical validation. We may find that it is false; we may find that the bulk of the evidence says that it is true. But the only way to determine its truth is by experimentation.
The importance of the physical symbol system hypothesis is twofold. It is a significant theory
of the nature of human intelligence and so is of great interest to psychologists. It also forms the basis
of the belief that it is possible to build programs that can perform intelligent tasks now performed by
people.
What is an AI technique?
Intelligence requires knowledge, and experience gives knowledge. Knowledge, however, possesses some less desirable properties:
1. It is voluminous.
2. It is hard to characterize accurately.
3. It is constantly changing.
4. It differs from data by being organized in a way that corresponds to the ways it will be used.
5. Organization of knowledge is situation dependent.
The three important AI techniques:
Search: provides a way of solving problems for which no more direct approach is available, as well as a framework into which any direct techniques that are available can be embedded.
Use of knowledge: provides a way of solving complex problems by exploiting the structures of the objects that are involved.
Abstraction: provides a way of separating important features and variations from unimportant ones that would otherwise overwhelm any process.
Problem Characteristics:
Heuristic search is a very general method applicable to a large class of problems. In order to choose
the most appropriate methods for a particular problem it is necessary to analyze the problem along
several key dimensions.
1. Is the problem decomposable?
2. Can solution steps be ignored or undone?
3. Is the universe predictable?
4. Is a good solution absolute or relative?
5. Is the solution a state or a path?
6. What is the role of knowledge?
1. Is the problem decomposable?
Example: We want to evaluate ∫(x^2 + 3x + sin^2(x)·cos^2(x)) dx. The problem decomposes into three sub-problems:
∫x^2 dx = x^3/3
∫3x dx = 3x^2/2
∫sin^2(x)·cos^2(x) dx = ∫(1 - cos^2(x))·cos^2(x) dx = ∫(cos^2(x) - cos^4(x)) dx
The individual values can be combined (integrated) to get the final result, so the problem is decomposable.
A non-decomposable problem: the blocks world problem.
Start state: ON(C, A), with A and B on the table. Goal state: ON(B, C) and ON(A, B).
Operators available:
1. CLEAR(x) [block x has nothing on it] -> ON(x, table) [pick up x and put it on the table]
2. CLEAR(x) and CLEAR(y) -> ON(x, y) [put x on y]
A proposed solution: decomposition produces two smaller problems.
1. ON(B, C) is simple: from the start state, simply put B on C.
2. ON(A, B) is not simple: we have to clear A by removing C before we can pick up A and put it on B, but this can be done easily.
We now try to combine the two sub-solutions into one solution, and we fail: regardless of which sub-problem we solve first, we cannot complete the second without undoing the first. I.e., sub-problems 1 and 2 are not independent, so the problem is not decomposable.
2. Can solution steps be ignored or undone?
1. Theorem proving: Suppose we want to prove a mathematical theorem. We proceed by first proving a lemma that we think will be useful. Eventually we realize that the lemma is of no help at all. Are we in trouble?
No. All we have lost is the effort that was spent.
2. The 8-puzzle: The 8-puzzle is a square tray in which eight square tiles are placed; the remaining ninth square is uncovered. Each tile has a number on it. A tile that is adjacent to the blank space can be slid into that space. A game consists of a starting position and a specified goal position; the aim is to transform the starting position into the goal position by sliding the tiles around.
3. Chess: Suppose we made a wrong move and we realized it a couple of moves later.
We cannot go back to correct the move.
These three problems illustrate three classes of problems:
1. Ignorable (e.g. theorem proving), in which solution steps can be ignored.
2. Recoverable (e.g. the 8-puzzle), in which solution steps can be undone.
3. Irrecoverable (e.g. chess), in which solution steps cannot be undone.
Ignorable problems can be solved using a simple control structure that never backtracks. Recoverable problems can be solved using a control structure that backtracks. A great deal of effort is needed to solve irrecoverable problems.
3. Is the universe predictable?
4. Is a good solution absolute or relative?
Consider the travelling salesman problem: find a round-trip route that visits each of the cities Boston, New York, Miami, Dallas and San Francisco. The distances are:

            Boston   New York   Miami   Dallas   S.F.
Boston        -        250      1450     1700    3000
New York     250        -       1200     1500    2900
Miami       1450      1200       -       1600    3300
Dallas      1700      1500      1600      -      1700
S.F.        3000      2900      3300     1700     -
Path 1: Boston --250--> New York --1450--> Miami --3050--> Dallas --4750--> S.F. --7750--> Boston
Path 2: Boston --3000--> S.F. --4700--> Dallas --6200--> New York --7400--> Miami --8850--> Boston
(The numbers shown along each path are cumulative distances.)
We can't say one path is the shortest unless we try the other paths as well.
1. Marcus problem: any-path problems, which can be solved in a reasonable amount of time.
2. Travelling salesman problem: best-path problems, which are computationally harder than any-path problems.
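The difference between an any-path answer and a best-path answer can be seen by brute force. This Python sketch (illustrative, not part of the original notes) scores every round trip over the distance table above; it confirms that Path 1, at 7750 miles, is in fact a shortest tour, while Path 2 costs 8850 miles.

```python
from itertools import permutations

# Distance table from above (assumed symmetric).
DIST = {
    ("Boston", "New York"): 250, ("Boston", "Miami"): 1450,
    ("Boston", "Dallas"): 1700, ("Boston", "S.F."): 3000,
    ("New York", "Miami"): 1200, ("New York", "Dallas"): 1500,
    ("New York", "S.F."): 2900, ("Miami", "Dallas"): 1600,
    ("Miami", "S.F."): 3300, ("Dallas", "S.F."): 1700,
}

def dist(a, b):
    """Look up the (symmetric) distance between two cities."""
    return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

def tour_length(order):
    """Length of a round trip that starts and ends in Boston."""
    stops = ("Boston",) + tuple(order) + ("Boston",)
    return sum(dist(a, b) for a, b in zip(stops, stops[1:]))

# Best path: exhaustively try every ordering of the other four cities.
best = min(permutations(["New York", "Miami", "Dallas", "S.F."]),
           key=tour_length)
```

Exhaustive enumeration is feasible here only because the number of cities is tiny; the factorial growth of the orderings is exactly what makes best-path problems hard.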
5. Is the solution a state or path?
Natural language understanding:
"The bank president ate a dish of pasta salad with the fork."
There are several components in this sentence, each of which, in isolation, may have more than one interpretation; but the sentence as a whole must be given only one meaning.
Sources of ambiguity:
Bank: a financial institution, or the side of a river; only one of these may have a president.
Dish: the object of the verb "eat". Was a dish eaten? No; the pasta salad in the dish was eaten.
Pasta salad: a salad containing pasta (dog food doesn't normally contain dog).
So some search is required to find the interpretation of the sentence, but there will be only one interpretation.
Ex: the water jug problem. The solution is not just the state (2, 0) but the path from (0, 0) to (2, 0).
6. What is the role of knowledge?
Chess: The knowledge required is very little: a set of rules for legal moves, and a control mechanism that implements an appropriate search procedure (perhaps augmented with a little knowledge of good tactics).
Newspaper: Now consider the problem of scanning daily newspapers to decide which are supporting the Democrats and which are supporting the Republicans in some upcoming election. Again assuming unlimited computing power, how much knowledge would be required by a computer trying to solve this problem? This time the answer is: a great deal. For example:
1. The names of the candidates in each party.
2. The fact that if the major thing you want to see done is to have taxes lowered, you are probably supporting the Republicans.
3. The fact that if the major thing you want to see done is improved education for minority students, you are probably supporting the Democrats.
4. The fact that if you are opposed to big government, you are probably supporting the Republicans.
And so on.
These two problems, chess and newspaper story understanding, illustrate the difference between problems for which a lot of knowledge is important only to constrain the search for a solution, and those for which a lot of knowledge is required even to be able to recognize a solution.
A production system consists of:
1. A set of rules.
2. One or more knowledge bases or databases.
3. A control strategy that specifies the order in which the rules are to be applied.
4. A rule applier.
Classes of production systems:

                            Monotonic            Non-monotonic
Partially commutative       Theorem proving      Robot navigation
Not partially commutative   Chemical synthesis   Bridge
Searching Techniques
Every AI program has to carry out a process of searching, because the solution steps are not explicit in nature. This searching is needed because the solution steps are not known beforehand and have to be found out. Basically, to carry out a search process the following are needed:
1. The initial state description of the problem.
2. A set of legal operators that changes the state.
3. The final or goal state.
The searching process in AI can be broadly classified into two major parts.
1. Brute force searching techniques (Or) Uninformed searching techniques.
2. Heuristic searching techniques (Or) Informed searching techniques.
Brute force searching techniques:
In which, there is no preference is given to the order of successor node generation and
selection. The path selected is blindly or mechanically followed. No information is used to determine
the preference of one child over another. These are commonly used search procedures, which explore
all the alternatives, during the searching process. They dont have any domain specific knowledge all
their need are the initial state , final state and the set of legal operators. Very important brute force
searching techniques are
1. Depth First Search
2. Breadth First Search
Depth first search: This is a very simple type of brute force searching technique. The search begins by expanding the initial node, i.e. by using an operator to generate all successors of the initial node, and testing them.
This procedure finds whether the goal can be reached or not, but the path it has to follow is not specified in advance. DFS performs its search by diving downward into the tree as quickly as possible.
[Figure: a search tree; DFS expands the Root and dives down one branch (B, D, I, ...) until the goal state G is reached.]
Algorithm:
Step 1: Put the initial node on a list, START.
Step 2: If START is empty or START = GOAL, terminate the search.
Step 3: Remove the first node from START. Call this node a.
Step 4: If a = GOAL, terminate the search with success.
Step 5: Else, if node a has successors, generate all of them and add them at the beginning of START.
Step 6: Go to Step 2.
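The steps above translate almost line for line into code. Here is a minimal Python sketch (the successor table and node names are hypothetical, made up for illustration); putting new children at the front of START is exactly what makes the search depth-first.

```python
def dfs(start, goal, successors):
    """Depth-first search: Step 5 puts new children at the FRONT of START."""
    start_list = [start]                        # Step 1
    visited = set()                             # guard against revisiting nodes
    while start_list:                           # Step 2
        a = start_list.pop(0)                   # Step 3
        if a == goal:                           # Step 4
            return True
        if a in visited:
            continue
        visited.add(a)
        start_list = successors.get(a, []) + start_list   # Step 5
    return False

# hypothetical search tree for illustration
tree = {"Root": ["B", "C"], "B": ["D", "E"], "D": ["I", "G"]}
```

With this table, `dfs("Root", "G", tree)` expands Root, B, D and I before reaching G.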
The major drawback of DFS is the determination of the depth up to which the search has to proceed; this depth is called the cutoff depth. The value of the cutoff depth is essential, because otherwise the search may go on and on. If the cutoff depth is small, the solution may not be found; and if the cutoff depth is large, the time complexity will be more.
Advantages:
1. DFS requires less memory, since only the nodes on the current path are stored.
2. By chance, DFS may find a solution without examining much of the search space at all.
Breadth first search: The search proceeds level by level, expanding all nodes at one depth before moving to the next.
[Figure: a search tree explored level by level until the goal state G is reached.]
ALGORITHM:
Step 1: Put the initial node on a list, START.
Step 2: If START is empty or START = GOAL, terminate the search.
Step 3: Remove the first node from START and call this node a.
Step 4: If a = GOAL, terminate the search with success.
Step 5: Else, if node a has successors, generate all of them and add them at the tail of START.
Step 6: Go to Step 2.
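The only difference from the DFS steps is where Step 5 puts the children. A minimal Python sketch (the successor table is hypothetical, for illustration only):

```python
def bfs(start, goal, successors):
    """Breadth-first search: Step 5 puts new children at the TAIL of START."""
    start_list = [start]                        # Step 1
    visited = set()                             # guard against revisiting nodes
    while start_list:                           # Step 2
        a = start_list.pop(0)                   # Step 3
        if a == goal:                           # Step 4
            return True
        if a in visited:
            continue
        visited.add(a)
        start_list = start_list + successors.get(a, [])   # Step 5
    return False

# hypothetical search tree for illustration
tree = {"Root": ["B", "C"], "B": ["D", "E"], "D": ["I", "G"]}
```

With this table, `bfs("Root", "G", tree)` finishes levels Root, then B and C, then D and E, before reaching G.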
Advantages:
1. BFS will not get trapped exploring a blind alley.
2. If there is a solution, then BFS is guaranteed to find it.
Disadvantages:
1. The amount of time needed to generate all the nodes is considerable because of the time complexity.
2. The memory constraint is also a major problem because of the space complexity.
3. The searching process remembers all unwanted nodes, which are of no practical use for the search process.
Heuristic Search Techniques: In informed or directed search, some information about the problem space is used to compute a preference among the children for exploration and expansion.
The process of searching can be drastically reduced by the use of heuristics. A heuristic is a technique that improves the efficiency of the search process. Heuristics are approximations used to minimize the searching process. Generally, two categories of problems call for heuristics:
1. Problems for which no exact algorithms are known and one needs to find an approximate and satisfying solution, for example computer vision and speech recognition.
2. Problems for which exact solutions are known but computationally infeasible, for example Rubik's cube and chess.
The following algorithms make use of heuristic evaluation:
1. Generate & test
2. Hill climbing
3. Best first search
4. A* algorithm
5. AO* algorithm
6. Constraint satisfaction
7. Means-ends analysis
1. Generate and test: The generate & test strategy is the simplest of all the approaches. The generate & test algorithm is a depth first search procedure, since complete solutions must be generated before they can be tested. In its most systematic form, it is simply an exhaustive search of the problem space. It is also known as the British Museum algorithm, a reference to a method for finding an object in the British Museum by wandering randomly.
Algorithm
Step 1: Generate a possible solution. For some problems this means generating a particular point in the problem space. For others it means generating a path from a start state.
Step 2: Test to see if this is actually a solution by comparing the chosen point, or the end point of the chosen path, with the set of acceptable goal states.
Step 3: If a solution has been found, quit; otherwise, return to Step 1.
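The three steps can be written directly in Python. This sketch, with a made-up toy problem, is illustrative only:

```python
from itertools import product

def generate_and_test(candidates, is_solution):
    """British Museum style search: propose each candidate in turn (Step 1),
    test it (Step 2), and quit on the first success (Step 3)."""
    for candidate in candidates:
        if is_solution(candidate):
            return candidate
    return None

# toy problem: find a pair of digits whose sum is 10 and whose product is 21
pair = generate_and_test(product(range(10), repeat=2),
                         lambda c: c[0] + c[1] == 10 and c[0] * c[1] == 21)
```

The generator proposes every point of the problem space exhaustively, which is exactly why generate & test does not scale to large spaces.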
Hill Climbing:
It is a variant of generate & test in which feedback from the test procedure is used to help the generator decide which direction to move in the search space. In a pure generate & test procedure the test function responds with only a yes or no; here the test function is augmented with a heuristic function that provides an estimate of how close a given state is to a goal state. Hill climbing is often used when a good heuristic function is available for evaluating states but no other useful knowledge is available. This is a discrete optimization algorithm that uses a simple heuristic function, viz., the distance of the node from the goal node. In fact there is practically no difference between hill climbing and DFS, except that the children of the node that has been expanded are sorted by the remaining distance to the goal.
[Figure: a search tree; hill climbing always moves to the child closest to the goal state G.]
Algorithm:
Step 1: Put the initial node on a list, START.
Step 2: If (START is empty) or (START = GOAL), terminate the search.
Step 3: Remove the first node from START; call this node a.
Step 4: If (a = GOAL), terminate the search with success.
Step 5: Else, if node a has successors, generate all of them. Find out how far they are from the goal node, sort them by the remaining distance from the goal, and add them to the beginning of START.
Step 6: Go to Step 2.
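A compact Python sketch of these steps (the state space and heuristic are invented for illustration): keep moving to the child closest to the goal, and stop when no child improves on the current state.

```python
def hill_climb(start, successors, distance_to_goal):
    """Greedy hill climbing: move to the best child until no child is
    closer to the goal than the current state."""
    current = start
    while True:
        children = successors(current)
        best = min(children, key=distance_to_goal, default=None)
        if best is None or distance_to_goal(best) >= distance_to_goal(current):
            return current      # goal reached, or a local maximum / plateau
        current = best

# toy state space: integers, where each state n has neighbours n - 1 and n + 1;
# the heuristic is the distance to the (hypothetical) goal state 7
result = hill_climb(0, lambda n: [n - 1, n + 1], lambda n: abs(7 - n))
```

Note that the stopping rule is also where the algorithm gets stuck: on a plateau every child scores the same, so the search halts immediately.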
Problems of hill climbing:
Local maximum: a state that is better than all its neighbors, but not so when compared to states that are farther away.
Plateau: a flat area of the search space, in which all neighbors have the same value.
Ridge: a long and narrow stretch of elevated ground; a narrow elevation or raised part running along or across a surface.
In order to overcome these problems, adopt one of the following methods, or a combination of them:
1. Backtracking for local maxima. Backtracking helps in undoing what has been done so far and permits trying a different path to attain the global peak.
2. A big jump is the solution to escape from a plateau. A huge jump is recommended because on a plateau all neighboring points have the same value.
3. Trying different paths at the same time is the solution for circumventing ridges.
Best first search:
[Figure: an example search tree rooted at S; each node is labeled with its heuristic distance to the goal: A: 3, B: 6, C: 5, D: 9, E: 8, F: 12, G: 14, H: 7, I: 5, J: 6, K: 1, L: 0, M: 2. L is the goal.]
Search process of best-first search:

Step | Node being expanded | Children          | Available nodes                                  | Node chosen
1    | S                   | A: 3, B: 6, C: 5  | A: 3, B: 6, C: 5                                 | A: 3
2    | A                   | D: 9, E: 8        | B: 6, C: 5, D: 9, E: 8                           | C: 5
3    | C                   | H: 7              | B: 6, D: 9, E: 8, H: 7                           | B: 6
4    | B                   | F: 12, G: 14      | D: 9, E: 8, H: 7, F: 12, G: 14                   | H: 7
5    | H                   | I: 5, J: 6        | D: 9, E: 8, F: 12, G: 14, I: 5, J: 6             | I: 5
6    | I                   | K: 1, L: 0, M: 2  | D: 9, E: 8, F: 12, G: 14, J: 6, K: 1, L: 0, M: 2 | L: 0 (search stops; goal reached)
There is only a minor variation between hill climbing and best-first search. In the former, we sorted only the children of the node just expanded; here we have to sort the entire list to identify the next node to be expanded.
The paths found by best-first search are likely to give solutions faster because it expands the node that seems closest to the goal. However, there is no guarantee of this.
Algorithm:
Step 1: Put the initial node on a list, START.
Step 2: If (START is empty) or (START = GOAL), terminate the search.
Step 3: Remove the first node from START. Call this node a.
Step 4: If (a = GOAL), terminate the search with success.
Step 5: Else, if node a has successors, generate all of them. Find out how far they are from the goal node. Sort all the children generated so far by the remaining distance from the goal.
Step 6: Name this list START1.
Step 7: Replace START with START1.
Step 8: Go to Step 2.
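The same steps can be expressed with a priority queue, which keeps the whole frontier ordered by heuristic value. The Python sketch below is illustrative; the graph encoding and the heuristic value assumed for S are not from the notes, while the other values mirror the trace above.

```python
import heapq

def best_first_search(start, goal, successors, h):
    """Always expand the open node with the smallest heuristic value h."""
    frontier = [(h(start), start)]              # priority queue ordered by h
    visited = set()
    while frontier:
        _, node = heapq.heappop(frontier)       # node that seems closest to the goal
        if node == goal:
            return True
        if node in visited:
            continue
        visited.add(node)
        for child in successors(node):
            heapq.heappush(frontier, (h(child), child))
    return False

# the example graph and heuristic values from the trace above
# (the value for S is an assumption; S is expanded first regardless)
graph = {"S": ["A", "B", "C"], "A": ["D", "E"], "B": ["F", "G"],
         "C": ["H"], "H": ["I", "J"], "I": ["K", "L", "M"]}
h_val = {"S": 9, "A": 3, "B": 6, "C": 5, "D": 9, "E": 8, "F": 12,
         "G": 14, "H": 7, "I": 5, "J": 6, "K": 1, "L": 0, "M": 2}
found = best_first_search("S", "L", lambda n: graph.get(n, []), h_val.get)
```

With these values, the expansion order reproduces the trace: S, A, C, B, H, I, and then the goal L is chosen.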
A* algorithm: The best first search algorithm that was just presented is a simplification of an algorithm called the A* algorithm, which was first presented by Hart, Nilsson and Raphael.
Apart from the evaluation function values, one can also bring in cost functions, which indicate how much of resources like time, energy and money have been spent in reaching a particular node from the start. While evaluation functions deal with the future, cost functions deal with the past. Since the cost function values reflect resources actually expended, they are more concrete than evaluation function values. If it is possible to obtain the evaluation function values, then the A* algorithm can be used. The basic principle is to sum the cost and evaluation function values for a state to get its goodness worth, and to use this as a yardstick instead of the evaluation function value alone, as in best first search. The sum of the evaluation function value and the cost along the path leading to that state is called the fitness number. While best first search uses the evaluation function value for expanding the best node, A* uses the fitness number for its computations.
[Figure: an example search tree rooted at S; each node is labeled with its fitness number (cost so far plus heuristic estimate): A: 6, B: 8, C: 11, D: 14, E: 13, F: 18, G: 17, H: 18, I: 23, J: 23, K: 20, L: 20, M: 21. L is the goal.]
Search process of the A* algorithm:

Step | Node being expanded | Children             | Available nodes                    | Node chosen
1    | S                   | A: 6, B: 8, C: 11    | A: 6, B: 8, C: 11                  | A: 6
2    | A                   | D: 14, E: 13         | B: 8, C: 11, D: 14, E: 13          | B: 8
3    | B                   | F: 18, G: 17         | C: 11, D: 14, E: 13, F: 18, G: 17  | C: 11
4    | C                   | H: 18                | D: 14, E: 13, F: 18, G: 17, H: 18  | E: 13
5    | E                   | -                    | D: 14, F: 18, G: 17, H: 18         | D: 14
6    | D                   | -                    | F: 18, G: 17, H: 18                | G: 17
7    | G                   | -                    | F: 18, H: 18                       | H: 18
8    | H                   | I: 23, J: 23         | F: 18, I: 23, J: 23                | F: 18
9    | F                   | -                    | I: 23, J: 23                       | I: 23
10   | I                   | K: 20, L: 20, M: 21  | J: 23, K: 20, L: 20, M: 21         | L: 20 (search stops; goal reached)
Algorithm:
Step 1: Put the initial node on a list, START.
Step 2: If (START is empty) or (START = GOAL), terminate the search.
Step 3: Remove the first node from START. Call this node a.
Step 4: If (a = GOAL), terminate the search with success.
Step 5: Else, if node a has successors, generate all of them. Estimate the fitness number of each successor by totaling the evaluation function value and the cost function value, and sort the list by fitness number.
Step 6: Name the new list START1.
Step 7: Replace START with START1.
Step 8: Go to Step 2.
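A Python sketch of the idea (illustrative; the weighted graph and heuristic values are hypothetical): each frontier entry carries its fitness number f = g + h, where g is the cost already spent reaching the node and h is the evaluation estimate.

```python
import heapq

def a_star(start, goal, successors, h):
    """Expand by the smallest fitness number f = g + h."""
    frontier = [(h(start), 0, start, [start])]   # (f, cost so far, node, path)
    best_cost = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in best_cost and best_cost[node] <= g:
            continue                             # already reached more cheaply
        best_cost[node] = g
        for child, step in successors(node):
            heapq.heappush(frontier, (g + step + h(child), g + step,
                                      child, path + [child]))
    return None, float("inf")

# hypothetical weighted graph: the cheap-looking first step via A is a trap
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h_val = {"S": 3, "A": 4, "B": 1, "G": 0}
path, cost = a_star("S", "G", lambda n: graph.get(n, []), h_val.get)
```

Here the fitness numbers steer the search away from the low-cost but poorly evaluated route through A, and the cheaper route through B is returned.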
Problem Reduction: In this method, a complex problem is broken down, or decomposed, into a set of primitive sub-problems. Solutions for these primitive sub-problems are easily obtained. The solutions for all the sub-problems collectively give the solution for the complex problem.
Between the complex problem and the sub-problems there exist two kinds of relationship, i.e. the AND relationship and the OR relationship.
In the AND relationship, the solution for the problem is obtained by solving all the sub-problems (remember the AND gate truth table).
In the OR relationship, the solution for the problem is obtained by solving any one of the sub-problems (remember the OR gate truth table).
This is why the structure is called an AND-OR graph.
Problem reduction is used on problems such as theorem proving, symbolic integration and analysis of industrial schedules.
To describe an algorithm for searching an AND-OR graph, we need to exploit a value called FUTILITY. If the estimated cost of a solution becomes greater than the value of FUTILITY, then we give up the search. FUTILITY should be chosen to correspond to a threshold such that any solution with a cost above it is too expensive to be practical, even if it could ever be found.
[Figure: a simple AND-OR graph with estimated costs at the nodes.]
It can be shown that AO* will always find a minimum-cost solution tree if one exists, provided only that h*(n) <= h(n) and all arc costs are positive. Like A*, its efficiency depends on how closely h* approximates h.
Constraint satisfaction:
Many problems in AI can be viewed as problems of constraint satisfaction in which the goal is
to discover some problem state that satisfies a given set of constraints. Constraint satisfaction is a
search procedure that operates in a space of constraint sets. The initial state contains the constraints
that are originally given in the problem description. A goal state is any state that has been constrained
enough, where enough must be defined for each problem.
Constraint satisfaction is a two-step process. First, constraints are discovered and propagated as far as possible throughout the system. Then, if there is still not a complete solution, a guess about something is made and added as a new constraint, and propagation resumes.
Algorithm:
1. Propagate available constraints. To do this, first set OPEN to the set of all objects that must have
values assigned to them in a complete solution. Then do until an inconsistency is detected or until
OPEN is empty:
(a) Select an object OB from OPEN. Strengthen as much as possible the set of constraints that
apply to OB.
(b) If this set is different from the set that was assigned the last time OB was examined or if this
is the first time OB has been examined, then add to OPEN all objects that share any
constraints with OB
(c) Remove OB from OPEN.
2. If the union of the constraints discovered above defines a solution, then quit and report the
solution.
3. If the union of the constraints discovered above defines a contradiction, then return failure.
4. If neither of the above occurs, then it is necessary to make a guess at something in order to proceed.
To do this, loop until a solution is found or all possible solutions have been eliminated:
(a) Select an object whose value is not yet determined and select a way of strengthening the constraints
on that object.
(b) Recursively invoke constraint satisfaction with the current set of constraints augmented by
the strengthening constraint just selected.
Applying this algorithm in a particular problem domain requires the use of two kinds of rules: rules that
define the way constraints may validly be propagated and rules that suggest guesses when guesses are
necessary.
The solution process proceeds in cycles, at each cycle two significant things are done.
1. Constraints are propagated by using rules that correspond to the
properties of arithmetic.
2. A value is guessed for some letter whose value is not yet determined.
Problem 1:
  S E N D
+ M O R E
=========
M O N E Y
Let M = 1 (M is the carry out of the leftmost column).
For a carry to arise there, C3+S+M > 9, where C3 is the carry into the S column; so S = 9 or 8.
C3+S+M can be 10 or 11, i.e. O = 0 or 1. But M is already assigned 1, so O = 0.
Then C3+S+1 = 10: either C3 = 0 and S = 9, or C3 = 1 and S = 8.
Let C3 = 0 and S = 9.
C2+E+O = N with O = 0. If C2 = 0 then E = N, which is wrong (distinct letters must take distinct digits).
So C2 = 1 and 1+E = N.
Let E = 2; then N = 3.
  9 2 3 D
+ 1 0 R 2
=========
1 0 3 2 Y
In the tens column, C1+3+R must equal 12 (giving E = 2 with a carry):
R = 9 with C1 = 0 is wrong (9 is taken);
R = 8 with C1 = 1 is correct.
  9 2 3 D
+ 1 0 8 2
=========
1 0 3 2 Y
But to get the carry C1 = 1, D+2 > 9, so D = 8 or 9; both clash with digits already used. Similarly E = 3 and E = 4 lead to clashes.
Now for E = 5, N = 6:
  9 5 6 D
+ 1 0 R 5
=========
1 0 6 5 Y
C1+6+R = 15:
C1 = 0 gives R = 9, which is wrong;
C1 = 1 gives R = 8, which is correct.
  9 5 6 D
+ 1 0 8 5
=========
1 0 6 5 Y
Now D+5 > 9, so D > 4.
D = 6 gives Y = 1, which is wrong (1 is taken).
D = 7 gives Y = 2, which is correct.
D = 8 or 9 is wrong (already taken).
Result:
  9 5 6 7
+ 1 0 8 5
=========
1 0 6 5 2
Values: S = 9, E = 5, N = 6, D = 7, M = 1, O = 0, R = 8, Y = 2.
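The hand derivation can be checked mechanically. The sketch below brute-forces the puzzle, using the notes' deduction M = 1 to shrink the search; the function name is ours.

```python
from itertools import permutations

# Exhaustively check SEND + MORE = MONEY. Following the notes, M = 1
# (it is the carry out of the leftmost column), so only the remaining
# seven letters need digits; S must be non-zero as a leading digit.
def solve_send_more_money():
    M = 1
    for S, E, N, D, O, R, Y in permutations([0, 2, 3, 4, 5, 6, 7, 8, 9], 7):
        if S == 0:
            continue
        send = 1000 * S + 100 * E + 10 * N + D
        more = 1000 * M + 100 * O + 10 * R + E
        money = 10000 * M + 1000 * O + 100 * N + 10 * E + Y
        if send + more == money:
            return {"S": S, "E": E, "N": N, "D": D,
                    "M": M, "O": O, "R": R, "Y": Y}
    return None

print(solve_send_more_money())
```

The search returns S=9, E=5, N=6, D=7, O=0, R=8, Y=2, matching the derivation above (9567 + 1085 = 10652).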
Problem 2:
  D O N A L D
+ G E R A L D
=============
  R O B E R T
Writing C1..C5 for the column carries, rightmost column first:
D+D = 10*C1 + T
C1+L+L = 10*C2 + R
C2+A+A = 10*C3 + E
C3+N+R = 10*C4 + B
C4+O+E = 10*C5 + O
C5+D+G = R
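The column equations can be verified numerically. The assignment below is the known solution of this puzzle (the notes do not derive it); the check walks the columns right to left, carrying as in the equations above.

```python
# Known solution of DONALD + GERALD = ROBERT (526485 + 197485 = 723970),
# checked column by column against the carry equations.
D, O, N, A, L, G, E, R, B, T = 5, 2, 6, 4, 8, 1, 9, 7, 3, 0

c1, t = divmod(D + D, 10)           # D+D = 10*C1 + T
c2, r = divmod(c1 + L + L, 10)      # C1+L+L = 10*C2 + R
c3, e = divmod(c2 + A + A, 10)      # C2+A+A = 10*C3 + E
c4, b = divmod(c3 + N + R, 10)      # C3+N+R = 10*C4 + B
c5, o = divmod(c4 + O + E, 10)      # C4+O+E = 10*C5 + O
assert (t, r, e, b, o) == (T, R, E, B, O)
assert c5 + D + G == R              # leftmost column, no carry out
print("526485 + 197485 =", 526485 + 197485)
```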
The means-ends analysis process centers on the detection of differences between the current
state and the Goal State. Once such a difference is isolated, an operator that can reduce the difference
must be found. But perhaps that operator cannot be applied to the current state. So we set up a sub
problem of getting to a state in which it can be applied. The kind of backward chaining in which
operators are selected and then sub-goals are set up to establish the preconditions of the operators is
called operator sub-goaling.
Just like the other problem-solving techniques we have discussed, means-ends analysis relies
on a set of rules that can transform one problem state into another. These rules are usually
represented as a left side that describes the conditions that must be met for the rule to be applicable
(these conditions are called the rule's preconditions) and a right side that describes those aspects of the
problem state that will be changed by the application of the rule.
Algorithm: 1. Compare CURRENT to GOAL. If there are no differences between them, then
return.
2. Otherwise, select the most important difference and reduce it by doing the
following until success or failure is signaled.
(a) Select an as yet untried operator O that is applicable to the current difference. If
there are no such operators, then signal failure.
(b) Attempt to apply O to CURRENT. Generate descriptions of two states: O-START,
a state in which O's preconditions are satisfied, and O-RESULT, the state that
would result if O were applied in O-START.
(c) If (FIRST-PART = MEA(CURRENT, O-START))
and
(LAST-PART = MEA(O-RESULT, GOAL))
are successful, then signal success and return the result of concatenating
FIRST-PART, O, and LAST-PART.
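The algorithm above can be sketched as a recursive function over toy states represented as sets of facts. The operator format and the example domain (a robot fetching a box) are illustrative assumptions, not part of the notes.

```python
# Means-ends analysis sketch: states are sets of facts; each operator is
# (name, preconditions, facts added, facts deleted).
def apply_plan(state, plan, ops):
    table = {name: (pre, add, dele) for name, pre, add, dele in ops}
    for name in plan:
        pre, add, dele = table[name]
        state = (state - dele) | add
    return state

def mea(current, goal, ops, depth=6):
    if goal <= current:                     # no difference: success
        return []
    if depth == 0:
        return None
    for name, pre, add, dele in ops:
        if not (add & (goal - current)):    # O must reduce the difference
            continue
        first = mea(current, pre, ops, depth - 1)      # FIRST-PART to O-START
        if first is None:
            continue
        o_result = (apply_plan(current, first, ops) - dele) | add
        last = mea(o_result, goal, ops, depth - 1)     # LAST-PART to GOAL
        if last is not None:
            return first + [name] + last
    return None

ops = [
    ("walk-to-box", set(), {"at-box"}, {"at-door"}),
    ("push-box-to-door", {"at-box"}, {"box-at-door", "at-door"}, {"at-box"}),
]
print(mea({"at-door"}, {"box-at-door"}, ops))
```

The push operator reduces the main difference but is not applicable at the start, so the recursion sets up the sub-problem of reaching its precondition (operator sub-goaling), yielding the plan walk-to-box, push-box-to-door.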
Ex: Initial state: ((R & (~P -> Q)) & S)
Goal state: (((Q v P) & R) & ~S)
Reducing the difference in the first conjunct:
(R & (~P -> Q))
(~P -> Q) & R
(~~P v Q) & R
(P v Q) & R
(Q v P) & R
Knowledge Representation
Knowledge is an intellectual acquaintance with, or perception of, fact or truth. A representation
is a way of describing certain fragments of information so that any reasoning system can easily adopt
it for interfacing purposes. Knowledge representation is the study of how knowledge is actually
pictured and how effectively it resembles the representation of knowledge in the human brain.
A knowledge representation system should provide ways of representing complex knowledge
and should possess the following characteristics.
1. The representation scheme should have a set of well-defined syntax and semantics. This helps in
representing various kinds of knowledge.
2. The knowledge representation scheme should have a good expression capacity. A good
expressive capability will catalyze the inference mechanism in its reasoning process.
3. From the computer system's point of view, the representation must be efficient. By this we mean
that it should use only limited resources without compromising on the expressive power.
Representations and mappings:
In order to solve the complex problems encountered in AI, one needs both a large amount of
knowledge and some mechanisms for manipulating that knowledge to create solutions to new
problems. A variety of ways of representing knowledge have been exploited in AI programs.
Facts: truths in some relevant world. These are the things we want to represent.
Representations: representations of facts in some chosen formalism. These are the things we will
actually be able to manipulate.
One way to think of structuring these entities is as two levels:
The knowledge level: The knowledge level at which facts are described.
The symbol level: The symbol level at which representations of objects at the knowledge level are
defined in terms of symbols that can be manipulated by programs.
[Figure: the two-way mapping between facts and internal representations - English understanding maps English text into the internal representation, and English generation maps the internal representation back out to English.]
relationships are important for attributes for the same reason that they are important for other concepts
- they support inheritance.
3. Techniques for reasoning about values: Sometimes values of attributes are specified explicitly when
a knowledge base is created. But often the reasoning system must reason about values it has not been
given explicitly. Several kinds of information can play a role in this reasoning.
4. Single-valued attributes: A specific but very useful kind of attribute is one that is guaranteed to take
a unique value. Knowledge-representation systems have taken several different approaches to
providing support for single-valued attributes:
Introduce an explicit notation for temporal intervals. If two different values are ever asserted for the same temporal interval, signal a contradiction automatically.
Assume that the only temporal interval that is of interest is now, so if a new value is asserted, replace the old value.
Provide no explicit support.
3. Choosing the granularity of representation: Regardless of the particular representation formalism we
choose, it is necessary to answer the question: at what level of detail should the world be
represented? Another way this question is often phrased is: what should be our primitives? Should
there be a small number of low-level ones, or should there be a larger number covering a range of
granularities?
The major advantage of converting all statements into a representation in terms of a small set
of primitives is that the rules that operate on the knowledge need be written only in terms of the
primitives rather than in terms of the many ways in which the knowledge may originally have appeared.
Several AI programs, including those described by Schank and Abelson and by Wilks, are based on
knowledge bases described in terms of a small number of low-level primitives. There are several
arguments against the use of low-level primitives. One is that simple high-level facts may require a lot
of storage when broken down into primitives. A second but related problem is that if knowledge is
initially presented to the system in a relatively high-level form such as English, then substantial work
must be done to reduce the knowledge into primitive form. A third problem with the use of low-level
primitives is that in many domains it is not at all clear what the primitives should be.
Ex: John spotted Sue.
This supports questions such as: Who spotted Sue? Did John see Sue?
The direct answer to the second question is yes, because a decomposition of spotting into primitives includes seeing.
In order to refer to the current state, we can start from the initial state and
look back at all the nodes on the path from the start state to the current state.
Alternatively, we can make the changes to the initial state as they occur, but at every node
where a change takes place, record what to do to undo the move or
change if we need to backtrack.
Knowledge Base:
1. Has information at a higher level of abstraction.
2. Is significantly smaller than a database, and changes are gradual.
3. Operates on a class of objects rather than a single object.
4. Updates are performed by domain experts.
5. Correctness, in a sense, is very elusive.
6. Has the power of inferencing.
7. Is used for data analysis and planning.
8. Knowledge representation is by logic or rules or frames or semantic nets.
9. One has to have a consultation with the system and provide the needed data to obtain the solution.
equations are expressed as procedural knowledge. Declarative knowledge, on the other hand, is
passive knowledge expressed as statements of facts about the world. Personnel data in a database is
typical of declarative knowledge; such data are explicit pieces of independent knowledge. We
define knowledge as justified belief.
Two other knowledge terms, which we shall use occasionally, are epistemology and meta-knowledge.
Epistemology is the study of the nature of knowledge, whereas meta-knowledge is
knowledge about what we know.
Different kinds of widely known knowledge representation:
1. Semantic Nets 2. Frames 3. Conceptual dependency 4. Scripts
Semantic Networks:
1. [Figure: a semantic net for tweety - linked by 'a kind of', 'colour' (yellow) and 'has parts' (wings) arcs.]
2. [Figure: a semantic net for vehicles - Scooter and Motor_bike are each is_a Two_wheeler, which is_a Moving_vehicle; the vehicle has Brakes, an Engine, an Electrical_system and a Fuel_System.]
Semantic nets were introduced by Quillian (1968) to model the semantics of English sentences and
words. He called his structures semantic networks to signify their intended use.
The main idea behind semantic nets is that the meaning of a concept comes from the ways in
which it is connected to other concepts. In a semantic net, information is represented as a set of nodes
connected to each other by a set of labeled arcs, which represent relationships among the nodes.
Semantic nets are a natural way to represent relationships that would appear as ground
instances of binary predicates in predicate logic.
A semantic net representing a sentence:
[Figure: EV7 is an instance of Give, with agent John, object BK3 (an instance of Book) and beneficiary Mary - 'John gave Mary the book'.]
[Figure: John's height points to the value 72; in a second net, John's height H1 and Bill's height H2 are linked by a greater-than arc, with H1 equal to 72 - 'John is taller than Bill'.]
Nodes: In the semantic net, nodes represent entities, attributes, states or events.
Arcs: In the semantic net, arcs give the relationship between the nodes.
Labels: In the semantic net, the labels specify what type of relationship actually exists or describe the
relationship.
38
A generic node is a very general node. In the figure for the semantic network of the Bharathiar
University computer center, the mini-computer system is a generic node because many mini-computer
systems exist and that node has to refer to all of them.
Individual or instance nodes explicitly state that they are specific instances of a generic node.
HCL Horizon-III is an individual node because it is a very specific instance of the mini-computer
system.
[Figure: a semantic net for the Bharathiar University Computer Center (located in Coimbatore, part of Bharathiar University) - the center has a mini-computer system, with instance HCL Horizon-III; a line printer with speed 300 and a hammer bank; and dumb terminals with keyboard and monitor.]
An is-a link is a special type of link because it provides facilities to link a generic node to another
generic node, and an individual node to a generic node.
39
Another major feature of the is-a link is that it generates a hierarchical structure within the
network.
The is-a link has another major property, which is called inheritance. The property of
inheritance is that the properties which a generic node possesses are transmitted to the various
specific instances of the generic node.
Reasoning using semantic networks: Reasoning using semantic networks is an easy task. All that has
to be done is to specify the start node. From the initial node, other nodes are pursued using the links
until the final node is reached. To answer the question "What is the speed of the line printer?" from the
above figure, the reasoning mechanism first finds the node for the line printer. It identifies the arc that
has the characteristic speed; since it points to the value 300, the answer is 300.
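This chase along the arcs can be mirrored directly in code. The triples below paraphrase part of the figure (node and relation spellings are assumptions); a query is a direct lookup, falling back to the is-a link, which also gives inheritance.

```python
# A semantic net as (node, relation, value) triples; lookup follows is-a
# links when a node lacks the attribute itself, giving inheritance.
triples = [
    ("line-printer", "speed", 300),
    ("line-printer", "has-part", "hammer-bank"),
    ("HCL-Horizon-III", "is-a", "mini-computer-system"),
    ("mini-computer-system", "has-part", "dumb-terminal"),
]

def value(node, relation):
    for n, r, v in triples:                 # direct arc from the node
        if n == node and r == relation:
            return v
    for n, r, v in triples:                 # otherwise inherit along is-a
        if n == node and r == "is-a":
            inherited = value(v, relation)
            if inherited is not None:
                return inherited
    return None

print(value("line-printer", "speed"))        # the answer is 300
print(value("HCL-Horizon-III", "has-part"))  # inherited from the generic node
```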
The is-a link structure can be easily represented using predicate logic. A road vehicle is a land
vehicle:
forall x: road-vehicle(x) -> land-vehicle(x)
1. Marcus is a man:
Man(Marcus)
(in the net, an instance link from the node Marcus to the node man).
Partitioned Semantic net: Suppose we want to represent simple quantified expressions in semantic
nets. One way to do this is to partition the semantic net into a hierarchical set of spaces, each of which
corresponds to the scope of one or more variables.
Ex: The dog bit the mail carrier.
[Figure: nodes d, b and m linked by isa arcs to the class nodes Dogs, Bite and Mail-carrier; b has assailant d and victim m.]
The nodes Dogs, Bite and Mail-carrier represent the classes of dogs, bitings and mail carriers respectively,
while the nodes d, b and m represent a particular dog, a particular biting and a particular mail carrier.
This fact can easily be represented by a single net with no partitioning.
But now suppose that we want to represent the fact
Every dog has bitten a mail carrier.
[Figure: a partitioned net with an outer space SA and an inner space S1; the node g isa GS has a form arc into S1, where d isa Dogs, b isa Bite (assailant d, victim m) and m isa Mail-carrier.]
To represent this fact, it is necessary to encode the scope of the universally quantified variable d. The node g
stands for the assertion given above. Node g is an instance of the special class GS of general statements
about the world. Every element of GS has at least two attributes: a form, which states the relation
that is being asserted, and one or more connections, one for each of the universally quantified
variables. There is only one such variable here, d, which can stand for any element of the class Dogs. The
other two variables in the form, b and m, are understood to be existentially quantified. In other words,
for every dog d, there exists a biting event b and a mail carrier m, such that d is the assailant of b and m
is the victim.
Every dog in town has bitten the constable.
[Figure: a partitioned net in which g isa GS has a form containing d isa Town-Dogs and b isa Bite (assailant d, victim c), while the constable node c lies outside the form's space.]
In this net, the node c, representing the victim, lies outside the form of the general statement. Thus it
is not viewed as an existentially quantified variable whose value may depend on the value of d; instead
it is interpreted as standing for a specific entity (in this case, a particular constable), just as do other
nodes in a standard, non-partitioned net.
Next consider: Every dog has bitten every mail carrier.
[Figure: a partitioned net in which g isa GS has two universal connections; its form contains d isa Dogs, m isa Mail-carrier and b isa Bite with assailant d and victim m.]
This is how the fact would be represented. In this case, g has two universal links, one pointing to d, which represents any dog, and
one pointing to m, representing any mail carrier.
An inclusion hierarchy relates the spaces of a partitioned semantic net to each other. For
example, in the figure above, space S1 is included in space SA. Whenever a search process operates in a
partitioned semantic net, it can explore nodes and arcs in the space from which it starts and in other
spaces that contain the starting point, but it cannot go downwards, except in special circumstances,
such as when a form arc is being traversed. So, returning to the figure above, from node d it can be
determined that d must be a dog. But if we were to start at the node Dogs and search for all known
instances of dogs by traversing isa links, we would not find d, since it and the link to it are in the space
S1, which is at a lower level than space SA, which contains Dogs. This is important, since d does not
stand for a particular dog; it is merely a variable that can be instantiated with a value that represents a
dog.
Example: Every batter hit a ball.
[Figure: a partitioned net with g isa GS, whose form contains a batter node isa Batters, a hit node isa Hit and a ball node isa Ball.]
[Figure: a second partitioned net in the same style, relating Batters, Like and Pitcher.]
Conceptual Graphs: A conceptual graph is a graphical portrayal of a mental perception, which consists
of basic or primitive concepts and the relationships that exist between the concepts. A single conceptual
graph is roughly equivalent to a graphical diagram of a natural language sentence where the words are
depicted as concepts and relationships. Conceptual graphs may be regarded as formal building blocks
for associative networks which, when linked together in a coherent way, form a more complex
knowledge structure. A concept may be individual or generic.
Ex: Joe is eating soup with a spoon.
Joe and food (soup) are individual (objects);
eat and spoon are generic.
[eat] - (agent) -> [Joe]
[eat] - (object) -> [food: soup]
[eat] - (instrument) -> [spoon]
Conceptual graphs offer the means to represent natural language statements accurately and to perform
many forms of inference found in common sense reasoning.
Frames: Frames were first introduced by Marvin Minsky (1975) as a data structure to represent a
mental model of a stereotypical situation such as driving a car, attending a meeting or eating in a
restaurant.
Frames are general record like structures, which consist of a collection of slots and slot values.
The slots may be of any size and any type. Slots typically have names and any number of values.
A frame can be defined as a data structure that has slots for various objects, and a collection of
frames consists of expectations for a given situation.
43
A frame structure provides facilities for describing objects, facts about situations, and procedures
on what to do when a situation is encountered. Because of these facilities a frame provides, frames are
used to represent two types of knowledge: declarative/factual and procedural.
Ex:
[Figure: a composite frame whose slots point to sub-frames - a stationary cupboard, a computer, a printer and two dumb terminals.]
Declarative and Procedural frames: A frame that merely contains descriptions of objects is called a
declarative-type/factual/situational frame.
[Figure: declarative frames - 'AC unit' (model, capacity, power consumption), 'computer' (model, CPU, memory), 'stationary cupboard' (length, breadth, height), 'printer' (model, speed, font quality) and 'terminal' (model, monitor type, keyboard type).]
Apart from the declarative part in a frame, it is also possible to attach slots which
explain how to perform things. In other words, it is possible to have procedural knowledge represented
in a frame. Such frames, which have procedural knowledge embedded in them, are called action
frames. The action frame has the following slots:
1. Actor slot: holds information about who is performing the activity.
2. Object slot: holds information about the item to be operated on.
3. Source slot: holds information about where the action has to begin.
4. Destination slot: holds information about the place where the action has to end.
5. Task slot: generates the necessary sub-frames required to perform the operation.
Ex:
Name: cleaning the jet of the carburetor
Actor: expert
Object: carburetor
Source: scooter
Destination: scooter
The generic frame merely describes that the expert, in order to clean the jet of the carburetor,
has to perform the following operations:
Removing the carburetor from the scooter
Opening it up to expose all parts
Cleaning the nozzle
Refitting it in the scooter
Here both the source and the destination are the scooter.
Reasoning using frames: The task of action frames is to provide a facility for procedural attachment and
to help in transforming the initial state to the goal state. It also helps in breaking the entire problem into
sub-tasks, which can be described as a top-down methodology. It is possible to represent any task using
these action frames.
Reasoning using frames is done by instantiation. The instantiation process begins when the given
situation is matched with frames that already exist. The reasoning process tries to match the frame with
the situation and later fills up slots for which values must be assigned. The values assigned to the slots
depict a particular situation, and the reasoning process tries to move from one frame to another to
match the current situation. This process builds up a wide network of frames, thereby facilitating the
building of a knowledge base for representing common-sense knowledge.
Frame-based representation languages:
Frame representations have become popular enough that special high-level frame-based
representation languages have been developed. Most of these languages use LISP as the host language.
They typically have functions to create, access, modify, update and display frames.
Implementation of frame structures: One way to implement frames is with property lists. An atom is
used as the frame name and slots are given as properties. Facets and values within slots become lists
of lists for the slot property.
(putprop 'train
         '((type (value passenger))
           (class (value first second sleeper))
           (food (restaurant (value hot-meals))
                 (fast-food (value cold-snacks))))
         'land-transport)
Another way to implement frames is with an association list (an a-list), that is, a list of
sublists where each sublist contains a key and one or more corresponding values. The same train frame
would be represented using an a-list as
(setq train '((AKO land-transport)
              (type (value passenger))
              (class (value first second sleeper))
              (food (restaurant (value hot-meals))
                    (fast-food (value cold-snacks)))))
It is also possible to represent frame-like structures using object-oriented programming
extensions to LISP, such as Flavors.
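In a language without property lists, the same train frame can be mirrored with nested dictionaries, with slot access falling back along the AKO (a-kind-of) link; this fallback is how frame systems get inheritance. The 'medium' slot on the parent frame is an invented example value, not from the text.

```python
# The train frame as nested dicts; get_slot inherits through AKO links.
frames = {
    "land-transport": {"medium": {"value": ["land"]}},   # assumed parent slot
    "train": {
        "AKO": "land-transport",
        "type": {"value": ["passenger"]},
        "class": {"value": ["first", "second", "sleeper"]},
        "food": {"restaurant": {"value": ["hot-meals"]},
                 "fast-food": {"value": ["cold-snacks"]}},
    },
}

def get_slot(frame_name, slot):
    frame = frames[frame_name]
    if slot in frame:
        return frame[slot]
    parent = frame.get("AKO")
    return get_slot(parent, slot) if parent else None    # inherit via AKO

print(get_slot("train", "class"))
print(get_slot("train", "medium"))   # found on land-transport via AKO
```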
Scripts: Scripts are another structured representation scheme, introduced by Roger Schank (1977).
They are used to represent sequences of commonly occurring events. They were originally developed to
capture the meanings of stories and to understand natural language text.
A script is a predefined frame-like structure, which contains expectations, inferences and other
knowledge that is relevant to a stereotypical situation.
Frames represent a general knowledge representation structure, which can accommodate all
kinds of knowledge. Scripts, on the other hand, help exclusively in representing stereotyped events that
take place in day-to-day activity.
Some such events are
1. Going to hotel, eating something, paying the bill and exiting.
2. Going to theatre, getting a ticket, viewing the film and leaving.
3. Going to super market, with a list of items to be purchased, putting the items needed on a
trolley, paying for them.
4. Leaving home for office on a two-wheeler, parking the two-wheeler at the railway station,
boarding the train to the place of work and going to the place of work.
5. Going to the bank for a withdrawal, filling in the withdrawal slip/cheque, presenting it to the
cashier, getting the money and leaving the bank.
All these situations are stereotyped in nature, and the specific properties of the restricted domain can
be exploited with special-purpose structures.
A script is a knowledge representation structure that is extensively used for describing
stereotyped sequences of actions. It is a special case of the frame structure. Scripts are intended for
capturing situations in which behavior is very stylized. Scripts tell people what can happen in a
situation, what events follow and what role every actor plays. It is possible to visualize such situations,
and scripts present a way of representing them effectively, so that a reasoning mechanism understands
exactly what happens in that situation.
Reasoning with Scripts: Reasoning with a script begins with the creation of a partially filled script
instantiated to meet the current situation. Next, a known script which matches the current situation is
recalled from memory. The script name, preconditions or other key words provide index values
with which to search for the appropriate script. Inference is accomplished by filling in slots
with inherited and default values that satisfy certain conditions.
Advantages:
1. Scripts permit one to identify what scenes must have preceded when an event takes place.
2. It is possible using scripts to describe each and every event to the minutest detail, so that
enough light is thrown on implicitly mentioned events.
3. Scripts provide a natural way of providing a single interpretation from a variety of
observations.
4. Scripts are used in natural language understanding system and serve their purpose effectively in
areas for which they are applied.
Disadvantages:
1. It is difficult to share knowledge across scripts; what is happening in a script is true only for that
script.
2. Scripts are designed to represent knowledge in stereotyped situations only and hence cannot be
generalized.
Important components:
1. Entry conditions: Basic conditions that must be fulfilled. Here the customer is hungry and has
money to pay for the eatables.
2. Results: Present the situations which describe what happens after the script has occurred.
Here, the customer, after satisfying his hunger, is no longer hungry. The amount of money he has is
reduced, and the owner of the restaurant now has more money. Optional results can also be
stated here, e.g. the customer is pleased with the quality of food, quality of service etc., or can
be displeased.
3. Props: These indicate the objects that exist in the script. In a restaurant one has
tables, chairs, menu, food, money, etc.
4. Roles: What the various characters play is brought under the slot of roles. These characters are
implicitly involved, but some of them play an explicit role. For example, the waiter and cashier play
an explicit role, whereas the cook and owner are implicitly involved.
5. Track: Represents a specific instance of a generic pattern. Restaurant is a specific instance of a
hotel. This slot permits one to inherit the characteristics of the generic node.
6. Scenes: Sequences of activities are described in detail.
Ex 1: Going to a restaurant
Script: Going to a restaurant
Track: Restaurant
Props: Food, tables, menu, money
Roles: 1. Explicit: customer, waiter, cashier. 2. Implicit: owner, cook.
Entry conditions: Customer is hungry. Customer has money.
Scene 1: Entering the restaurant
Customer enters the restaurant. Customer PTRANS into restaurant.
Customer scans the tables. Customer ATTENDS eyes to the tables.
Customer decides where to sit. Customer MBUILD where to sit.
Customer moves to the table. Customer PTRANS to sit there.
Scene 2: Ordering the food
Customer asks for menu. Customer MTRANS menu.
Waiter brings it. Waiter PTRANS the menu.
Customer decides choice of food. Customer MBUILD choice of food.
Customer orders that food. Customer MTRANS that food.
Scene 3: Eating the food
Cook gave food to waiter. Cook ATRANS food to waiter.
Waiter gave the food to customer. Waiter ATRANS food to customer.
Customer eats the food with a spoon. Customer INGESTS the food with a spoon.
Scene 4: Paying the bill
Customer asks for bill. Customer MTRANS for bill.
Waiter brings it. Waiter PTRANS it.
Customer gave a check to waiter. Customer ATRANS a check to waiter.
Waiter brings the balance amount. Waiter PTRANS the balance amount.
Customer gave tip to waiter. Customer ATRANS tip to him.
Customer moves out. Customer PTRANS out.
Results: Customer is not hungry. Customer has less money. Owner has more money. Owner has less food.

Ex 2: Going to a supermarket
Script: Going to a supermarket
Track: Supermarket
Props: Shopping cart, aisles, market items, checkout stands, cashier, money
Roles: 1. Explicit: shopper, attendants, clerks, cashier. 2. Implicit: owner of supermarket, producer of items.
Entry conditions: Shopper needs groceries. Market open.
Scene 1: Enter market
Shopper PTRANS into market.
Shopper PTRANS shopping cart to shopper.
Scene 2: Shop for items
Shopper MOVES shopper through aisles.
Shopper ATTENDS eyes to display items.
Shopper PTRANS items to shopper's cart.
Scene 3: Check out
Shopper MOVES to checkout stand.
Shopper ATTENDS eyes to charges display.
Shopper ATRANS money to cashier.
Sacker ATRANS bags to shopper.
Scene 4: Exit market
Shopper PTRANS shopper to exit of market.
Results: Shopper has less money. Shopper has grocery items. Market has less grocery items. Market has more money.
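The header slots of the restaurant script can be held in a small record, and the first step of reasoning with scripts (recalling a script whose entry conditions match the situation) then becomes a subset test. The dictionary layout is an illustrative assumption; the slot contents follow the notes.

```python
# Restaurant script header; script selection = matching entry conditions.
restaurant_script = {
    "track": "restaurant",
    "props": ["tables", "chairs", "menu", "food", "money"],
    "roles": {"explicit": ["customer", "waiter", "cashier"],
              "implicit": ["owner", "cook"]},
    "entry_conditions": {"customer is hungry", "customer has money"},
    "scenes": ["entering the restaurant", "ordering the food",
               "eating the food", "paying the bill"],
}

def script_matches(script, situation):
    """A script applies when every entry condition holds in the situation."""
    return script["entry_conditions"] <= situation

print(script_matches(restaurant_script,
                     {"customer is hungry", "customer has money"}))  # True
print(script_matches(restaurant_script, {"customer is hungry"}))     # False
```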
Conceptual Dependency (CD): Conceptual dependency is a theory of how to represent the kind of
knowledge about events that is usually contained in natural language sentences. The goal is to represent
the knowledge in a way that
Facilitates drawing inferences from the sentences.
Is independent of the language in which the sentences were originally stated.
The theory was first described in Schank 1973 and was further developed in Schank 1975. It has
been implemented in a variety of programs that read and understand natural language text. Unlike
semantic nets, which provide only a structure into which nodes representing information at any level can
be placed, conceptual dependency provides both a structure and a specific set of primitives, at a
particular level of granularity, out of which representations of particular pieces of information can
be constructed.
Conceptual dependency (CD) is a theory of natural language processing which mainly deals
with the representation of the semantics of a language. The main motivations for the development of CD
as a knowledge representation technique are given below.
To construct computer programs that can understand natural language.
To make inferences from statements and also to identify conditions in which two sentences
can have a similar meaning.
To provide facilities for the system to take part in dialogues and answer questions.
To provide a necessary plank so that sentences in one language can be easily translated into other
languages.
To provide means of representation which are language independent.
Knowledge is represented in CD by elements called conceptual structures.
Apart from the primitive CD actions, one has to make use of the following six categories of
objects:
1. PPs (Picture Producers): Only physical objects are picture producers.
2. ACTs: Actions are done by an actor to an object.
Major ACTs (the CD primitives) and their explanations:
a. ATRANS: transfer of an abstract relationship (e.g. give)
b. PTRANS: transfer of the physical location of an object (e.g. go)
c. PROPEL: application of physical force to an object (e.g. throw)
d. MOVE: movement of a body part of an animal by the animal (e.g. kick)
e. GRASP: grasping of an object by an actor (e.g. hold)
f. INGEST: taking of an object by an animal to the inside of that animal (e.g. eat, drink)
g. EXPEL: expulsion of an object from inside the body by an animal (e.g. spit)
h. MTRANS: transfer of mental information between animals or within an animal (e.g. tell)
i. MBUILD: construction of new information from old information (e.g. decide)
j. SPEAK: production of sounds (e.g. say)
k. ATTEND: focusing of a sense organ toward a stimulus (e.g. listen)
3. LOCs: Locations.
Every action takes place at some location, which serves as source and destination.
4. Ts: Times.
An action can take place at a particular location at a given specified time. The time can
be represented on an absolute or a relative scale.
5. AAs: Action Aiders.
These serve as modifiers of actions. The action PROPEL has a speed factor associated
with it, which is an action aider.
6. PAs: Picture Aiders.
These serve as aiders of picture producers. Every object that serves as a PP has certain
characteristics by which it is defined; PAs practically serve PPs by defining these
characteristics.
The main goal of CD representation is to make explicit what is implicit. That is why every
statement that is made has not only the actors and objects but also times and locations, sources and
destinations.
The following set of conceptual tense modifiers makes the usage of CD more precise: p (past),
f (future), t (transition), ts (start of transition), tf (finish of transition), k (continuing),
? (interrogative), / (negative), c (conditional) and nil (present).
CD brought forward the notion of language independence, because all ACTs are language-independent
primitives.
Examples of CD representations (a two-way link <=> connects an actor and an ACT; o, R and I mark the object, recipient and instrument cases):
Bird <=> PTRANS : "The bird flew." (a PP related to an ACT)
Joe <=> student : "Joe is a student." (a PP related to another PP)
Joe <=> PROPEL --o--> door : "Joe pushed the door."
Joe <=> ATRANS --o--> flower --R--> to Sue, from Joe : "Joe gave Sue a flower."
Joe <=> INGEST --o--> soup --I--> spoon : "Joe ate some soup (with a spoon)."
I <=> ATRANS --o--> book --R--> to the man, from I : "I gave the man a book."
[Figure: the CD graph for "I stopped smoking" - one <=> INGEST of smoke from a cigarette is linked by a vertical, c-marked causality arc to the state change alive -> dead, and by a horizontal causality arc to I <=> INGEST (marked tf p) of cigarette smoke.]
The vertical causality link indicates that smoking kills one. Since it is marked c, however, we
know only that smoking can kill one, not that it necessarily does.
The horizontal causality link indicates that it is the first causality that made me stop smoking. The
qualification p attached to the dependency between I and INGEST indicates that the smoking has
stopped and that the stopping happened in the past.
There are three important ways in which representing knowledge using the CD model
facilitates reasoning with the knowledge.
1. Fewer inference rules are needed than would be required if knowledge were not broken down
into primitives.
2. Many inferences are already contained in the representation itself.
3. The initial structure that is built to represent the information contained in one sentence will
have holes that need to be filled. These holes can serve as an attention focuser for the program
that must understand ensuing sentences.
LOGIC
Logic can be defined as a scientific study of the process of reasoning and the system of rules
and procedures that help in the reasoning process.
Basically, the logic process takes in some information (called premises) and produces some
outputs (called Conclusions).
Logic is broadly classified into two categories: propositional logic and predicate logic.
There are two kinds of propositions: atomic propositions (also called simple
propositions) and molecular propositions (also called compound propositions).
Combining one or more atomic propositions using a set of logical connectives forms a molecular
proposition.
Ex: 'It is raining' is atomic; 'It is raining and it is cold' (A & B) is molecular.
Molecular propositions are much more useful than atomic propositions because real-world
problems mostly involve molecular propositions.
Syntax of propositional logic :
1. ~A       (Negation of A)
2. A & B    (Conjunction of A with B)
3. A v B    (Inclusive disjunction of A with B)
4. A -> B   (A implies B)
5. A <-> B  (Material bi-conditional of A with B)
6. A + B    (Exclusive disjunction of A with B)
7. A | B    (Joint denial of A with B)
8. A ↓ B    (Disjoint denial of A with B)
Semantics of logical propositions: A clear meaning of the logical propositions can be arrived at by
constructing appropriate truth tables for the molecular propositions.
Truth table:
A  B | ~A  ~B | AvB  A&B  A->B  A<->B | A+B  A|B  A↓B
T  T |  F   F |  T    T    T     T    |  F    F    F
T  F |  F   T |  T    F    F     F    |  T    T    F
F  T |  T   F |  T    F    T     F    |  T    T    F
F  F |  T   T |  F    F    T     T    |  F    T    T
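The connective definitions can be checked mechanically with Python's built-in boolean operators; a small sketch (itertools is part of the standard library):

```python
# Enumerate all four truth assignments and compute each connective's value.
from itertools import product

for A, B in product([True, False], repeat=2):
    print(A, B,
          A or B,          # A v B
          A and B,         # A & B
          (not A) or B,    # A -> B
          A == B,          # A <-> B
          A != B)          # A + B (exclusive disjunction)
```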
Logical equivalences:
1. A ≡ ~~A ≡ A & A
2. A & B ≡ B & A
3. A v B ≡ B v A
4. (A & (B & C)) ≡ ((A & B) & C)
5. (A v (B v C)) ≡ ((A v B) v C)
6. (A & (B v C)) ≡ ((A & B) v (A & C))
7. (A v (B & C)) ≡ ((A v B) & (A v C))
8. ~(A & B) ≡ ~A v ~B
9. ~(A v B) ≡ ~A & ~B
10. A -> B ≡ ~A v B ≡ ~(A & ~B) ≡ (~B -> ~A)
11. A -> (B -> C) ≡ ((A & B) -> C)
12. A <-> B ≡ (A & B) v (~A & ~B) ≡ (A -> B) & (B -> A)
Tautologies: Propositions that are true for all possible combinations of truth values of their atomic parts
are called tautologies. This implies that a tautology is always true.
Ex: ((A & (A -> B)) -> B) is a tautology.
A  B | A->B | A&(A->B) | (A&(A->B))->B
T  T |  T   |    T     |      T
T  F |  F   |    F     |      T
F  T |  T   |    F     |      T
F  F |  T   |    F     |      T
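That ((A & (A -> B)) -> B) is a tautology can also be verified by brute force over all truth assignments; a minimal sketch:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# True for every one of the four rows, hence a tautology.
is_tautology = all(implies(A and implies(A, B), B)
                   for A, B in product([True, False], repeat=2))
print(is_tautology)  # True
```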
Contradiction: Whenever the truth value of a sentence is false for all combinations of its
constituents, the sentence is called a contradiction. Ex: A & ~A.
Contingent: A statement is called contingent if its truth table has both true and false among its outputs.
Ex: A -> B.
Normal forms in propositional logic: There are two major normal forms of statements in propositional
logic: Conjunctive Normal Form (CNF) and Disjunctive Normal Form (DNF).
A formula A is said to be in CNF if it has the form A = A1 & A2 & ... & An, n >= 1, where each
A1, A2, ..., An is a disjunction of atoms or negations of atoms.
A formula A is said to be in DNF if it has the form A = A1 v A2 v ... v An, n >= 1, where each
A1, A2, ..., An is a conjunction of atoms or negations of atoms.
Conversion procedure of Normal Form :
Step 1 : Eliminate implications and bi-conditionals: A -> B = ~A v B
A <-> B = (A -> B) & (B -> A) = (~A v B) & (~B v A).
Step 2 : Reduce the scope of NOT symbol by the formula (~(~A)) = A
Step 3 : Use distributive laws and other equivalent formulas.
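The three steps above can be sketched as a small program. The tuple-based formula encoding ('var', 'not', 'and', 'or', 'imp', 'iff') is an assumption made for illustration:

```python
# A minimal sketch of the three-step CNF conversion described above.
# Formulas are nested tuples, e.g. ('imp', ('var', 'A'), ('var', 'B')).

def eliminate(f):
    """Step 1: remove -> and <-> using A -> B = ~A v B."""
    op = f[0]
    if op == 'var':
        return f
    if op == 'not':
        return ('not', eliminate(f[1]))
    a, b = eliminate(f[1]), eliminate(f[2])
    if op == 'imp':
        return ('or', ('not', a), b)
    if op == 'iff':
        return ('and', ('or', ('not', a), b), ('or', ('not', b), a))
    return (op, a, b)

def push_not(f):
    """Step 2: move ~ inward (De Morgan) and drop double negations."""
    if f[0] == 'not':
        g = f[1]
        if g[0] == 'not':
            return push_not(g[1])
        if g[0] == 'and':
            return ('or', push_not(('not', g[1])), push_not(('not', g[2])))
        if g[0] == 'or':
            return ('and', push_not(('not', g[1])), push_not(('not', g[2])))
        return f
    if f[0] in ('and', 'or'):
        return (f[0], push_not(f[1]), push_not(f[2]))
    return f

def distribute(f):
    """Step 3: distribute v over & so only literals sit under v."""
    if f[0] == 'and':
        return ('and', distribute(f[1]), distribute(f[2]))
    if f[0] == 'or':
        a, b = distribute(f[1]), distribute(f[2])
        if a[0] == 'and':
            return ('and', distribute(('or', a[1], b)), distribute(('or', a[2], b)))
        if b[0] == 'and':
            return ('and', distribute(('or', a, b[1])), distribute(('or', a, b[2])))
        return ('or', a, b)
    return f

def to_cnf(f):
    return distribute(push_not(eliminate(f)))

A, B = ('var', 'A'), ('var', 'B')
print(to_cnf(('imp', A, B)))  # ('or', ('not', ('var', 'A')), ('var', 'B'))
```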
Propositional logic works fine in situations where the result is either true or false but not both.
However, there are many real-life situations that cannot be treated this way. In order to overcome this
deficiency, first-order logic (predicate logic) uses three additional notions: predicates, terms and
quantifiers.
Predicates: a predicate is defined as a relation that binds two atoms together.
Ex: 'Bhaskar likes aeroplanes' is represented as
likes(bhaskar, aeroplanes)
Here likes is a predicate that binds the two atoms 'bhaskar' and 'aeroplanes'.
The symbols and rules of combination of predicate logic are:
(1) Predicate symbols: 'Rama loves Sita' is written loves(Rama, Sita); here loves is a predicate symbol.
(2) Constant symbols: here the constant symbols are Rama and Sita.
(3) Variable symbols: 'X loves Y'; here X and Y are variable symbols.
(4) Function symbols: these are used to represent a special type of relationship or mapping.
It is also possible to have a function as an argument.
'Ravi's father is Rani's father' is represented as father(father(ravi), rani).
Here father is a predicate and father(ravi) is a function.
Constants, variables and functions are referred to as terms; predicates are referred to as atomic
formulas.
The statements in predicate logic are termed well-formed formulas, i.e. WFFs. A predicate with no
variables is called a ground atom.
Connectives: formulas in predicate calculus can be combined to form more complex formulas
using several connectives: the OR connective (v), the AND connective (&), the implies connective (->)
and the negation connective (~).
Quantifiers: a quantifier is a symbol that permits one to declare or identify the range or scope of the
variables in a logical expression.
There are two basic quantifiers used in logic: the universal quantifier (for all) and the existential
quantifier (there exists).
If a is a variable, then 'for all a' is read as 1. for all a, 2. for each a, 3. for every a.
Similarly, if a is a variable, then 'there exists a' is read as 1. there exists an a, 2. for some a, 3. for at least one a.
Variables:
Free and bound variables: a variable in a formula is free if and only if its occurrence is outside the
scope of a quantifier over that variable.
A variable is also free in a formula if at least one occurrence of it is free.
Ex: (for all x)(there exists y)(A(x, y, z)) and (for all z)(B(y, z)) --------(1)
In this formula, the variable z is free in the first portion (for all x)(there exists y)(A(x, y, z)),
and y is free in the second portion.
A variable in a formula is bound if and only if its occurrence is within the scope of a quantifier over
that variable; a variable is also bound in situations where at least one occurrence of it is bound.
Ex: (for all x)(A(x) -> B(x))
In this formula, the quantifier 'for all' applies over the entire formula, so the scope of the quantifier
is A(x) -> B(x). Any change to the quantified variable has an effect on both A(x) and B(x);
hence the variable x is bound.
Normal forms in predicate logic: in predicate logic there is one normal form, called prenex (prefix)
normal form.
A formula A in predicate logic is said to be in prefix normal form if it has the form
(Q1 x1)(Q2 x2)...(Qn xn) B
where each (Qi xi) is either (for all xi) or (there exists xi), and B is a formula without any quantifiers.
Ex: Convert the formula (for all x)(A(x) -> (there exists y) B(x, y)) into prefix normal form.
Sol: (for all x)(A(x) -> (there exists y) B(x, y))
= (for all x)(~A(x) v (there exists y) B(x, y))
= (for all x)(there exists y)(~A(x) v B(x, y)), which is in prefix normal form.
Syntax for FOPL:
Connectives: there are five connective symbols: ~, &, v, ->, <->.
Clause form: A clause is defined as a WFF consisting of a disjunction of literals. The resolution
process, when it is applicable, is applied to a pair of parent clauses and produces a new clause.
Conversion to clausal form: one method we shall examine, called resolution, requires that all
statements be converted into a normalized clausal form. We define a clause as the disjunction of a
number of literals. A ground clause is one in which no variables occur in the expression. A Horn clause
is a clause with at most one positive literal.
To transform a sentence into clasual form requires the following steps:
1. Eliminate all implication and equivalence symbols.
2. Move negation symbols into individual atoms.
3. Rename variables if necessary so that all remaining quantifiers have different variable assignments.
4. Replace existentially quantified variables with special functions and eliminate the corresponding
quantifiers.
5. Drop all universal quantifiers and put the remaining expression into CNF (disjunctions are moved
down to the literals).
6. Drop all conjunction symbols, writing each clause previously connected by a conjunction on a
separate line.
We describe the process of eliminating the existential quantifiers through a substitution process. This
process requires that all such variables be replaced by Skolem functions: arbitrary
functions which can always assume the correct value required of an existentially quantified variable.
Example in predicate logic: (there exists u)(for all v)(for all x)(there exists y): P(f(u), v, x, y) -> Q(u, v, y)
The Skolem form is (for all v)(for all x): P(f(a), v, x, g(v, x)) -> Q(a, v, g(v, x)), where the
existential u is replaced by the Skolem constant a, and y by the Skolem function g(v, x) of the
preceding universals.
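The bookkeeping of step 4 can be sketched as a small routine. The quantifier-prefix encoding and the Skolem-function names g0, g1, ... are assumptions made for illustration:

```python
# Skolemization sketch: each existential variable is replaced by a Skolem
# function of the universal variables in whose scope it occurs (a Skolem
# constant when there are none).

def skolemize(prefix):
    """prefix: list of ('forall'|'exists', var); returns var -> replacement."""
    universals, subst, n = [], {}, 0
    for q, v in prefix:
        if q == 'forall':
            universals.append(v)
        else:
            subst[v] = (f'g{n}', tuple(universals))
            n += 1
    return subst

# The example above: exists u, forall v, forall x, exists y.
s = skolemize([('exists', 'u'), ('forall', 'v'), ('forall', 'x'), ('exists', 'y')])
print(s)  # {'u': ('g0', ()), 'y': ('g1', ('v', 'x'))}
```

Here u becomes a Skolem constant (no arguments) and y becomes a function of v and x, matching the worked example.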
Example: convert the following expression into clausal form:
(there exists x)(for all y)((for all z) P(f(x), y, z) -> ((there exists u) Q(x, u) & (there exists v) R(y, v)))
After application of step 1:
(there exists x)(for all y)(~(for all z) P(f(x), y, z) v ((there exists u) Q(x, u) & (there exists v) R(y, v)))
After application of step 2:
(there exists x)(for all y)((there exists z) ~P(f(x), y, z) v ((there exists u) Q(x, u) & (there exists v) R(y, v)))
After application of step 4 (step 3 is not required), with Skolem replacements x = a, z = g(y), u = h(y), v = l(y):
(for all y)(~P(f(a), y, g(y)) v (Q(a, h(y)) & R(y, l(y))))
After application of step 5 the result is:
(for all y)((~P(f(a), y, g(y)) v Q(a, h(y))) & (~P(f(a), y, g(y)) v R(y, l(y))))
Finally, after application of step 6 we obtain the clausal form:
~P(f(a), y, g(y)) v Q(a, h(y))
~P(f(a), y, g(y)) v R(y, l(y))
Example in predicate logic: Suppose we know that all Romans who know Marcus either hate Caesar
or think that anyone who hates anyone is crazy. This can be represented as:
for all x: [Roman(x) & know(x, Marcus)] -> [hate(x, Caesar) v (for all y: (there exists z: hate(y, z)) -> thinkcrazy(x, y))]
Unification: Unification is a procedure that compares two literals and discovers whether there exists a
set of substitutions that makes them identical.
Any substitution that makes two or more expressions equal is called a unifier for the expressions.
Applying a substitution s to an expression E produces an instance E' of E, where E' = Es. Given two
unifiable expressions C1 and C2 with a unifier B such that C1B = C2B, we say that B is a
most general unifier (mgu) if every other unifier d is an instance of B. For example, two unifiers for the
literals P(u, b, v) and P(a, x, y) are B = {a/u, b/x, y/v} and d = {a/u, b/x, c/v, c/y}; B is the mgu,
since d is an instance of it.
Unification can sometimes be applied to literals within the same clause. When an mgu exists
such that two or more literals within a clause are unified, the clause remaining after deletion of all but
one of the unified literals is called a factor of the original clause.
The basic idea of unification is very simple. To attempt to unify two literals, we first check whether their
initial predicate symbols are the same. If so, we can proceed; otherwise there is no way they can be unified,
regardless of their arguments.
Unification has deep mathematical roots and is useful in many AI programs, for example theorem
provers and natural language parsers.
Algorithm: Unify(l1, l2)
1. If l1 and l2 are both variables or constants, then:
a) If l1 and l2 are identical, return NIL.
b) Else if l1 is a variable: if l1 occurs in l2 then return {FAIL}, else return {l2/l1}.
c) Else if l2 is a variable: if l2 occurs in l1 then return {FAIL}, else return {l1/l2}.
d) Else return {FAIL}.
2. If the initial predicate symbols of l1 and l2 are not identical, return {FAIL}.
3. If l1 and l2 have a different number of arguments, return {FAIL}.
4. Set SUBST to NIL. (At the end of this procedure, SUBST will contain all the substitutions used to unify
l1 and l2.)
5. For i = 1 to the number of arguments in l1:
a) Call Unify with the i-th argument of l1 and the i-th argument of l2, putting the result in S.
b) If S contains FAIL, return {FAIL}.
c) If S is not equal to NIL, then:
i) Apply S to the remainder of both l1 and l2.
ii) SUBST := APPEND(S, SUBST).
6. Return SUBST.
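The algorithm above can be sketched in a few lines of Python. The tuple encoding of literals and the convention that variables are strings starting with '?' are assumptions made for illustration:

```python
# A sketch of the unification algorithm. Literals are tuples ('P', arg1, ...);
# variables are strings starting with '?'.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def occurs(v, t):
    if v == t:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a) for a in t[1:])
    return False

def apply_subst(s, t):
    if isinstance(t, str):
        return s.get(t, t)
    return (t[0],) + tuple(apply_subst(s, a) for a in t[1:])

def unify(l1, l2, s=None):
    """Return a substitution dict unifying l1 and l2, or None on failure."""
    s = dict(s or {})
    l1, l2 = apply_subst(s, l1), apply_subst(s, l2)
    if l1 == l2:
        return s                       # step 1a: identical
    if is_var(l1):
        if occurs(l1, l2):
            return None                # occurs check: fail
        s[l1] = l2
        return s
    if is_var(l2):
        return unify(l2, l1, s)
    if isinstance(l1, tuple) and isinstance(l2, tuple):
        if l1[0] != l2[0] or len(l1) != len(l2):
            return None                # steps 2-3: predicate/arity mismatch
        for a, b in zip(l1[1:], l2[1:]):
            s = unify(a, b, s)         # step 5: unify arguments in turn
            if s is None:
                return None
        return s
    return None

# P(u, b, v) and P(a, x, y) from the text: the mgu binds u=a, x=b, v=y.
print(unify(('P', '?u', 'b', '?v'), ('P', 'a', '?x', '?y')))
# {'?u': 'a', '?x': 'b', '?v': '?y'}
```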
Resolution: Robinson in 1965 introduced the resolution principle, which can be directly applied to
any set of clauses. The principle is: given any two clauses A and B, if there is a literal P1 in A which has a
complementary literal P2 in B, delete P1 and P2 from A and B and construct a disjunction of the remaining
literals. The clause so constructed is called the resolvent of A and B.
Resolution in propositional logic:
Ex 1: A: P v Q v R,  B: ~P v Q v R,  C: ~Q v R
Resolving A and B gives Q v R.
Resolving (Q v R) with C gives R.
Ex 2: A: P v Q v R,  B: ~P v R,  C: ~Q,  D: ~R
X = Q v R  (resolvent of A and B)
Y = R      (resolvent of X and C)
Z = Nil    (resolvent of Y and D, the empty clause)
The resolution procedure is a simple iterative process. At each step two clauses, called the parent
clauses, are compared (resolved), yielding a new clause that has been inferred from them. The new
clause represents the ways in which the two parent clauses interact with each other.
Resolution works on the principle of identifying complementary literals in two clauses and deleting them,
thereby forming a new clause. The process is simple and straightforward when the literals are identical;
in other words, for clauses containing no variables resolution is easy. When there are variables
the problem becomes complicated, and this necessitates making proper substitutions.
There are three major types of substitutions:
1. Substitution of a variable by a constant.
2. Substitution of a variable by another variable.
3. Substitution of a variable by a function that does not contain the same variable.
Algorithm:
1. Convert all the statements of F to clause form.
2. Negate P and convert the result to clause form. Add it to the set of clauses obtained in step 1.
3. Repeat until either a contradiction is found, no progress can be made, or a predetermined amount
of effort has been expended:
a) Select two clauses; call these the parent clauses.
b) Resolve them together. The resolvent will be the disjunction of all the literals of both
parent clauses, with appropriate substitutions performed, and with the following exception: if there is a
pair of literals T1 and ~T2 such that one of the parent clauses contains T1 and the other contains ~T2, and
T1 and T2 are unifiable, then neither T1 nor ~T2 should appear in the resolvent. We call T1 and ~T2
complementary literals. Use the substitution produced by the unification to create the resolvent. If there
is more than one pair of complementary literals, only one pair should be omitted from the resolvent.
c) If the resolvent is the empty clause, then a contradiction has been found. If it is not, add it to
the set of clauses available to the procedure.
Resolution in predicate calculus:
Theorem proving using resolution.
There are two basic methods of theorem proving.
Method 1: Start with the given axioms, use the rules of inference, and prove the theorem.
Method 2: Prove that the negation of the result cannot be true.
The second method is commonly known as theorem proving by refutation. The methodology is:
Step 1. Form the negation of the result to be proved.
Step 2. Add it as a valid statement to the given set of statements.
Step 3. Perform resolution on these statements until a contradiction is encountered.
Step 4. Conclude that the contradiction is due to the assumed negation of the result.
Step 5. So the assumed negation is false; that is, the result to be proved is true.
The following simple example will show how these two methods help in theorem proving.
Example:
Given that a) for all x: [Physician(x) -> knows-surgery(x)]
b) Physician(Bhaskar)
prove that knows-surgery(Bhaskar).
Proof using method 1:
Modus ponens states that if there is an axiom of the form P -> Q and another of the form P, then
Q logically follows. Taking Physician(Bhaskar) as P and knows-surgery(Bhaskar) as Q (instantiating x
to Bhaskar in axiom a), the result knows-surgery(Bhaskar) logically follows.
Proof using method 2:
Assume the negation of the result:
~knows-surgery(Bhaskar) ----(1)
The given axioms are:
Physician(Bhaskar) ----(2)
for all x: [Physician(x) -> knows-surgery(x)] ----(3)
Equation (3) can be written as:
~Physician(x) v knows-surgery(x) ----(4)
(The quantifier is universal. If it had been existential, a Skolem function would have to be used.)
In Eq. (4), substitute x = Bhaskar. So we have:
~Physician(Bhaskar) v knows-surgery(Bhaskar) ----(5)
Resolving Eqs. (1) and (5), we have:
~Physician(Bhaskar) ----(6)
Resolving Eqs. (2) and (6), we have a contradiction (the empty clause).
This contradiction was due to the assumption that was made, i.e. the negation of the result. Hence the
negation of the result is false, or the result is true.
Proving by refutation is much simpler than the method using the rules of inference.
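Treating Physician(Bhaskar) as P and knows-surgery(Bhaskar) as Q, the refutation reduces to three propositional clauses, and the two resolution steps can be traced by hand with set operations (the encoding is illustrative):

```python
# Clause (4)/(5): ~Physician v knows-surgery; (2): Physician; (1): negated goal.
clauses = [{'~P', 'Q'},
           {'P'},
           {'~Q'}]

# Resolve (1) with (5) on the complementary pair Q / ~Q:
step1 = (clauses[0] - {'Q'}) | (clauses[2] - {'~Q'})   # {'~P'}
# Resolve the result with (2) on the pair P / ~P:
step2 = (step1 - {'~P'}) | (clauses[1] - {'P'})        # set(): contradiction
print(step1, step2)
```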
Question-answering:
Chang (1973) divides questions into four major classes.
Class 1: questions that require a yes or no answer.
Ex: Yes, flight 312 has left Coimbatore. / No, the Kerala Express is running 45 minutes late.
Class 2: questions that require 'where is', 'who is' or 'under what condition' as an answer.
Ex: Ravi is in Visakhapatnam. / Kumar should not light a matchstick if there is an LPG leak.
Class 3: questions whose answer is a sequence of actions, where the order is important.
Ex: Add concentrated sulphuric acid slowly to water, and then add the diluted acid to the salt.
Class 4: questions whose answers require testing of some conditions.
Ex: If possible do a colour Doppler evaluation; if not, do echocardiography.
Ex 1:
1. Marcus was a man: man(Marcus)
2. Marcus was a Pompeian: Pompeian(Marcus)
3. Marcus was born in 40 A.D.: born(Marcus, 40)
4. All men are mortal: for all x: man(x) -> mortal(x)
5. All Pompeians died when the volcano erupted in 79 A.D.:
erupted(volcano, 79) & for all x: [Pompeian(x) -> died(x, 79)]
6. No mortal lives longer than 150 years:
for all x, t1, t2: mortal(x) & born(x, t1) & gt(t2 - t1, 150) -> dead(x, t2)
7. It is now 1991: now = 1991
8. Alive means not dead:
for all x, t: [alive(x, t) -> ~dead(x, t)] & [~dead(x, t) -> alive(x, t)]
9. If someone dies, then he is dead at all later times:
for all x, t1, t2: died(x, t1) & gt(t2, t1) -> dead(x, t2)
Question: is Marcus alive, i.e. alive(Marcus, now)?
Reasoning backward:
~alive(Marcus, now)
   (8, substitution)
dead(Marcus, now)
   (9, substitution)
died(Marcus, t1) & gt(now, t1)
   (5, substitution)
Pompeian(Marcus) & gt(now, 79)
   (2)
gt(now, 79)
   (7, substitute equals)
gt(1991, 79)
   (compute gt)
nil
Hence Marcus is dead.
Ex 2:
1. Steve likes easy courses: for all x: easycourse(x) -> likes(x, Steve)
2. Science courses are hard: hard(science-courses)
3. All the courses in the basketweaving department are easy:
for all x: basketweaving-dept(x) -> easycourse(x)
4. BK301 is a basketweaving course: basketweaving-dept(BK301)
From (1): ~easycourse(x) v likes(x, Steve) ----(5)
From (3): ~basketweaving-dept(x) v easycourse(x) ----(6)
Resolvent of (5) and (6): ~basketweaving-dept(x) v likes(x, Steve) ----(7)
Resolvent of (4) and (7) with x/BK301: likes(BK301, Steve).
So Steve would like BK301.
Refutation method: assume Steve does not like BK301, i.e. ~likes(BK301, Steve).
Resolving this with (5) gives ~easycourse(BK301); resolving with (6) gives
~basketweaving-dept(BK301); resolving with (4) gives Nil, a contradiction.
Hence Steve likes BK301.
(Fragment of another resolution proof: resolving with clause 6 yields ~food(x); resolving with clause 8 yields ~person(Bill); resolving this with the fact person(Bill) yields Nil.)
Note: we need the additional knowledge that Bill is a person. We can justify this knowledge because
Bill eats peanuts and is still alive, so Bill is a person.
Thus our assumption (the negated goal) is wrong, so John likes peanuts.
Several types of resolution exist, depending on the number and types of the parent clauses. Some of them are:
Binary resolution: two clauses having complementary literals are combined as a disjunct to produce a
single clause after deleting the complementary literals.
Ex: resolving ~P(x, a) v Q(x) and ~Q(b) v R(x) with the substitution b/x gives ~P(b, a) v R(b).
Unit resulting resolution: a number of clauses are resolved simultaneously to produce a unit
clause. All except one of the clauses are unit clauses, and that one clause has exactly one more literal
than the number of unit clauses.
Linear resolution: when each resolvent Ci is a parent to the clause Ci+1 (i = 1, 2, ..., n-1), the
process is called linear resolution.
Linear input resolution: if one of the parents in a linear resolution is always from the original set of
clauses (the Bi), we have linear input resolution.
Limitations of logic as a knowledge representation scheme:
1. Logic and theorem-proving techniques are monotonic in nature: the derived axioms hold good under
all circumstances. The real world is never monotonic, for the information obtained is seldom complete.
2. Logic does not provide facilities for handling uncertainty. Every piece of information it deals with has
to be either correct or incorrect, never partially so.
3. Codification of a problem in logic is a tough task and requires considerable effort on the part of the
user.
4. Even though various techniques exist for speeding up resolution, it takes a considerable amount of
time to prove statements in logic.
5. One major constraint in logic is that unless you are sure that a solution exists, the search may not
terminate: we keep adding clause after clause, but the solution remains elusive.
Forward versus Backward Reasoning:
The object of a search procedure is to discover a path through a problem space from an initial
configuration to a goal state. There are actually two directions in which such a search could
proceed.
Forward, from the start states
Backward, from the goal states.
The production system model of the search process provides an easy way of
viewing forward and backward reasoning as systematic processes.
Forward reasoning from initial states:
1. Root: a tree of move sequences that might be solutions is built by starting
with the initial configuration at the root of the tree.
2. Generating the next level: the next level of the tree is generated by finding
all the rules whose left sides match the current state (root node) and using
their right sides to create the new configurations as new nodes. Generate
each subsequent level by taking each node generated at the previous level and
continuing this reasoning forward until a configuration matches the goal state.
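The forward procedure can be sketched as a loop that keeps firing rules whose left sides match the current facts. The rules and fact names here are illustrative:

```python
# A minimal forward-chaining sketch: fire every rule whose LHS is satisfied
# by the current facts until the goal appears or nothing new can be derived.
rules = [({'A'}, 'B'),    # if A then B
         ({'B'}, 'C')]    # if B then C

def forward_chain(facts, goal):
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for lhs, rhs in rules:
            if lhs <= facts and rhs not in facts:
                facts.add(rhs)        # the right side creates a new state
                changed = True
    return goal in facts

print(forward_chain({'A'}, 'C'))  # True: A derives B, B derives C
```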
Backward reasoning from the goal states:
1. Root: a tree is built by starting with the goal configuration at the root.
2. Generating the next level: the next level is generated by finding all the rules
whose right sides match the root node; their left sides are used to create the
new nodes.
Forward-chaining rule systems:
In a forward-chaining system, the left sides of the rules are matched against the current state.
3. If they match, we apply the rule, and the new state generated is represented by the
right side. This process repeats until the goal state is reached.
4. Forward matching is generally more complex than backward matching.
Backward-chaining rule systems:
1. In these, we start the search from the goal state, treating it as an initial state,
and move toward the state we want.
2. The states computed in sequence form a chain of links.
3. If one state in the path is removed, the entire path collapses.
PROLOG is an example of a backward-chaining system.
We can also search both forward from the start state and backward from the goal state
simultaneously, until the two paths meet somewhere in between. This strategy is called
bi-directional search. It seems appealing if the number of nodes at each step grows
exponentially with the number of steps that have been taken. In fact, many
successful AI applications have been written using a combination of forward and
backward reasoning, and most AI programming environments provide explicit
support for such hybrid reasoning.
Monotonic versus non-monotonic reasoning systems:
   Monotonic                                   Non-monotonic
1. It is complete with respect to the          1. It is incomplete.
   domain of interest.
2. It is consistent.                           2. It is not consistent.
3. A monotonic reasoning system cannot work effectively in real-life environments, because:
the information available is always incomplete; as time goes by, situations change, and so do
the solutions; and default assumptions are made in order to reduce the search time and arrive
at solutions quickly.
Basic concepts of non-monotonic reasoning systems:
AI systems provide solutions for those problems whose facts and rules of inference are explicitly
stored in the database and knowledge base. But, as mentioned above, the data and knowledge are
incomplete in nature, and default assumptions are generally made.
Non-monotonic reasoning systems are more complex than monotonic reasoning systems. Monotonic
reasoning systems generally do not provide facilities for altering facts or deleting rules, since doing so
would have an adverse effect on the reasoning process.
One of the major systems that has been implemented using non-monotonic reasoning with
dependency-directed backtracking is the Truth Maintenance System (TMS) of Doyle.
Dependency-directed backtracking helps a great deal in non-monotonic reasoning systems, which
must evade contradictions. A contradiction occurs when the system finds that a newly discovered
state is inconsistent with the existing ones.
TMS: Truth maintenance systems, also known as belief revision or reason maintenance systems, are
companion components to inference systems. The main job of the TMS is to maintain the consistency of
the knowledge being used by the problem solver, not to perform any inference functions. The TMS also
gives the inference component the latitude to perform non-monotonic inferences. When new discoveries
are made, this more recent information can displace previous conclusions that are no longer valid. In
this way the set of beliefs available to the problem solver will continue to be current and consistent.
[Diagram: the inference engine Tells the TMS newly inferred knowledge and Asks it for the currently believed set; the knowledge managed by the TMS consists of premises, assumptions and datum nodes.]
Justifications:
There are two types of justification records maintained for nodes: support lists (SL) and conditional
proofs (CP). SLs are the most common type; they provide the supporting justifications for nodes.
The data structure used for an SL contains two lists of other node names on which the node depends:
an IN-list and an OUT-list.
(SL <IN-list> <OUT-list>)
CP justifications are used less frequently than SLs. They justify a node as a type of valid
hypothetical argument.
(CP <consequent> <in-hypotheses> <out-hypotheses>)
Example of a truth maintenance system:
A TMS maintains the consistency of the knowledge being used by the problem solver.
It maintains the currently active belief set, and it maintains records of the reasons (justifications)
for beliefs. These records are maintained in the form of a dependency network.
Example: the ABC murder story.
Initially Abbot is believed to be the primary suspect; the reasoning is non-monotonic. The three
assertions believed initially are:
Suspect Abbot (Abbot is the primary suspect)
Beneficiary Abbot (Abbot is a beneficiary of the victim)
Alibi Abbot (Abbot was at the Albany hotel at the time)
Representation in TMS:
A TMS dependency network offers a purely syntactic, domain-independent way to represent
belief and change it consistently.
[Diagram: Suspect Abbot [IN] (the supported belief), with a justification whose IN-list contains Beneficiary Abbot and whose OUT-list contains Alibi Abbot.]
Justification:
1. The assertion Suspect Abbot has an associated TMS justification; an arrow connects the
justification to the assertion it supports.
2. Assertions in a TMS dependency network are believed when they have a valid justification.
3. Each justification has two parts:
a. An IN-list (connected to the justification by +)
b. An OUT-list (connected to the justification by -)
4. If the assertion corresponding to a node should be believed, then in the TMS it is labeled IN.
5. If there is no reason to believe the assertion, then it is labeled OUT.
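The IN/OUT labeling rule can be sketched as a tiny relabeling loop. The dict-based network encoding is an assumption made for illustration:

```python
# A toy sketch of TMS labeling: a node is IN (True) when every node on its
# justification's IN-list is IN and every node on its OUT-list is OUT.

def label(justifications, premises):
    """justifications: node -> (in_list, out_list); premises are always IN."""
    status = {n: True for n in premises}
    for node in justifications:
        status.setdefault(node, False)
    changed = True
    while changed:
        changed = False
        for node, (in_list, out_list) in justifications.items():
            valid = (all(status.get(n, False) for n in in_list)
                     and not any(status.get(n, False) for n in out_list))
            if status[node] != valid:
                status[node] = valid
                changed = True
    return status

# Abbot example: Suspect Abbot is IN when Beneficiary Abbot is IN
# and Alibi Abbot is OUT.
net = {'Suspect Abbot': (['Beneficiary Abbot'], ['Alibi Abbot'])}
print(label(net, ['Beneficiary Abbot']))
```

Adding 'Alibi Abbot' as a premise and relabeling flips Suspect Abbot to OUT, which is exactly the non-monotonic behavior described below.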
Premise justification: premise justifications are always considered to be valid. Premises need no
further justification (their IN- and OUT-lists are empty).
Labeling task of a TMS: the labeling task of a TMS is to label each node so that three major
criteria of the dependency network are met:
1. Consistency
2. Well-foundedness
3. Resolving contradictions
Ex. 1:
Consistency criterion:
The following two cases show how consistency is maintained while changing Abbot's state.
Case (i): Abbot is a beneficiary. We have no further justification for this fact; we simply accept it
as a premise.
[Figure: a consistent labeling with the premise justification, Suspect Abbot [IN], Beneficiary Abbot [IN] (empty IN- and OUT-lists), Alibi Abbot [OUT].]
Case (ii): Abbot produces an alibi. Once Alibi Abbot is justified (Registered Abbot [IN], Far-away [IN], Register-forged [OUT]), it is labeled IN, and Suspect Abbot becomes OUT.
[Figure: the changed labeling, Suspect Abbot [OUT], with + Beneficiary Abbot and - Alibi Abbot [IN].]
Well-foundedness criterion:
(1) Well-foundedness is defined as the proper grounding of a chain of
justifications on a set of nodes that do not themselves depend on the nodes they support.
(2) For example, Cabot's justification for his alibi, that he was at a ski show, is hardly valid:
The only support for the alibi of attending the ski show is that Cabot is telling the truth. ----(1)
The only support for his telling the truth would be if we knew he was at the ski show. ----(2)
Statements (1) and (2) form a circular chain of IN-list links from a node back to itself.
In such cases the node should be labeled OUT for well-foundedness.
[Figure: Suspect Cabot [IN], + Beneficiary Cabot [IN]; Alibi Cabot is supported only by TellsTruth Cabot [OUT], which is in turn supported by the alibi itself.]
Resolving contradictions:
[Figure: a Contradiction node [OUT], justified by the nodes Suspect Abbot, Suspect Babbit, Suspect Cabot and Other-suspects.]
(3) Initially there is no valid justification for the other suspects, so the contradiction is labeled
OUT.
(4) Suppose Cabot was seen on TV at the ski slopes. This causes the Alibi Cabot node
to be labeled IN, which makes the Suspect Cabot node labeled OUT.
(5) The above, together with the other alibis, gives a valid justification for the contradiction,
which is hence labeled IN.
[Figure: Contradiction [IN], justified by Alibi Abbot, Alibi Babbit, Alibi Cabot [IN] and Other-suspects.]
(6) The job of the TMS is to determine how the contradiction can be made OUT again, i.e. how its
justification can be made invalid.
(7) Non-monotonic justifications can be invalidated by asserting some fact whose absence
is required by the justification.
(8) That is, we should install a justification that is valid only as long as it needs to be.
(9) A TMS has algorithms to create such justifications, which are called abductive
justifications.
Default reasoning:
Default reasoning is a type of non-monotonic reasoning in which conclusions are
believed until a better reason is found to believe something else.
Two different approaches to dealing with non-monotonic reasoning are:
(1) Non-monotonic logic (2) Default logic
(1) Non-monotonic logic: non-monotonic logic is one in which the language of FOPL is augmented
with a modal operator M, which can be read as 'is consistent'.
Non-monotonic logic defines the set of theorems that can be derived from a set of WFFs A to
be the intersection of the sets of theorems that result from the various ways in which the WFFs of A
might be combined.
Ex: from A and A & M B -> B, we conclude B, provided B is consistent with everything else we believe.
(2) Default logic:
Another form of uncertainty occurs as a result of incomplete knowledge. One way humans
deal with this problem is by making plausible default assumptions: that is, we make assumptions
which typically hold but may have to be retracted if new information is obtained to the contrary.
Default reasoning is another form of non-monotonic reasoning. It eliminates the need to explicitly
store all the facts regarding a situation. Reiter (1980) developed a theory of default reasoning within the
context of traditional logics. A default is expressed as
A(x) : M b1(x), ..., M bk(x)
---------------------------
          C(x)
where A(x) is a precondition WFF for the conclusion WFF C(x), M is a consistency operator, and the
bi(x) are conditions, each of which must be separately consistent with the KB for the conclusion C(x) to
hold.
Default theories consist of a set of axioms and a set of default inference rule schemata. The
theorems derivable from a default system are those that follow from first-order logic together with the
assumptions made by the default rules. Suppose a KB contains only the statements
Bird(tweety)
Bird(x) : M fly(x) / fly(x)
A default proof of fly(tweety) is possible. But if the KB also contains the clauses
Ostrich(tweety)
Ostrich(x) -> ~fly(x)
then fly(tweety) would be blocked, since the default is now inconsistent. Default rules are especially
useful in hierarchical KBs: because default rules are transitive, property inheritance becomes possible.
Transitivity can also be a problem in KBs with many default rules, since rule interactions can make
representations very complex.
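The firing condition of the Bird/fly default can be sketched directly. The set-of-tuples KB encoding, with ('not', ...) marking negated facts, is an assumption made for illustration:

```python
# A sketch of applying the default Bird(x) : M fly(x) / fly(x).
# The default fires only while fly(x) is consistent with the KB.

def consistent(fact, kb):
    return (('not',) + fact) not in kb

def apply_bird_default(x, kb):
    kb = set(kb)
    if ('bird', x) in kb and consistent(('fly', x), kb):
        kb.add(('fly', x))
    return kb

kb1 = {('bird', 'tweety')}
print(apply_bird_default('tweety', kb1))          # fly(tweety) is added

kb2 = {('bird', 'tweety'), ('not', 'fly', 'tweety')}  # e.g. from Ostrich(tweety)
print(apply_bird_default('tweety', kb2))          # the default is blocked
```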
Two kinds of non-monotonic reasoning that can be defined in these logics are:
1. Abduction
2. Inheritance
Minimalist reasoning:
Minimalist reasoning follows the idea that there are many fewer true statements than false
ones. If something is true and relevant, it makes sense to assume that it has been entered into our
knowledge base. Therefore, we assume that the only true statements are those that necessarily must be true
in order to maintain the consistency of the knowledge base.
Two kinds of minimalist reasoning are:
(1) Closed world assumption (CWA)
(2) Circumscription
Closed world assumption: another form of assumption made with regard to incomplete
knowledge is more global in nature than single defaults. This type of assumption is useful in
applications where most of the facts are known, and where it is therefore reasonable to assume that if a
proposition cannot be proven, it is false. This is known as the closed world assumption with negation as
failure. It means that in a KB, if the ground literal P(a) is not provable, then ~P(a) is assumed to hold true.
The CWA is another form of non-monotonic reasoning. The CWA is essentially the formalism under which
Prolog operates, and Prolog has been shown to be effective in numerous applications.
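Negation as failure under the CWA can be sketched in a few lines. The fact tuples are illustrative, and "provable" is reduced here to simple membership in a finite fact base:

```python
# CWA sketch: a ground fact not provable from the KB is taken to be false.
kb = {('owns', 'joe', 'ford'), ('student', 'joe')}

def holds(fact):
    return fact in kb            # "provable" = present in the fact base

def cwa_not(fact):
    return not holds(fact)       # ~P assumed whenever P is not provable

print(cwa_not(('owns', 'jill', 'ford')))  # True: assumed false under the CWA
print(cwa_not(('owns', 'joe', 'ford')))   # False: the fact is provable
```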
Disadvantages:
1. A knowledge base augmented with the CWA may become inconsistent.
2. CWA assumptions are not always true in the world.
3. The CWA forces completion of a knowledge base by adding the negation ~P whenever it is consistent to do so. But the assignment of a property to some predicate P and its complement to the negation of P may be arbitrary.
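As a minimal sketch of the CWA idea in Python (the knowledge base and the predicate names here are invented for illustration), a query that cannot be proven from the stored facts is simply assumed false:

```python
# Sketch of the closed-world assumption: a query that cannot be proven
# from the knowledge base is taken to be false (negation as failure).
# The facts below are illustrative only.

facts = {("student", "joe"), ("student", "jill"), ("owns", "jill", "chevy")}

def holds(query):
    """True if the query is a known fact in the KB."""
    return query in facts

def cwa_negation(query):
    """Under the CWA, ~p is assumed to hold whenever p is not provable."""
    return not holds(query)

print(holds(("student", "joe")))               # True: explicitly in the KB
print(cwa_negation(("owns", "joe", "chevy")))  # True: not provable, so assumed false
```

This is exactly the behaviour of Prolog's negation as failure, which is why the text calls the CWA the formalism under which Prolog operates.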
Predicate completion: Limiting default assumptions to only portions of a KB can be achieved through the use of completion or circumscription formulas. Unlike the CWA, these formulas apply only to specified predicates and not globally to the whole KB.
Completion formulas are axioms which are added to a KB to restrict the applicability of specific predicates. If it is known that only certain objects should satisfy given predicates, formulas which make this knowledge explicit are added to the KB. This technique also requires the addition of the unique-names assumption, i.e., formulas which state that distinctly named entities in the KB are unique.
As an example of predicate completion, suppose we have the following KB:
owns(joe, ford), student(joe), owns(jill, chevy), student(jill), owns(sam, bike), programmer(sam), student(mary).
If it is known that Joe is the only person who owns a Ford, this fact can be made explicit with the following completion formula:
for all x (owns(x, ford) -> equal(x, joe)) -----(1)
In addition, we add the inequality formula
~equal(a, joe) -----(2)
which is meant to hold for every constant a different from joe. Likewise, if it is known that Mary also has a Ford, and that only Mary and Joe have Fords, the completion and corresponding inequality formulas would be
for all x (owns(x, ford) -> equal(x, joe) or equal(x, mary))
~equal(a, joe)
~equal(a, mary)
Now suppose we add the further statement owns(jill, ford). From the clauses
1. ~owns(x, ford) or equal(x, joe)    [clause form of (1)]
2. ~equal(a, joe)
3. owns(jill, ford)
we get equal(jill, joe) from 1 and 3 -----(4), and from 2 and 4 the empty clause [] -> nil, showing that the augmented KB is inconsistent.
Modal logic:
FUZZY SETS: Suppose you have been asked by your friends to arrange for a small party. What does small mean? Is it possible for us to identify exactly the characteristics that tell us that the party is small? If you are affluent, small has one meaning; if you belong to a middle- or low-income group, the word small has a different meaning.
Hence, one can say that sets for which the boundary is ill defined are called fuzzy sets.
Operations on fuzzy sets are somewhat similar to the operations of standard set theory, and they are also intuitively acceptable.
Equality: A = B if and only if uA(x) = uB(x) for all x in U.
Containment: A is a subset of B if and only if uA(x) <= uB(x) for all x in U.
Intersection: u(A and B)(x) = min(uA(x), uB(x)).
Union: u(A or B)(x) = max(uA(x), uB(x)).
Complement: u(~A)(x) = 1 - uA(x).
Apart from these basic operations, fuzzy set theory provides additional operations called hedges for the purpose of handling fuzziness in an effective way.
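The basic operations above can be sketched directly in Python; a fuzzy set is just a mapping from each element of the universe to its membership grade (the sets A and B below are invented examples):

```python
# Minimal sketch of the fuzzy-set operations: each fuzzy set is a dict
# mapping elements of the universe U to membership grades in [0, 1].
# The membership values of A and B are illustrative only.

U = ["a", "b", "c"]
A = {"a": 1.0, "b": 0.6, "c": 0.2}
B = {"a": 0.5, "b": 0.8, "c": 0.2}

def f_union(A, B):         # u(A or B)(x)  = max(uA(x), uB(x))
    return {x: max(A[x], B[x]) for x in U}

def f_intersection(A, B):  # u(A and B)(x) = min(uA(x), uB(x))
    return {x: min(A[x], B[x]) for x in U}

def f_complement(A):       # u(~A)(x) = 1 - uA(x)
    return {x: 1 - A[x] for x in U}

def f_subset(A, B):        # A is a subset of B iff uA(x) <= uB(x) for all x
    return all(A[x] <= B[x] for x in U)

print(f_union(A, B))          # elementwise max
print(f_intersection(A, B))   # elementwise min
```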
Reasoning with fuzzy logic: The characteristic function for fuzzy sets provides a direct linkage to fuzzy logic. The degree of membership of x in ~A corresponds to the truth value of the statement "x is a member of ~A", where ~A defines some propositional or predicate class.
A generalized modus ponens for fuzzy sets has been proposed by a number of researchers. It differs from the standard modus ponens in that statements which are characterized by fuzzy sets are permitted, and the conclusion need not be identical to the consequent of the implication. For example:
Premise: This banana is very yellow.
Implication: If a banana is yellow then the banana is ripe.
Conclusion: This banana is very ripe.
Now let X and Y be two universes, and let A and R be fuzzy sets in X and X*Y respectively, with membership functions uA(x) and uR(x, y). The compositional rule of inference then gives the fuzzy set C in Y as the solution of the relational equation
uC(y) = (A o R)(y) = max over x of min(uA(x), uR(x, y))
where the symbol o signifies the composition of A and R.
As an example, let X = Y = {1, 2, 3, 4} and
A = little = {(1/1), (2/.6), (3/.2), (4/0)}
and let R = approximately equal, a fuzzy relation defined by the matrix
        y=1   y=2   y=3   y=4
x=1      1    .5     0     0
x=2     .5     1    .5     0
x=3      0    .5     1    .5
x=4      0     0    .5     1
Then applying the max-min composition rule uC(y) = max over x of min(uA(x), uR(x, y)):
= max { [min(1,1), min(.6,.5), min(.2,0), min(0,0)],
        [min(1,.5), min(.6,1), min(.2,.5), min(0,0)],
        [min(1,0), min(.6,.5), min(.2,1), min(0,.5)],
        [min(1,0), min(.6,0), min(.2,.5), min(0,1)] }
= max { [1, .5, 0, 0], [.5, .6, .2, 0], [0, .5, .2, 0], [0, 0, .2, 0] }
= {1, .6, .5, .2}
Therefore the solution is C(y) = {(1/1), (2/.6), (3/.5), (4/.2)}.
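The worked example can be checked with a short Python sketch of max-min composition, using the same A and R:

```python
# Max-min composition for the worked example: A = "little" over
# X = {1,2,3,4} and R = "approximately equal" given as a 4x4 matrix.

A = [1.0, 0.6, 0.2, 0.0]                 # membership grades of 1, 2, 3, 4
R = [[1.0, 0.5, 0.0, 0.0],
     [0.5, 1.0, 0.5, 0.0],
     [0.0, 0.5, 1.0, 0.5],
     [0.0, 0.0, 0.5, 1.0]]

def max_min(A, R):
    """uC(y) = max over x of min(uA(x), uR(x, y))."""
    n = len(A)
    return [max(min(A[x], R[x][y]) for x in range(n)) for y in range(n)]

print(max_min(A, R))   # [1.0, 0.6, 0.5, 0.2]
```

The output reproduces the membership grades of C derived by hand above.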
To handle uncertain data, probability is the oldest technique available. Probabilistic reasoning is sometimes used when outcomes are unpredictable.
Bayes' theorem: Bayes' theorem is used for the computation of probabilities; it provides a method for reasoning about partial beliefs. Here every event that is happening or likely to happen is quantified with a probability, and the axioms of probability theory dictate how these numerical values are to be calculated. This method was introduced by the clergyman Thomas Bayes in the 18th century. This form of reasoning depends on the use of conditional probabilities of specified events when it is known that other events have occurred. For two events H and E with probability P(E) > 0, the conditional probability of event H, given that event E has occurred, is defined as
P(H/E) = P(H & E) / P(E) --------(1)
Read this expression as the probability of hypothesis H given that we have observed evidence E.
The conditional probability of event E given that event H occurred can likewise be written as
P(E/H) = P(H & E) / P(H) ----------(2)
From Eq. 1 and Eq. 2 we get
P(H/E) = P(E/H) P(H) / P(E) ----------------(3)
This equation expresses the notion that the probability of event H occurring, when it is known that event E occurred, is the same as the probability that E occurs when it is known that H occurred, multiplied by the ratio of the probabilities of the two events H and E occurring.
The probability of an arbitrary event B can always be expanded as
P(B) = P(B & A) + P(B & ~A) = P(B/A) P(A) + P(B/~A) P(~A)
Using this result, Eq. 3 can be written as
P(H/E) = P(E/H) P(H) / (P(E/H) P(H) + P(E/~H) P(~H))
The above equation can be generalized to an arbitrary number of hypotheses Hi, i = 1, 2, ..., k. Suppose the Hi partition the universe, i.e., the Hi are mutually exclusive and exhaustive. Then for any evidence E,
P(Hi/E) = P(E/Hi) P(Hi) / (sum over j = 1 to k of P(E/Hj) P(Hj))
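A quick sketch of how equation (3) with the expanded denominator behaves (the prior and conditional probabilities below are invented for illustration):

```python
# Sketch of Bayes' rule with the denominator expanded over the
# partition {H, ~H}. The probability values are invented.

def bayes(p_h, p_e_given_h, p_e_given_not_h):
    """P(H/E) = P(E/H)P(H) / (P(E/H)P(H) + P(E/~H)P(~H))."""
    num = p_e_given_h * p_h
    den = num + p_e_given_not_h * (1 - p_h)
    return num / den

# Example: prior P(H) = 0.1, P(E/H) = 0.9, P(E/~H) = 0.2
print(bayes(0.1, 0.9, 0.2))   # 0.09 / (0.09 + 0.18) = 1/3
```

Even weak evidence (here E is five times more likely under H than under ~H, roughly) can raise a small prior substantially.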
As an example, for the five-variable network below the joint probability factors as
P(x1, x2, x3, x4, x5) = P(x5/x2,x3) P(x4/x1,x2) P(x3/x1) P(x2/x1) P(x1)
[Diagram: a Bayesian network over nodes X1 through X5, with X1 the root; the arcs follow the conditioning in the factorization above.]
Ex 2: A Bayes network is a directed acyclic graph whose nodes are labeled by random variables. Bayes networks are sometimes called causal networks because the arcs connecting the nodes can be thought of as representing causal relationships.
To construct a Bayesian network for a given set of variables, we draw arcs from cause variables to their immediate effects, preserving causality and relying on the modularity of the world we are trying to model. Consider an example:
S: Sprinkler was on last night
W: Grass is wet
R: It rained last night
We can write MYCIN-style rules that describe predictive relationships among these three events.
IF the sprinkler was on last night THEN there is evidence that the grass will be wet this morning.
[Diagram: Sprinkler -> Wet <- Rain]
Taken alone, this rule may accurately describe the world. But consider a second rule:
IF the grass is wet this morning THEN there is evidence that it rained last night.
Taken alone, this rule makes sense when rain is the most common source of water on the grass.
[Diagram: Rainy Season -> Rain; Sprinkler -> Wet <- Rain]
There are two different ways that propositions can influence the likelihood of each other. The
first is that causes influence the likelihood of their symptoms; the second is that observing a symptom
affects the likelihood of all of its possible causes. The basic idea behind the Bayesian network
structure is to make a clear distinction between these two kinds of influence.
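Both directions of influence can be made concrete with a small Python sketch of the sprinkler/rain/wet network. All of the probability values below are invented for illustration; the structure is P(S) P(R) P(W/S,R), with sprinkler and rain as causes and wet grass as their effect:

```python
# Sketch of the sprinkler/rain/wet network. All probabilities are
# invented. The joint distribution factors as P(S) P(R) P(W | S, R).
from itertools import product

P_S = {True: 0.2, False: 0.8}            # P(sprinkler was on)
P_R = {True: 0.3, False: 0.7}            # P(it rained)
P_W = {(True, True): 0.99, (True, False): 0.9,
       (False, True): 0.9, (False, False): 0.01}  # P(wet | S, R)

def joint(s, r, w):
    pw = P_W[(s, r)] if w else 1 - P_W[(s, r)]
    return P_S[s] * P_R[r] * pw

# P(R | W): observing the symptom (wet grass) raises the likelihood
# of its possible cause (rain) above the prior P(R) = 0.3.
num = sum(joint(s, True, True) for s in (True, False))
den = sum(joint(s, r, True) for s, r in product((True, False), repeat=2))
print(num / den)   # greater than the prior 0.3
```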
DEMPSTER-SHAFER THEORY: The Bayesian approach depends on the use of known prior and likelihood probabilities to compute conditional probabilities. The Dempster-Shafer approach, on the other hand, is a generalization of classical probability theory which permits the assignment of probability masses (beliefs) to all subsets of the universe, and not just to its basic elements.
The generalized theory was proposed by Arthur Dempster (1968) and extended by his student Glenn Shafer (1976). It has come to be known as the Dempster-Shafer theory of evidence. The theory is based on the notion that separate probability masses may be assigned to all subsets of a universe of discourse, rather than just to the indivisible single members as required in traditional probability theory.
In the Dempster-Shafer theory, we assume a universe of discourse U corresponding to n propositions, exactly one of which is true. The propositions are assumed to be exhaustive and mutually exclusive. Let 2^U denote the set of all subsets of U, including the empty set and U itself. A basic probability assignment is a function
m: 2^U -> [0,1] with sum over A subset of U of m(A) = 1.
The function m defines a probability distribution over 2^U; m(A) represents the measure of belief committed exactly to A. A belief function Bel corresponding to a specific m is defined for a set A as the sum of the beliefs committed by m to every subset of A. Bel(A) is a measure of the total support or belief committed to the set A, and sets a minimum value for its likelihood:
Bel(A) = sum over B subset of A of m(B).
In the Dempster-Shafer theory, a belief interval can also be defined for a subset A. It is represented as the subinterval [Bel(A), Pl(A)] of [0,1], where Bel(A) is also called the support of A, and Pl(A) = 1 - Bel(~A) is the plausibility of A.
When evidence is available from two or more independent knowledge sources Bel1 and Bel2, one would like to pool the evidence to reduce the uncertainty. For this, Dempster has provided a combining function, denoted Bel1 (+) Bel2. The total probability mass committed to a set C is
m1 (+) m2 (C) = sum over Ai intersect Bj = C of m1(Ai) m2(Bj).
The sum in the above equation must be normalized to account for the fact that some intersections Ai intersect Bj = empty will have positive probability mass, which must be discarded. The final form of Dempster's rule of combination is then
m1 (+) m2 (C) = (sum over Ai intersect Bj = C of m1(Ai) m2(Bj)) / (sum over Ai intersect Bj not empty of m1(Ai) m2(Bj))
where the summations are taken over all i and j.
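Dempster's rule of combination can be sketched in Python; the frame of discernment and the mass assignments below are invented for illustration, and subsets are represented as frozensets:

```python
# Sketch of Dempster's rule of combination. Masses on empty
# intersections are discarded and the rest renormalized; this assumes
# the two sources are not totally conflicting (conflict < 1).

def combine(m1, m2):
    raw = {}
    for a, ma in m1.items():
        for b, mb in m2.items():
            c = a & b                          # intersection Ai & Bj
            raw[c] = raw.get(c, 0.0) + ma * mb
    conflict = raw.pop(frozenset(), 0.0)       # mass committed to the empty set
    return {c: v / (1 - conflict) for c, v in raw.items()}

# Invented example frame and mass functions.
U = frozenset({"flu", "cold", "measles"})
m1 = {frozenset({"flu", "cold"}): 0.6, U: 0.4}
m2 = {frozenset({"flu"}): 0.7, U: 0.3}
m12 = combine(m1, m2)
print(sum(m12.values()))   # masses still sum to 1 after normalization
```

Note how combining the two sources concentrates mass on the smaller subset {flu}, which both sources are compatible with.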
AD HOC METHODS: The so-called ad hoc methods of dealing with uncertainty are methods which have no formal theoretical basis, although they are usually patterned after probabilistic concepts. These methods typically have an intuitive, if not a theoretical, appeal. They are chosen over formal methods as a pragmatic solution to a particular problem, when the formal methods impose difficult or impossible conditions.
Different ad hoc procedures have been employed successfully in a number of AI systems, particularly in expert systems. Ad hoc methods have been used in a larger number of knowledge-based systems than have the more formal methods. This is largely because of the difficulties encountered in acquiring a large number of reliable probabilities for the given domain, and because of the complexities of the ensuing calculations.
Heuristic methods: Heuristic methods are based on the use of procedures, rules and other forms of encoded knowledge to achieve specified goals. Using heuristics, one of several alternative conclusions may be chosen through the strength of positive versus negative evidence, presented in the form of justifications and endorsements. The endorsement weights employed in such systems need not be numeric, but some form of ordering or preference-selection scheme must be used.
Reasoning using certainty factors: Probability-based reasoning adopts Bayes' theorem for handling uncertainty. Unfortunately, to apply Bayes' theorem one needs to estimate a priori and conditional probabilities, which are difficult to calculate in many domains. Hence, to circumvent this problem, the developers of the MYCIN system adopted certainty factors.
A certainty factor (CF) is a numerical estimate of the belief or disbelief in a conclusion in the presence of a set of evidence. Various methods of using CFs have been adopted; typical of them are the following.
1. Use a scale from 0 to 1, where 0 represents total disbelief and 1 stands for total belief. Values between 0 and 1 represent varying degrees of belief and disbelief.
2. MYCIN's CF representation is a scale from -1 to +1, where 0 stands for unknown. In expert systems, every production rule has a certainty factor associated with it. The values of the CF are determined by the domain expert who creates the knowledge base.
A certainty factor CF[h, e] is defined in terms of two components:
(i) MB[h, e]: a measure (between 0 and 1) of belief in hypothesis h given the evidence e. MB measures the extent to which the evidence supports the hypothesis. It is zero if the evidence fails to support the hypothesis.
(ii) MD[h, e]: a measure (between 0 and 1) of disbelief in hypothesis h given the evidence e. MD measures the extent to which the evidence supports the negation of the hypothesis. It is zero if the evidence supports the hypothesis.
We can then define the certainty factor as
CF[h, e] = MB[h, e] - MD[h, e].
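The definition is simple enough to sketch directly; the MB and MD values passed in below are invented for illustration:

```python
# Sketch of a MYCIN-style certainty factor: CF[h, e] = MB[h, e] - MD[h, e],
# yielding a value on MYCIN's -1 to +1 scale. Inputs are invented.

def certainty_factor(mb, md):
    """mb, md in [0, 1]; returns a CF in [-1, 1]."""
    return mb - md

print(certainty_factor(0.75, 0.25))   # 0.5: evidence mostly supports h
print(certainty_factor(0.0, 1.0))     # -1.0: total disbelief in h
```

Because at most one of MB and MD is nonzero for a single piece of evidence, a positive CF reads as net belief and a negative CF as net disbelief.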
Matching
Matching is a basic function that is required in almost all A.I programs. It is an essential part of
more complex operations such as search and control. In many programs it is known that matching
consumes a large fraction of the processing time.
Matching is the process of comparing two or more structures to discover their likenesses or differences. The structures may represent a wide range of objects, including physical entities, words or phrases in some language, complete classes of things, general concepts, relations between complex entities, and the like. The representations will be given in one or more formalisms, such as FOPL or networks, and matching will involve comparing the component parts of such structures.
Matching is used in a variety of programs for different reasons. It may serve to control the
sequence of operations, to identify or classify objects, to determine the best of a number of different
alternatives or to retrieve items from a database. It is an essential operation in such diverse programs
as speech recognition, natural language understanding, vision, learning, automated reasoning,
planning, automatic programming and expert system as well as many others.
In its simplest form, matching is just the process of comparing two structures or patterns for equality. The match fails if the patterns differ in any aspect.
Indexing: In the case of indexing, we use the current state as an index into the rules in order to select the matching ones. Consider chess: here we assign a number to each board position, then use a hashing function to treat the number as an index into the rules.
Complex & approximate matching: An approximate match is one which is used when the preconditions only approximately match the current situation.
Consider an example of a dialogue between ELIZA and a user. ELIZA will try to match the left side of a rule against the user's last sentence and use the corresponding right side to generate a response. Let us consider the following ELIZA rules:
(X me Y) -> (X you Y)
(I remember X) -> (Why do you remember X just now?)
(My {family-member} is Y) -> (Who else in your family is Y?)
Suppose the user says, "I remember Mary". ELIZA will try to match this response against the left-hand sides of the given rules. It finds that it matches the second rule, so it takes the corresponding right-hand side and asks "Why do you remember Mary just now?"
This is how the conversation proceeds, taking into consideration the approximate matching.
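The ELIZA-style matching above can be sketched with a toy pattern matcher; the rule patterns and replies below are illustrative paraphrases of the rules in the text, using one regular-expression group per variable:

```python
# Toy sketch of ELIZA-style rule matching: the first rule whose
# left-side pattern matches the input produces the response.
# Patterns and reply templates are illustrative.
import re

rules = [
    (r"^I remember (.+)$", "Why do you remember {0} just now?"),
    (r"^My (mother|father|sister|brother) is (.+)$",
     "Who else in your family is {1}?"),
]

def respond(sentence):
    for pattern, reply in rules:
        m = re.match(pattern, sentence)
        if m:
            return reply.format(*m.groups())
    return "Tell me more."   # fallback when no rule matches

print(respond("I remember Mary"))   # Why do you remember Mary just now?
```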
Conflict resolution: Conflict resolution is a strategy in which we incorporate the decision making into the matching process. We have three basic approaches:
(a) Preference based on rules
(b) Preference based on objects
(c) Preference based on states
(a) Preference based on rules: Here we consider the rules in the order they are given, or we give some priority to special-case rules. There are two ways in which a matcher can decide that one rule is more general than another; this allows us to decrease the search size by filtering out the more general rules. The two ways are:
1. If one rule contains all the preconditions that another rule has, plus some additional ones, then the second rule is more general than the first.
2. If one rule contains preconditions with variables and the other contains the same preconditions with constants, then the first rule is more general.
(b) Preference based on objects: Here we use keywords to match into the rules. Consider the example of ELIZA: it takes specific keywords from the user's response and tries to match them against the given rules. As before, it uses the "remember" keyword to match the left-hand side of a rule.
(c) Preference based on states: In this case we fire all the rules that match, each leading to some state. Using a heuristic function, we can then decide which resulting state is the best.
Partial matching: For many AI applications, complete matching between two or more structures is inappropriate. For example, input representations of speech waveforms or visual scenes may have been corrupted by noise or other unwanted distortions. In such cases, we do not want to reject the input out of hand; our systems should be more tolerant of such commonly occurring problems. Instead, we want our systems to be able to find an acceptable or best match between the input and some reference description.
The RETE Matching algorithm:
A typical system will contain a knowledge base, which contains structures representing the domain expert's knowledge in the form of rules or productions; a working memory, which holds parameters for the current problem; and an inference engine with a rule interpreter, which determines which rules are applicable to the current problem.
The basic inference cycle of a production system is match, select, and execute, as indicated in the figure. These operations are performed as follows:
[Figure: the user interface passes input (I) and output (O) to the inference engine, whose match-select-execute cycle operates on the working memory and the knowledge base.]
Match: During the match portion of the cycle, the conditions in the left-hand sides (LHS) of the rules in the knowledge base are matched against the contents of working memory to determine which rules have their LHS conditions satisfied with consistent bindings to working-memory terms. Rules which are found to be applicable (that match) are put in a conflict set.
Select: From the conflict set, one of the rules is selected to execute. The selection strategy may depend on recency of usage, specificity of the rule, or other criteria.
Execute: The rule selected from the conflict set is executed by carrying out the action or conclusion part of the rule, the right-hand side (RHS) of the rule. This may involve an I/O operation; adding, removing or changing clauses in working memory; or simply causing a halt.
The above cycle is repeated until no rules are put in the conflict set or until a stopping condition is reached.
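The cycle above can be sketched as a toy production system; the rules, working-memory contents, and trivial first-match selection strategy below are invented for illustration:

```python
# Minimal sketch of the match-select-execute cycle of a production
# system. Rules and working-memory facts are invented examples.

rules = [
    {"name": "r1", "lhs": {"sprinkler_on"}, "rhs": "grass_wet"},
    {"name": "r2", "lhs": {"grass_wet"},    "rhs": "shoes_wet"},
]
working_memory = {"sprinkler_on"}

while True:
    # Match: rules whose LHS conditions are all satisfied by working
    # memory and whose conclusion has not already been added.
    conflict_set = [r for r in rules
                    if r["lhs"] <= working_memory
                    and r["rhs"] not in working_memory]
    if not conflict_set:
        break                        # stopping condition: nothing to fire
    rule = conflict_set[0]           # Select: trivial first-match strategy
    working_memory.add(rule["rhs"])  # Execute: add the RHS conclusion

print(sorted(working_memory))   # ['grass_wet', 'shoes_wet', 'sprinkler_on']
```

A real system would use a richer selection strategy (recency, specificity) and pattern variables in the LHS, which is exactly where the RETE algorithm's savings come from.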
A typical knowledge base will contain hundreds or even thousands of rules, and each rule will contain several (perhaps as many as ten or more) conditions. Working memories typically contain hundreds of clauses as well. Consequently, exhaustive matching of all rules and their LHS conditions against working-memory clauses may require tens of thousands of comparisons. This accounts for the claim made in the introductory paragraph that as much as 90% of the computing time for such systems can be related to matching operations.
To eliminate the need to perform thousands of matches per cycle, an efficient match
algorithm called RETE has been developed (Forgy, 1982). It was initially developed as part of
the OPS family of programming languages (Brownston, et al., 1985). This algorithm uses
several novel features, including methods to avoid repetitive matching on successive cycles.
The main timesaving features of RETE are as follows.
Advantages:
a. In most expert systems, the contents of working memory change very little from
cycle to cycle.
b. Many rules in a knowledge base will have the same conditions occurring in their
LHS.
c. The temporal nature of data.
d. Structural similarity in rules.
e. Persistence of variable binding consistency.
PROLOG
The name PROLOG was taken from the phrase "PROgramming in LOGic". The language was originally developed in 1972 by Alain Colmerauer and P. Roussel at the University of Marseilles in France. Prolog is unique in its ability to infer facts and conclusions from other facts.
Prolog has been successful as an AI programming language for the following reasons:
1. The syntax and semantics of Prolog are very close to formal logic, and by this time it must be clear to you that most AI programs reason using logic.
2. The Prolog language has a built-in inference engine and an automatic backtracking facility. This helps in the efficient implementation of various search strategies.
3. The language offers high productivity and ease of program maintenance.
4. The Prolog language is based on the universal formalism of Horn clauses.
5. Because of its inherent AND-parallelism, the Prolog language can be implemented with ease on parallel machines.
6. The clauses of Prolog have both a procedural and a declarative meaning, which makes understanding the language easier.
7. In Prolog, each clause can be executed separately as though it were a separate program. Hence modular programming and testing are possible.
A Horn clause consists of a set of statements joined by logical ANDs.
[Figure: programming languages divide into procedural languages (Basic, Cobol, Pascal) and A.I. languages (Prolog, Lisp).]
The main part of a Prolog program consists of a collection of knowledge about a specific subject. This collection is called a database, and it is expressed as facts and rules.
Turbo Prolog permits you to describe facts as symbolic relationships.
Ex: The right speaker is dead.
This same fact can be expressed in Turbo Prolog as
is(right_speaker, dead).
This factual expression in Prolog is called a clause. Notice the period at the end of the clause.
In this example, right_speaker and dead are objects, and the word is is the relation: a relation is a name that defines the way in which a collection of objects belong together.
The entire expression before the period in each case is called a predicate. A predicate is a function with a value of true or false; predicates express a property or a relationship. The word before the parentheses is the name of the relation, and the elements within the parentheses are the arguments of the predicate, which may be objects or variables.
[Diagram: clauses are either facts or rules; a clause is built from predicates and ends with a period; a predicate consists of a relation and its arguments; arguments are objects or variables.]
A rule is an expression that indicates that the truth of a particular fact depends upon one or more other facts. Consider this example:
IF there is body stiffness or pain in the joints
AND there is sensitivity to infections
THEN there is probably a vitamin C deficiency.
This rule could be expressed as the following Turbo Prolog clause:
hypothesis(vitc_deficiency) if
    symptom(arthritis) and symptom(infection_sensitivity).
Notice the general form of the Prolog rule: the conclusion is stated first and is followed by the word if. Every rule has a conclusion and an antecedent. Rules are such an important part of Prolog programming that a shorthand has been developed for expressing them. In this abbreviated form the previous rule becomes:
hypothesis(vitc_deficiency) :-
    symptom(arthritis),
    symptom(infection_sensitivity).
The :- operator is read as "if"; a comma expresses an AND relationship and a semicolon expresses an OR relationship.
The process of using rules and facts to solve a problem is called formal reasoning.
A Turbo Prolog program consists of two or more sections. The main body of the program, the clauses section, contains the clauses, consisting of facts and rules. The relations used in the clauses of the clauses section are defined in the predicates section; each relation in each clause must have a corresponding predicate definition there. A predicate definition in the predicates section does not end with a period. The domains section is also a part of most Turbo Prolog programs: it defines the type of each object. Through the domains section, Turbo Prolog controls the typing of objects.
Six basic object types are available to the user:
character - a single character (enclosed between single quotation marks)
integer - an integer from -32768 to 32767
real - a floating-point number (1e-307 to 1e+308)
string - a character sequence (enclosed between double quotation marks)
symbol - a character sequence of letters, numbers and underscores, with the first character a lowercase letter
file - a symbolic file name
In a Turbo Prolog program the sections should always appear in the following order:
1. domains
2. predicates
3. clauses
Variables:
A variable name must begin with a capital letter and may be from 1 to 250 characters long. Except for the first character in the name, you may use uppercase or lowercase letters, digits, or the underline character. Names should be meaningful.
Bound and free variables: If a variable has a value at a particular time, it is said to be bound or instantiated. If a variable does not have a value at a particular time, it is said to be free or uninstantiated.
Anonymous variables: The anonymous variable (_) stands for any value; in a goal, it is satisfied if at least one value corresponds to it.
Backtracking: One of the most important principles of Prolog execution is backtracking. The solution of a compound goal proceeds from left to right. If any condition in the chain fails, Prolog backtracks to the previous condition and tries to prove it again, to see whether the failed condition will succeed with a new binding. Prolog moves relentlessly forward and backward through the conditions, trying every available binding in an attempt to get the goal to succeed in as many ways as possible.
The readln predicate: The readln predicate permits a user to read any string or symbol into a variable.
Ex: readln(Reply) with input yes gives Reply = yes.
The readchar predicate: The readchar predicate permits a user to read any single character into a variable.
Ex: readchar(Reply) with input y gives Reply = 'y'.
The readint predicate can be used to read an integer value into a variable.
Ex: readint(Age)
The readreal predicate can be used to read a floating-point number into a variable.
Ex: readreal(Price) with input 12.50 gives Price = 12.5.
Inkey: The inkey predicate reads a single character from the input; its form is inkey(Char). If there is no input the predicate fails; otherwise the character is returned bound to Char.
Keypressed: The keypressed predicate determines whether a key has been pressed, without a character being returned.
The cut (!): Once the cut succeeds, it acts as a fence, effectively blocking any backtracking beyond it. If any premise beyond the cut fails, Prolog can only backtrack as far as the cut to try another path.
The basic purpose of the cut is to eliminate certain search paths in the problem space. You can think of the paths through the database as the limbs of a tree. If you go out on one limb and discover failure, you normally can move back towards the trunk, find another limb, and start moving outward again. The cut acts like a fence: wherever it is placed in the program, you cannot backtrack beyond it.
The cut can be used for any or all of three purposes:
1. To terminate the search for any further solutions to a goal once a particular rule has been tested.
2. To terminate the search for any further solutions to a goal once a particular rule has been forced to fail with the fail predicate.
3. To eliminate paths in the database in order to speed up execution. The cut is not always the best solution, however.
Types of cuts: There are two types of cuts in Prolog, the green cut and the red cut. The green cut is used to force bindings to be retained once the right clause is reached; green cuts are used to express determinism. The red cut is used to omit explicit conditions. Cuts improve the clarity and efficiency of most programs. Of the two types, the green cut is the more acceptable; you can often use the not predicate instead of a red cut.
Ex:
state(tamilnadu).
state(kerala).
state(ap).
state(up).
state(karnataka).
state(mp).
find :-
    state(S),
    write("Do you belong to "), write(S), write("?"),
    readln(Reply),
    Reply = yes, !,
    write("So you belong to "), write(S).
Backtracking tries each state in turn until the user answers yes; for example, answering yes at up prints "So you belong to up", and the cut prevents any further states from being tried.
Equality operator: =
Comparison operators: <, >, =, <=, >=, <>
Arithmetic operators: +, -, *, /, mod, div
Recursion: Recursion is one of Prolog's most important execution-control techniques. It is often used with the fail or cut predicates. Recursion is a technique in which something is defined in terms of itself; in Prolog, recursion is the technique of using a clause to invoke a copy of itself.
Program:
predicates
    count(integer)
clauses
    count(9).
    count(N) :-
        write(" ", N),
        NN = N + 1,
        count(NN).
The second count clause contains a copy of itself. If you specify the goal count(1), you should see the following output: 1 2 3 4 5 6 7 8 True
The repeat predicate: One predicate that uses recursion is the repeat predicate. Although repeat is not a built-in predicate, it is so important that you will probably want to add it to many programs. Its definition is:
repeat.
repeat :- repeat.
The repeat predicate is useful for forcing a program to generate alternative solutions through backtracking. When the repeat predicate is used in a rule, the predicate always succeeds; if a later premise causes failure of the rule, Prolog will backtrack to the repeat predicate.
Basic rules of recursion:
1. A program must include some method of terminating the recursion loop.
2. Variable bindings in a rule or fact apply only to the current layer.
3. In most applications, the recursive procedure should do its work on the way down.
Recursion is not always the best programming choice: recursive programs can be complex, and the flow of execution is not always easy to follow.
length(List, Length): The length utility simply measures the list, reporting either how long it is, which lists are of a specified length, or whether a specified list has the specified length.
length([], 0).
length([_|Tail], Len) :-
    length(Tail, Len1),
    Len = Len1 + 1.
location(Position, List, Object): This predicate can either find the location of the specified object, find the object at a given location, or confirm the presence of the object at the position.
location(1, [Item|_], Item).
location(Num, [_|Tail], Item) :-
    location(N, Tail, Item),
    Num = N + 1.
insert, extract, replace: These three utilities share a common argument pattern: (Object, Location, List, NewList).
insert(Object, Location, List, NewList): The insert utility differs from append in that it allows you to select the location in the list at which to do your insertion.
insert(Item, 1, List, [Item|List]) :- !.
insert(Item, Loc, [Head|Rest], [Head|Rest1]) :-
    Loc1 = Loc - 1,
    insert(Item, Loc1, Rest, Rest1).
replace(Object, Location, List, NewList): The replace utility goes to the specified location and substitutes the specified object for the object found there.
replace(Item, 1, [_|Rest], [Item|Rest]) :- !.
extract(Object, Location, List, NewList): The extract utility shortens the list by removing the item at the specified location.
append(List, List, List): Using append, you can add the elements of one list to another. Since an empty list is still a list, this means you can use append to build a list from scratch.
append([], L, L).
append([X|L1], L2, [X|L3]) :-
    append(L1, L2, L3).
delete(Object, List, NewList): Deleting an object from a list is also a recursive process.
delete(X, [X|List2], List2).
delete(X, [Other|Rem], [Other|Rem1]) :-
    delete(X, Rem, Rem1).
dup(List): The dup utility simply checks for the presence of duplicate elements. If all you want to do is check, it is helpful.
dup([]) :- !, fail.
dup([H|T]) :- member(H, T), !.
dup([_|T]) :- dup(T).
reverse(List, List): As its name suggests, the reverse utility takes a specified list and reverses the order of its elements.
reverse([], []).
reverse([H|T], NewList) :-
    reverse(T, L2),
    append(L2, [H], NewList).
Notice that no cut markers are included in this utility. Optimizing it for large lists would require a cut in the null-list rule and in append's null-list rule.
Shift (list,list):
This utility puts the head of a list at the end shifting all elements one position to
the left.
For a left shift, [1,2,3] becomes [2,3,1].
lshift([H | T], NewList) :- append(T, [H], NewList).
For a right shift, [1,2,3] becomes [3,1,2].
rshift(OldList, [H | T]) :- append(T, [H], OldList).
Simple problems:
LENGTH OF THE LIST.
Program:
Domains
List = integer*
Predicates
Length_of (list, integer)
Clauses
length_of([], 0).
length_of([_ | T], L) :- length_of(T, TailLength),
L = TailLength + 1.
Result:
Goal: length_of([1,2,3], L)
L=3.
/*TO SEE IF A STRING IS IN THE LIST OR NOT*/
Domains
Object=symbol
list=object*
Predicates
Member (object,list)
Clauses
member(X, [X | _]).
member(X, [_ | Tail]) :- member(X, Tail).
Goal: member(ram, [ram, sam, krishna])
True
1 solution
/*TO DELETE AN ITEM*/
Domains
item = symbol
list = symbol*
Predicates
delete(item, list, list)
Clauses
delete(Item, [Item | Tail], Tail).
delete(Item, [X | Tail], [X | Tail1]) :- delete(Item, Tail, Tail1).
Goal: delete(anand, [anand, rajesh, krishna, praveen], L)
L = [rajesh, krishna, praveen]
1 solution.
/* ADDING AN ELEMENT TO THE LIST */
Domains
objectlist = symbol *
item = symbol
Predicates
add ( symbol, objectlist, objectlist )
Clauses
add(X, List, [X | List]).
Result:
Goal: add(vijay, [giri, vamsi, koti], L)
L = [vijay, giri, vamsi, koti].
/* SUM OF INTEGER LIST */
Domains
List = integer *
Sum = integer
Predicates
sum_of_list ( list, sum )
Clauses
sum_of_list ( [], 0 ).
sum_of_list([Head | Tail], Sum) :- sum_of_list(Tail, TailSum), Sum = Head + TailSum.
Result:
Goal: sum_of_list([], Sum)
Sum = 0
Goal: sum_of_list([1,2,3,4,5], Sum)
Sum = 15.
/*CALCULATE FACTORIAL*/
Domains
X,factx = integer
Predicates
factorial(x,Factx)
Clauses
factorial (1,1).
factorial(X, FactX) :- Y = X - 1, factorial(Y, FactY), FactX = X * FactY.
Result:
Goal: factorial(5, X)
X = 120
/* APPENDING TWO LISTS*/
Domains
item = symbol
list = symbol*
Predicates
append(list, list, list)
Clauses
append([], List, List).
append([H | List1], List2, [H | List3]) :- append(List1, List2, List3).
Result:
Goal: append([rajesh, mahesh], [divya, deepthi], L)
L = [rajesh, mahesh, divya, deepthi].
LISP
LISP (List Processing) is an AI programming language developed by John McCarthy in the late
1950s. Lisp is a symbolic processing language that represents information in lists and manipulates these
lists to derive information. Many dialects of Lisp have been developed over the years; the
foundations of these dialects remain the same, while the syntax and functionality show a marked
change.
LISP DIALECTS
S. No   Dialect                   Developed by
1       Common Lisp               Gold Hill Computers
2       Franz Lisp                Franz Inc.
3       Inter Lisp                Xerox
4       Mu Lisp                   Microsoft Inc., The Software House
5       Portable Standard Lisp    University of Utah
6       Vax Lisp                  Digital Equipment Corp.
7       X Lisp                    Blue Users Group
8       Zeta Lisp                 LMI, Symbolics, Xerox
Features of LISP:
1. LISP offers equivalence of form between programs and data in the language,
which allows data structures to be executed as programs and programs to be
modified as data.
2. LISP relies heavily on recursion as a control structure, rather than the
iteration (looping) that is common in most programming languages.
3. LISP has an interpreter, which means that its debugging facilities are
unmatched by any compiled language.
4. LISP encourages the use of dynamic data structures, which leads to the design of
flexible systems.
5. The LISP macro facility allows new languages to be defined and embedded within
the LISP system.
Preliminaries of Lisp: The LISP language is used exclusively for manipulating lists. Its major
characteristic is that the basic elements are treated as symbols, irrespective of whether they are numeric
or alphanumeric. The basic data element in Lisp, an atom, is indivisible. Lisp has two basic types of
atoms: numbers and symbols. Numbers represent numerical values; any type of number, such as a positive
or negative integer, floating-point number or decimal, is acceptable. Symbols represent alphanumeric
values; a symbol can be a combination of alphabets and numerals.
(apple orange grapes mango)
(millimeter container decimeter meter)
(78 65 71 70 68)
(bullock_cart (hercules atlas hero) (tvs 50 kelvinator mofa))
Such a combination of lists within lists is called a nested, multiple or complex list. The only
thing to keep in mind is that the numbers of opening and closing parentheses must be the same; if not, the
system will flash an error message. Because of this parenthesis problem, Lisp is jovially referred to as a
language that has Lots of Infuriating Stupid Parentheses.
LISP FUNCTIONS:
A Lisp program is a collection of small routines, which define simple
functions. So that complex functions may be written, the language comes with a set of basic
functions called primitives. These primitives serve commonly required operations. Apart from these
primitives, Lisp permits you to define your own functions. Because of this independence, Lisp is highly
flexible; since tailor-made functions are defined and manipulated, modularity is high. Lisp uses prefix
notation for its operations. The basic primitives of Lisp are classified as:
1. Arithmetic primitives
2. Boolean primitives
3. List manipulation primitives
1. Arithmetic primitives: + addition, - subtraction, * multiplication, / division. 1+ adds 1 to its
argument; expt raises its first argument to the power specified by its second argument.
Quotient and remainder together give the result of division between integers, recip gives the
reciprocal, and max and min return the maximum and minimum of their arguments.
(+ 6 2) => 8 ; (* 6 2) => 12
(- 6 2) => 4 ; (/ 6 2) => 3
(remainder 14 3) => 2 ; (recip 5) => 0.2 ; (max 8 3 9) => 9
(min 8 3 9) => 3
2. Boolean primitives: These primitives provide a result which is Boolean in nature, i.e. true or false.
Some of them require only one argument while others need more than one.
Some such primitives are:
1. atom: to find out whether the element is an atom or not.
Ex: (atom 'raman) => T
(atom '(25 35)) => NIL
2. numberp: determines if the atom is a number or not.
Ex: (numberp 'raman) => NIL ; (numberp 20) => T
3. listp: determines if the input is a list or not.
Ex: (listp '(25 35 46 75)) => T
(listp 'raman) => NIL
4. zerop: to find out whether the number is zero or not.
Ex: (zerop 26) => NIL ; (zerop 0) => T
5. oddp: to find out whether the input is odd.
Ex: (oddp 65) => T ; (oddp 60) => NIL
6. evenp: to find out whether the input is even or not.
Ex: (evenp 78) => T ; (evenp 89) => NIL
7. equal: to find out whether the given lists are equal.
Ex: (equal '(janaki raman sarukesi) '(janaki raman sarukesi)) => T
(equal '(75 67 94) '(4 3 65 987)) => NIL
8. greaterp: to find out whether the first argument is greater than the second.
Ex: (greaterp 46 86) => NIL
(greaterp 86 46) => T
9. lessp: to find out whether the first argument is less than the second.
Ex: (lessp 46 86) => T
List manipulation primitives: The purposes of the list manipulation primitives are: 1. creating a
new list; 2. modifying an existing list with the addition, deletion or replacement of an atom; 3. extracting
portions of a list.
For these, Lisp provides some primitives, and well-specified functions can be developed from them. In
Lisp, values are assigned to variables by the setq primitive. The primitive has two arguments, the first being the
variable and the second the value to be assigned to the variable. The value could be an atom or a list
itself.
Ex: 1. (setq a 22), when evaluated, assigns 22 to the variable a.
2. (setq tv 'onida) would assign tv = onida.
3. (setq pressure1 22) would assign pressure1 = 22.
(setq pressure2 pressure1) would assign pressure2 = 22.
List construction: Lists are constructed using the CONS primitive. Cons adds a new element to the
front of an existing list. This primitive needs two arguments and returns a single list.
Ex: 1. (cons 'p '(q r s)) => (P Q R S)
2. (cons 'ram '(laxman gopi raj))
=> (RAM LAXMAN GOPI RAJ)
3. (setq a '(b c d))
(setq x '(x y z))
(cons a x)
=> ((B C D) X Y Z)
While the cons primitive adds a new element to the front of the existing list, the primitive append joins a
list onto the tail of the existing list.
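To make the cons/append distinction concrete, here is a rough Python equivalent (the helper names are chosen for illustration; Python lists stand in for Lisp lists):

```python
def cons(x, lst):
    # cons places a single new element at the front of an existing list
    return [x] + lst

def append(l1, l2):
    # append joins the second list onto the tail of the first
    return l1 + l2

print(cons("ram", ["laxman", "gopi", "raj"]))    # ['ram', 'laxman', 'gopi', 'raj']
print(append(["b", "c", "d"], ["x", "y", "z"]))  # ['b', 'c', 'd', 'x', 'y', 'z']
```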
Extracting portions of a list:
Extracting portions of a list pertains to decomposing the list and getting an atom out of it. For
this purpose, LISP provides two major primitives. They are CAR and CDR.
The CAR primitive returns the first element of a list.
Ex: 1. (car '(ram laxman bharath shatrughnan)) => RAM
2. (car '((32 36) (56 54) (67 31) 67 87 89)) => (32 36)
The CDR primitive returns the list excluding the first element.
Ex: 1. (cdr '(ram lakshman bharath shatrughnan))
=> (LAKSHMAN BHARATH SHATRUGHNAN)
2. (cdr '((32 36) (56 54) (67 31) 67 87 89))
=> ((56 54) (67 31) 67 87 89)
Using these two list-manipulating primitives it is possible to extract any portion of a list. For
example, if we would like to get the element BHARATH from the following list:
(RAM LAKSHMAN BHARATH SHATRUGHNAN)
we will have to adopt the following method.
Step 1:
Break the list into two using the CDR function.
(cdr '(ram lakshman bharath shatrughnan))
=> (LAKSHMAN BHARATH SHATRUGHNAN)
Step 2:
Apply CDR once more to drop LAKSHMAN.
(cdr '(lakshman bharath shatrughnan))
=> (BHARATH SHATRUGHNAN)
Step 3:
Apply CAR to pick out the first element of the remaining list.
(car '(bharath shatrughnan))
=> BHARATH
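The CAR/CDR extraction described above can also be sketched in Python, with list slicing standing in for the two primitives (the helper names are illustrative):

```python
def car(lst):
    # first element of the list
    return lst[0]

def cdr(lst):
    # the list without its first element
    return lst[1:]

names = ["ram", "lakshman", "bharath", "shatrughnan"]
# two CDRs drop RAM and LAKSHMAN; CAR then picks out BHARATH
print(car(cdr(cdr(names))))  # bharath
```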
->(defun maximum2 (a b)
(cond ((> a b) a)
(t b)))
MAXIMUM2
->
Note the t in the second clause preceding b. This forces the last clause to be evaluated when the first
clause is nil (false).
->(maximum2 234 320)
320
Find maximum of 3 numbers
->(defun maximum3 (a b c)
(cond ((> a b) (cond ((> a c) a) (t c)))
((> b c) b)
(t c)))
MAXIMUM3
->
->(maximum3 20 30 25)
30
->
Logical functions:
Like predicates, logical functions may also be used for flow of control.
The basic logical operations are AND, OR and NOT. NOT is the simplest: it takes one argument and returns
T if the argument is NIL, and NIL otherwise.
Input, output and local variables: These operations are performed with the input-output
functions. The most commonly used I/O functions are read, print, prin1, princ, terpri and format.
Read: It takes no arguments; it reads one expression typed at the keyboard and returns that expression as its value.
Terpri: It takes no arguments. It introduces a new line (carriage return and line feed) wherever it
appears and then returns nil.
Format: The format function permits us to create cleaner output than is possible with just the basic
printing functions.
->(format <destination> <string>)
Destination specifies where the output is to be directed. String is the desired output string, intermixed
with format directives, which specify how each argument is to be represented.
Directives appear in the string in the same order as the arguments to be printed. Each directive is
preceded with a tilde character (~) to identify it as a directive.
~A the argument is printed as though princ were used.
~S the argument is printed as though prin1 were used.
~D the argument, which must be an integer, is printed as a decimal number.
~F the argument, which must be a floating-point number, is printed as a decimal floating-point number.
~C the argument is printed as a character.
~% a new line is printed.
Ex: suppose x = 3.0 and y = 9.42:
->(format t "circle radius = ~F ~% circle area = ~F" x y)
circle radius = 3.0
circle area = 9.42
->Ex: program to find the area of a circle
(defun circle-area ()
(terpri)
(princ "please enter the radius ")
(setq radius (read))
(princ "the area of the circle is ")
(princ (* 3.1416 radius radius))
(terpri))
CIRCLE-AREA
->(circle-area)
please enter the radius 4
the area of the circle is 50.2656
->
Iteration: we introduce a structured form of iteration with the DO construct, which is somewhat like
the while loop in Pascal.
The do statement has the form
(do ((<var1 val1 var-update1>)
(<var2 val2 var-update2>))
(<test> <return-value>)
(<s-expressions>))
The s-expressions forming the body of the construct are optional.
Ex: ->(defun factorial (n)
(do ((count n (- count 1))
(product 1 (* product count)))
((equal 0 count) product)))
Factorial
->
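The stepping behaviour of DO can be mirrored with a while loop; here is a Python sketch of the same iteration, in which count steps down while product accumulates:

```python
def factorial(n):
    # count runs n, n-1, ..., 1 while product accumulates the running
    # product, mirroring the two stepped variables of the Lisp DO loop
    count, product = n, 1
    while count != 0:
        product = product * count
        count = count - 1
    return product

print(factorial(5))  # 120
```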
Recursion: For many problems, recursion is the natural method of solution. Such problems occur
frequently in mathematical logic, and the use of recursion will often result in programs that are both
elegant and simple. A recursive function is one which calls itself successively to reduce a problem to a
sequence of simpler steps. Recursion requires a stopping condition and a recursive step.
We illustrate with a recursive version of factorial. The recursive step in factorial is the product of n and
factorial (n-1); the stopping condition is reached when n = 0.
->(defun factorial (n)
(cond ((zerop n) 1)
(t (* n (factorial (- n 1))))))
FACTORIAL
->(factorial 6)
720
->
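The same stopping condition and recursive step translate directly into Python:

```python
def factorial(n):
    # stopping condition: factorial(0) is 1
    if n == 0:
        return 1
    # recursive step: n times factorial of (n - 1)
    return n * factorial(n - 1)

print(factorial(6))  # 720
```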
Property lists: One of the unique and most useful features of Lisp as an AI language is the ability to assign
properties to atoms. Property list functions permit one to assign such properties to an atom and to
retrieve or replace them as required.
The function putprop assigns properties to an atom. It takes three arguments: an object name (an atom), a
property or attribute name, and a property or attribute value. For example, to describe a car, we can
assign properties such as make, year, color and style with the following statements.
Syntax: (putprop object value attribute)
Ex: (putprop 'car 'ford 'make) => FORD
(putprop 'car 1988 'year) => 1988
(putprop 'car 'red 'color) => RED
getprop: to retrieve a property value, such as the color of the car, we use the function get, which takes
the two arguments object and attribute.
Syntax: (get object attribute)
Ex: (get 'car 'color) => RED
(get 'car 'year) => 1988
remprop: this function is used to remove a property. It takes two arguments, object and attribute.
Syntax: (remprop object attribute)
Ex: (remprop 'car 'color) => RED
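A property list behaves much like a per-object dictionary of attribute/value pairs. Here is a minimal Python sketch of putprop, get and remprop (the function names mirror the Lisp ones; the global table is an assumption of this sketch):

```python
properties = {}

def putprop(obj, value, attribute):
    # attach attribute = value to the named object, returning the value
    properties.setdefault(obj, {})[attribute] = value
    return value

def get(obj, attribute):
    # retrieve a previously stored property value (None if absent)
    return properties.get(obj, {}).get(attribute)

def remprop(obj, attribute):
    # remove the property, returning the value that was removed
    return properties.get(obj, {}).pop(attribute, None)

putprop("car", "ford", "make")
putprop("car", "red", "color")
print(get("car", "color"))      # red
print(remprop("car", "color"))  # red
print(get("car", "color"))      # None
```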
setf: the newer function setf is the same as setq except that it is more general. It is an assignment function which
also takes two arguments, the first of which may be either an atom or an access function (car, cdr, get),
and the second the value to be assigned.
arrays: single- or multiple-dimension arrays may be defined in Lisp using the make-array function. The
items stored in the array may be any Lisp objects.
(setf myarray (make-array '(10)))
#A(0 0 0 0 0 0 0 0 0 0)
Mapping functions:
Mapcar: Mapcar is one of several mapping functions provided in Lisp to apply some function
successively to one or more lists of elements. The first argument of mapcar is a function and the
remaining arguments are lists of elements to which the named function is applied. The results of
applying the function to successive members of the lists are placed in a new list, which is returned.
Ex: 1. ->(mapcar '1+ '(5 10 15 20 25))
(6 11 16 21 26)
->
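Python's built-in map works the same way as mapcar: it applies a function to successive elements and collects the results (here a lambda stands in for the Lisp 1+ function):

```python
result = list(map(lambda x: x + 1, [5, 10, 15, 20, 25]))
print(result)  # [6, 11, 16, 21, 26]
```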
Questions:
1. Define Artificial Intelligence. What are its techniques and its related fields (applications).
2. Distinguish between Human brain and Computers.
3. Distinguish between Artificial Intelligence programming and Conventional programming
(Software programming).**
4. Define state space representation and give an example of state space representation (water jug
problem, chess problem)
5. State the types of problems and their characteristics.
6. What are the various characteristics that are considered in solving a problem? Explain.**
Or
Explain what are the key dimensions to analyze the problem
7. How to represent a problem in state space? Give good state space representation for Tower of
Hanoi problem.
8. What is production system? Describe its suitability to perform state space search.**
9. Write blind or uninformed or brute-force searching techniques (Dfs and Bfs).
10. Write heuristic search techniques (Generate and Test, hill climbing, Best first search, A*, AO*or
And Or graph algorithms)**
11. Describe the constraint satisfaction procedure and solve the following problems:
SEND + MORE = MONEY, CROSS + ROADS = DANGER, DONALD + GERALD = ROBERT.
12. Explain the Means end analysis.
13. Distinguish between Declarative knowledge representations and procedural knowledge
representations with suitable example.
14. Describe the Key issues in knowledge representation
15. Describe the knowledge representation using semantic nets
16. Construct partitioned semantic net representations for the following statements.
(a). Every batter hit a ball
(b). All the batters like the pitch
17. Describe the knowledge representation using frames.
18. Describe the knowledge representation using scripts with suitable example.
19. Construct a script for going to a restaurant or super market or movie or an interview or college.
20. Describe the knowledge representation using conceptual dependencies.
21. Show a conceptual dependence representation of the sentence: I gave the man a Red Book
22. What are the advantages of predicate logic over propositional logic?
23. Write the rules for transform a sentence into clausal form (canonical form).
24. Represent the following facts in FOPL and convert them into clause form. Use resolution
techniques to show that Ravi is a spy.
One of Raman, Ravi, Raghu and Ramesh is a spy.
Raman is not a spy.
Spies wear light-colored dresses and do not attract the attention of others.
Raghu was wearing a dark-colored suit.
Ramesh was the center of attention that evening.
25. Assume the following facts.
Steve only likes easy courses.
Science courses are hard.
All the courses in the basket-weaving department are easy.
BK301 is a basket-weaving course.
Use resolution to answer the question: What course would Steve like?