
Artificial Intelligence Module -IV

Module-4

GAME PLAYING
OVERVIEW

Games hold an inexplicable fascination for many people, and the notion that computers might play
games has existed at least as long as computers. Charles Babbage, the nineteenth-century computer
architect, thought about programming his Analytical Engine to play chess and later of building a
machine to play tic-tac-toe. Two of the pioneers of the science of information and computing
contributed to the fledgling computer game-playing literature. A few years later, Alan Turing described
a chess-playing program, although he never built it. (Read, just to know)

✓ A game is defined as a sequence of choices where each choice is made from a number of
discrete alternatives. Each sequence ends in a certain outcome and every outcome has a
definite value for the opening player. In case of two player games, both the players have
opposite goals.
✓ There were two reasons that games appeared to be a good domain in which to explore machine
intelligence:
They provide a structured task in which it is very easy to measure success or failure.
They did not obviously require large amounts of knowledge. They were thought to be
solvable by straightforward search from the starting state to a winning position.
✓ The first of these reasons remains valid and accounts for continued interest in the area of
game playing by machine. Unfortunately, the second is not true for any but the simplest games.
For example, consider chess.
The average branching factor is around 35.
In an average game, each player might make 50 moves.
So in order to examine the complete game tree, we would have to examine 35^100 positions.
✓ Thus it is clear that a program that simply does a straightforward search of the game tree will
not be able to select even its first move during the lifetime of its opponent. Some kind of
heuristic search procedure is necessary.
✓ To improve the effectiveness of a search-based problem-solving program two things can be
done:
Improve the generate procedure so that only good moves (or paths) are generated.
Improve the test procedure so that the best moves (or paths) will be recognized and
explored first.
✓ The ideal way to use a search procedure to find a solution to a problem is to generate moves
through the problem space until a goal state is reached. In the context of game-playing
programs, a goal state is one in which we win.
✓ In order to choose the best moves (ply), these methods use a static evaluation function. This
function is similar to that of the heuristic function h' in the A* algorithm.

✓ Unfortunately, deciding which moves have contributed to wins and which to losses is not
always easy. The problem of deciding which of a series of actions is actually responsible
for a particular outcome is called the credit assignment problem.

➢ THE MINIMAX SEARCH PROCEDURE


• The Mini-max algorithm is a recursive (backtracking) algorithm used in decision-making and game theory. It provides an optimal move for the player on the assumption that the opponent is also playing optimally.
• The Mini-Max algorithm uses recursion to search through the game-tree.
• The Min-Max algorithm is mostly used for game playing in AI, such as Chess, Checkers, Tic-Tac-Toe, Go, and various other two-player games. The algorithm computes the minimax decision for the current state.
• In this algorithm two players play the game; one is called MAX and the other is called MIN.
• Each player tries to ensure that the opponent gets the minimum benefit while they themselves get the maximum benefit.
• Both players are opponents of each other: MAX will select the maximized value and MIN will select the minimized value.
• The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
• The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds.

➢ Working of Min-Max Algorithm:


• The working of the minimax algorithm can be easily described using an example. Below we take an example game-tree representing a two-player game.
• In this example there are two players, one called Maximizer and the other called Minimizer.
• Maximizer will try to get the maximum possible score, and Minimizer will try to get the minimum possible score.
• This algorithm applies DFS, so in this game-tree we have to go all the way down to the terminal nodes at the leaves.
• At the terminal nodes, the terminal values are given, so we will compare those values and backtrack the tree until the initial state is reached. Following are the main steps involved in solving the two-player game tree:
Step 1: In the first step, the algorithm generates the entire game-tree and applies the utility function to get the utility values for the terminal states. In the tree diagram below, let A be the initial state of the tree. Suppose the maximizer takes the first turn, with worst-case initial value -∞, and the minimizer takes the next turn, with worst-case initial value +∞.

• Step 2: First we find the utility values for the Maximizer. Its initial value is -∞, so we compare each terminal-state value with the Maximizer's initial value and determine the values of the nodes one level up, taking the maximum in each case:
• For node D: max(-1, -∞) = -1, then max(-1, 4) = 4
• For node E: max(2, -∞) = 2, then max(2, 6) = 6
• For node F: max(-3, -∞) = -3, then max(-3, -5) = -3
• For node G: max(0, -∞) = 0, then max(0, 7) = 7

Step 3: In the next step, it is the Minimizer's turn, so it will compare the child-node values with +∞ and determine the third-layer node values:
• For node B: min(4, 6) = 4
• For node C: min(-3, 7) = -3

Step 4: Now it is the Maximizer's turn again, and it will choose the maximum of all node values to find the value for the root node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will be more than 4 layers.

o For node A: max(4, -3) = 4

That was the complete workflow of the minimax two-player game.
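A minimal Python sketch of this traversal follows, with the leaf values -1, 4, 2, 6, -3, -5, 0, 7 taken from the worked example above (the nested lists stand for nodes D, E, F, and G):

def minimax(node, maximizing):
    # Leaves are plain numbers; interior nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A -> B, C; B -> D, E; C -> F, G
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))   # -> 4, the value computed at root node A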

Properties of Mini-Max algorithm:


o Complete- The Min-Max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
o Optimal- The Min-Max algorithm is optimal if both opponents are playing optimally.
o Time complexity- As it performs DFS on the game-tree, the time complexity of the Min-Max algorithm is O(b^m), where b is the branching factor of the game-tree and m is the maximum depth of the tree.
o Space Complexity- The space complexity of the Mini-max algorithm is similar to that of DFS, which is O(b*m).


Limitation of the minimax Algorithm:

o The main drawback of the minimax algorithm is that it gets really slow for complex games such as Chess, Go, etc. These games have a huge branching factor, and the player has lots of choices to decide among. This limitation of the minimax algorithm can be mitigated by alpha-beta pruning, which is discussed in the next topic.

Algorithm: MINIMAX (Position, Depth, Player)


1. If DEEP-ENOUGH(Position, Depth), then return the structure
VALUE = STATIC (Position, Player)
PATH = nil
This indicates that there is no path from this node and that its value is that determined
by the static evaluation function.
2. Otherwise, generate one more ply of the tree by calling the function MOVE-GEN and setting
SUCCESSORS to the list it returns.
3. If SUCCESSORS is empty, then there are no moves to be made, so return the same
structure that would have been returned if DEEP-ENOUGH had returned true.
4. If SUCCESSORS is not empty, then examine each element in turn and keep track of the
best one. This is done as follows.
Initialize BEST-SCORE to the minimum value that STATIC can return. It will be
updated to reflect the best score that can be achieved by an element of SUCCESSORS.
For each element SUCC of SUCCESSORS, do the following:
(a) Set RESULT-SUCC to MINIMAX (SUCC, Depth + 1, OPPOSITE (Player))
This recursive call to MINIMAX will actually carry out the exploration of
SUCC.
(b) Set NEW-VALUE to -VALUE (RESULT-SUCC). This will cause it to reflect the
merits of the position from the opposite perspective from that of the next lower level.
(c) If NEW-VALUE > BEST-SCORE, then we have found a successor that is better
than any that have been examined so far. Record this by doing the following:
(i) Set BEST-SCORE to NEW-VALUE.
(ii) The best known path is now from CURRENT to SUCC and then on to the
appropriate path down from SUCC as determined by the recursive call to
MINIMAX. So set BEST-PATH to the result of attaching SUCC to the front of
PATH (RESULT-SUCC).
5. Now that all the successors have been examined, we know the value of Position as well as
which path to take from it. So return the structure
VALUE = BEST-SCORE
PATH = BEST-PATH
When the initial call to MINIMAX returns, the best move from CURRENT is the first
element on PATH.
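The following is a runnable Python sketch of this procedure. Here DEEP-ENOUGH, STATIC, and MOVE-GEN are replaced by hypothetical stand-ins (the numeric positions, evaluation, and move generator below are illustrative placeholders, not part of the original algorithm); note the negation in step 4(b), which flips the returned value to the current player's perspective:

MAX_DEPTH = 2

def deep_enough(position, depth):
    return depth >= MAX_DEPTH

def static(position, player):
    # Hypothetical evaluation: positions are numbers, scored from the
    # point of view of `player` (+1 or -1).
    return position * player

def move_gen(position, player):
    # Hypothetical successor generator.
    return [position + 1, position - 2]

def minimax(position, depth, player):
    """Return (value, path), mirroring the VALUE/PATH structure."""
    if deep_enough(position, depth):
        return static(position, player), []
    successors = move_gen(position, player)
    if not successors:
        return static(position, player), []
    best_score, best_path = None, []
    for succ in successors:
        result_value, result_path = minimax(succ, depth + 1, -player)
        # Step 4(b): negate, since the recursive value is scored from
        # the opponent's perspective.
        new_value = -result_value
        if best_score is None or new_value > best_score:
            best_score = new_value
            best_path = [succ] + result_path
    return best_score, best_path

value, path = minimax(0, 0, +1)
print(value, path)   # the best move from the start is path[0]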


• Problems with minimax search


✓ Number of game states to be examined is exponential.
✓ We need to examine all the possible nodes in a game.
✓ We cannot reduce the number of tree branches to be explored.

The minimax algorithm performs a complete depth-first exploration of the game tree. If the maximum depth of the tree is 'm' and there are 'b' legal moves at each point (branching factor), then the time complexity of the algorithm is O(b^m). The space complexity is O(b*m). This algorithm serves as a basis for mathematical analysis of games.
➢ ADDING ALPHA-BETA CUTOFFS/ALPHA BETA PRUNING

✓ The alpha-beta pruning strategy is used to reduce the number of tree branches explored, thereby eliminating large parts of the tree from consideration.
✓ Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique
for the minimax algorithm.
✓ As we have seen with the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. There is a technique by which we can compute the correct minimax decision without checking each node of the game tree; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
✓ Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but also entire sub-trees.
✓ The two parameters can be defined as:
o Alpha: The best (highest-value) choice we have found so far at any point along the path of
Maximizer. The initial value of alpha is -∞.
o Beta: The best (lowest-value) choice we have found so far at any point along the path of
Minimizer. The initial value of beta is +∞.
✓ Alpha-beta pruning returns the same move as the standard minimax algorithm does, but it removes all the nodes that do not really affect the final decision and only make the algorithm slow. Pruning these nodes makes the algorithm fast.

Condition for Alpha-beta pruning:

The main condition required for alpha-beta pruning is: α >= β

Key points about alpha-beta pruning:


o The Max player will only update the value of alpha.
o The Min player will only update the value of beta.

o While backtracking the tree, the node values are passed up to the parent nodes, not the values of alpha and beta.
o We only pass the alpha and beta values down to the child nodes.

Working of Alpha-Beta Pruning:

Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.

Step 1: In the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value will also be 3.

Step 3: Now the algorithm backtracks to node B, where the value of β changes, as this is Min's turn. Now β = +∞ is compared with the available subsequent node value, i.e. min(∞, 3) = 3; hence at node B, α = -∞ and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed along.

Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned, the algorithm does not traverse it, and the value at node E is 5.


Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A, the value of alpha is changed to the maximum available value, 3, as max(-∞, 3) = 3, with β = +∞. These two values are now passed to the right successor of A, which is node C. At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared, first with the left child 0 (max(3, 0) = 3) and then with the right child 1 (max(3, 1) = 3), so α remains 3, but the node value of F becomes 1.

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta is changed, as it is compared with 1: min(∞, 1) = 1. Now at C, α = 3 and β = 1, and again the condition α >= β is satisfied, so the next child of C, which is G, is pruned, and the algorithm does not compute the entire sub-tree G.


Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. Following is the final game tree, showing the nodes that were computed and the nodes that were never computed. Hence the optimal value for the maximizer is 3 for this example.
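A minimal Python sketch of alpha-beta pruning on this example follows. Since pruned leaves never influence the result, the values used for them below (E's right child 9, and G's 7 and 5) are arbitrary placeholders (hypothetical):

import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:    # cutoff: Min above will never allow this
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:    # cutoff: Max above already has better
                break
        return value

# A -> B, C; B -> D = [2, 3], E = [5, 9]; C -> F = [0, 1], G = [7, 5]
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, True))   # -> 3; E's right child and all of G are pruned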

Move Ordering in Alpha-Beta pruning:

The effectiveness of alpha-beta pruning is highly dependent on the order in which each node is examined.
Move order is an important aspect of alpha-beta pruning.

It can be of two types:

o Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of the leaves of the tree and works exactly like the minimax algorithm. It then also consumes more time because of the alpha-beta bookkeeping; such an ordering is called worst ordering. In this case, the best move occurs on the right side of the tree. The time complexity for such an order is O(b^m).

o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a great deal of pruning happens in the tree and the best moves occur at the left side of the tree. We apply DFS, which searches the left side of the tree first, so the algorithm can go twice as deep as the minimax algorithm in the same amount of time. The complexity with ideal ordering is O(b^(m/2)).

Algorithm: MINIMAX-A-B (Position, Depth, Player, Use-Thresh, Pass-Thresh)


1. If DEEP-ENOUGH(Position, Depth), then return the structure
VALUE = STATIC (Position, Player)
PATH = nil
2. Otherwise, generate one more ply of the tree by calling the function MOVE-GEN
(Position, Player) and setting SUCCESSORS to the list it returns.
3. If SUCCESSORS is empty, there are no moves to be made, return the same structure that
would have been returned if DEEP-ENOUGH had returned TRUE.
4. If SUCCESSORS is not empty, then go through it, examining each element and keeping
track of the best one. This is done as follows. For each element SUCC of SUCCESSORS:
a) Set RESULT-SUCC to
MINIMAX-A-B (SUCC, Depth + 1, OPPOSITE (Player), -Pass-Thresh, -Use-Thresh)
b) Set NEW-VALUE to -VALUE (RESULT-SUCC).
c) If NEW-VALUE > Pass-Thresh, then we have found a successor that is better
than any that have been examined so far. Record this by doing the following.
i. Set Pass-Thresh to NEW-VALUE.
ii. The best known path is now from CURRENT to SUCC and then on to the
appropriate path from SUCC as determined by the recursive call to
MINIMAX-A-B. So set BEST-PATH to the result of attaching SUCC to the
front of PATH (RESULT-SUCC).
d) If Pass-Thresh (reflecting the current best value) is not better than Use-Thresh,
then we should stop examining this branch. But both thresholds and values have
been inverted. So if Pass-Thresh >= Use-Thresh, then return immediately with the
value
VALUE = Pass-Thresh
PATH = BEST-PATH
5. Return the structure
VALUE = Pass-Thresh
PATH = BEST-PATH

➢ ADDITIONAL REFINEMENTS

✓ In addition to alpha-beta pruning, there are a variety of other modifications to the minimax procedure that can also improve its performance:
- Waiting for Quiescence
- Secondary Search
- Using Book Moves
- Alternatives to Minimax

➢ ITERATIVE DEEPENING

✓ A number of ideas for searching two-player game trees have led to new algorithms for
single-agent heuristic search. One such idea is iterative deepening, originally used in a
program called CHESS 4.5. Rather than searching to a fixed depth in the game tree,
CHESS 4.5 first searched only a single ply, applying its static evaluation function to the
result of each of its possible moves. It then initiated a new minimax search, this time to a
depth of two ply. This was followed by a three ply search, then a four-ply search, etc. The
name “iterative deepening” derives from the fact that on each iteration, the tree is searched
one level deeper. Figure below depicts this process.
Figure: Iterative Deepening

✓ First, game-playing programs are subject to time constraints. For example, a chess
program may be required to complete all its moves within two hours. Since it is impossible
to know in advance how long a fixed-depth tree search will take, a program may find itself
running out of time.
✓ With iterative deepening, the current search can be aborted at any time and the best
move found by the previous iteration can be played. With effective ordering, the alpha-
beta procedure can prune many more branches, and total search time can be decreased
drastically. This allows more time for deeper iterations.
✓ Depth-first search was efficient in terms of space but required some cutoff depth in order to
force backtracking when a solution was not found. Breadth-first search was guaranteed to
find the shortest solution path but required inordinate amounts of space because all leaf
nodes have to be kept in memory. An algorithm called depth-first iterative deepening
(DFID) combines the best aspects of depth-first and breadth-first search.

Algorithm: Depth-First Iterative Deepening

1. Set SEARCH-DEPTH = 1
2. Conduct a depth-first search to a depth of SEARCH-DEPTH. If a solution path is found,
then return it.
3. Otherwise, increment SEARCH-DEPTH by 1 and go to step 2.

✓ Clearly, DFID will find the shortest solution path to the goal state. Moreover, the
maximum amount of memory used by DFID is proportional to the number of nodes in
that solution path. The DFID is only slower than depth-first search by a constant factor.
DFID avoids the problem of choosing cutoffs without sacrificing efficiency, and DFID is
the optimal algorithm (in terms of space and time) for uninformed search.
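A minimal Python sketch of DFID, using a hypothetical graph and successor function to stand in for a real problem space:

def depth_limited_search(state, goal, successors, limit, path):
    if state == goal:
        return path
    if limit == 0:
        return None
    for nxt in successors(state):
        found = depth_limited_search(nxt, goal, successors, limit - 1, path + [nxt])
        if found is not None:
            return found
    return None

def dfid(start, goal, successors, max_depth=50):
    for depth in range(1, max_depth + 1):   # steps 1 and 3: deepen by one
        found = depth_limited_search(start, goal, successors, depth, [start])
        if found is not None:               # step 2: return the path if found
            return found
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': []}
print(dfid('A', 'G', lambda s: graph.get(s, [])))   # -> ['A', 'C', 'G']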

Algorithm: Iterative-Deepening-A*


1. Set THRESHOLD = the heuristic evaluation of the start state.
2. Conduct a depth-first search, pruning any branch when its total cost function (g + h')
exceeds THRESHOLD. If a solution path is found during the search, return it.
3. Otherwise, increment THRESHOLD by the minimum amount it was exceeded during
the previous step, and then go to Step 2.

✓ Like A*, Iterative-Deepening-A* (IDA*) is guaranteed to find an optimal solution, provided that h' is an admissible heuristic. Because of its depth-first search technique,
IDA* is very efficient with respect to space. IDA* was the first heuristic search algorithm to
find optimal solution paths for the 15-puzzle (a 4x4 version of the 8-puzzle) within
reasonable time and space constraints.
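A minimal Python sketch of IDA* over a small weighted graph; the graph and the heuristic h below are hypothetical, but h is chosen to be admissible:

import math

graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)]}
h = {'S': 4, 'A': 5, 'B': 1, 'G': 0}

def ida_star(start, goal):
    threshold = h[start]                     # step 1
    while True:
        min_excess = math.inf

        def search(node, g, path):
            nonlocal min_excess
            f = g + h[node]
            if f > threshold:                # step 2: prune this branch
                min_excess = min(min_excess, f)
                return None
            if node == goal:
                return path
            for succ, cost in graph.get(node, []):
                found = search(succ, g + cost, path + [succ])
                if found is not None:
                    return found
            return None

        found = search(start, 0, [start])
        if found is not None:
            return found
        if min_excess == math.inf:
            return None                      # space exhausted: no solution
        threshold = min_excess               # step 3: raise the threshold

print(ida_star('S', 'G'))   # -> ['S', 'B', 'G'], the optimal path of cost 5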

*******************


NATURAL LANGUAGE PROCESSING

Language is meant for communicating about the world. By studying language, we can come to
understand more about the world. The entire language-processing problem is divided into two tasks:
- Processing written text: using lexical, syntactic, and semantic knowledge of the language as well
as the required real world information.
- Processing spoken language: using all the information needed above plus additional knowledge about phonology, as well as enough added information to handle the further ambiguities that arise in speech.
Figure: Features of language that make it both difficult and useful

1. The Problem: English sentences are incomplete descriptions of the information that they are intended to convey. For example, "Some dogs are outside." leaves open which of the following holds:
- Some dogs are on the lawn.
- Three dogs are on the lawn.
- Rover, Tripp, and Spot are on the lawn.
Similarly, "I called Lynda to ask her to the movies. She said she'd love to go." implies:
- She was home when I called.
- She answered the phone.
- I actually asked her.
The Good Side: Language allows speakers to be as vague or as precise as they like. It also allows speakers to leave out things they believe their hearers already know.

2. The Problem: The same expression means different things in different contexts. Ex:
- Where's the water? (In a chemistry lab, it must be pure.)
- Where's the water? (When you are thirsty, it must be potable.)
- Where's the water? (Dealing with a leaky roof, it can be filthy.)
The Good Side: Language lets us communicate about an infinite world using a finite number of symbols.

3. The Problem: No natural language program can be complete because new words, expressions, and meanings can be generated quite freely. Ex: I'll fax it to you.
The Good Side: Language can evolve as the experiences that we want to communicate about evolve.

4. The Problem: There are lots of ways to say the same thing:
- Mary was born on October 11.
- Mary's birthday is October 11.
The Good Side: When you know a lot, facts imply each other. Language is intended to be used by agents who know a lot.


➢ INTRODUCTION

A language can be thought of as a pair consisting of a source language and a target representation, together with a mapping between elements of each to the other. The target representation is chosen to be appropriate for the task at hand.
• Steps in the Process
We can break the process down into the following pieces:
1. Morphological Analysis- Individual words are analyzed into their components, and non-
word tokens, such as punctuation, are separated from the words.
2. Syntactic Analysis- Linear sequences of words are transformed into structures that show how
the words relate to each other. Some word sequence may be rejected if they violate the
language’s rules for how words may be combined. For example, an English syntactic analyzer
would reject the sentence “Boy the go the to store.”
3. Semantic Analysis- The structures created by the syntactic analyzer are assigned meanings.
In other words, a mapping is made between the syntactic structures and objects in the task
domain. Structures for which no such mapping is possible may be rejected. For example, in
most universes, the sentence “Colorless green ideas sleep furiously” would be rejected as
semantically anomalous.
4. Discourse Integration- The meaning of an individual sentence may depend on the sentences
that precede it and may influence the meaning of the sentences that follow it. For example, the
word “it” in the sentence, “John wanted it”, depends on the prior discourse context, while
the word “John” may influence the meaning of later sentences.
5. Pragmatic Analysis- The structure representing what was said is reinterpreted to determine
what was actually meant. For example, the sentence “Do you know what time it is?” should
be interpreted as a request to be told the time.

✓ To make the overall language understanding problem tractable, there are following two ways of
decomposing a program:
- The processes and the knowledge required to perform the task.
- The global control structure that is imposed on those processes.
✓ Let’s consider an example to see how the individual processes work. In this example, we assume
that the processes happen sequentially. Suppose we have an English interface to an operating
system and the following sentence is typed:
I want to print Bill’s .init file.
Morphological Analysis
Morphological analysis must do the following things:
- Pull apart the word “Bill’s” into the proper noun “Bill” and the possessive suffix “ ’s ”.
- Recognize the sequence “.init” as a file extension that is functioning as an adjective in
the sentence
✓ In addition, this process will usually assign syntactic categories to all the words in the
sentence. This is usually done now because interpretations for affixes (prefixes and

suffixes) may depend on the syntactic category of the complete word. For example,
consider the word “prints”. This word is either a plural noun (with the “-s” marking
plural) or a third person singular verb (as in “he prints”), in which case the “-s”
indicates both singular and third person. If this step is done now, then in our example,
there will be ambiguity, since "want," "print," and "file" can all function as more than one syntactic category.
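A toy Python sketch of these two morphological steps; the lexicon and category lists are hypothetical stand-ins, not a real morphological analyzer:

FILE_EXTENSIONS = {".init", ".txt"}
CATEGORIES = {"prints": ["plural-noun", "verb-3sg"],
              "print": ["noun", "verb"]}

def morph_analyze(token):
    # Pull apart a possessive: "Bill's" -> ("Bill", "'s")
    if token.endswith("'s"):
        return [(token[:-2], "proper-noun"), ("'s", "possessive-suffix")]
    # Recognize a file extension functioning as an adjective
    if token in FILE_EXTENSIONS:
        return [(token, "file-extension/adjective")]
    # Otherwise list every syntactic category the word can fill
    return [(token, cat) for cat in CATEGORIES.get(token, ["unknown"])]

for tok in ["Bill's", ".init", "prints"]:
    print(tok, "->", morph_analyze(tok))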
Syntactic Analysis
✓ Syntactic analysis must exploit the results of morphological analysis to build a
structural description of the sentence. The goal of this process, called parsing, is to
convert the flat list of words that forms the sentence into a structure that defines the
units that are represented by that flat list. For our example sentence, the result of parsing is shown in Figure below.
Figure: The Result of Syntactic Analysis of "I want to print Bill's .init file."

✓ One thing that many syntactic systems do (although not all) is create a set of entities we call reference
markers (RM). They are shown in parentheses in the parse tree. Each one corresponds
to some entity that has been mentioned in the sentence. These reference markers are
useful later since they provide a place in which to accumulate information about the
entities as we get it.
Semantic Analysis
✓ Semantic analysis must do two important things:
- It must map individual words into appropriate objects in the knowledge base or
database.
- It must create the correct structures to correspond to the way the meaning of the
individual words combine with each other.
✓ For this example, suppose that we have a frame-based knowledge base that contains the
units shown in Figure below.

Figure: A Knowledge Base Fragment

✓ Then we can generate a partial meaning, with respect to that knowledge base, as shown
in Figure below. Reference marker RM1 corresponds to the top-level event of the
sentence. It is a wanting event in which the speaker (denoted by “I”) wants a printing
event to occur in which the same speaker prints a file whose extension is “.init” and
whose owner is Bill.
Figure: A Partial Meaning for a Sentence (semantic analysis)

Discourse Integration
✓ At this point, we have figured out what kinds of things this sentence is about. But we
do not yet know which specific individuals are being referred to. Specifically, we do not
know to whom the pronoun “I” or the proper noun “Bill” refers. To pin down these
references requires an appeal to a model of the current discourse context, from which we
can learn that the current user is User068 and that the only person named “Bill” about
whom we could be talking is User073. Once the correct reference for Bill is known, we
can also determine exactly which file is being referred to: F1 is the only file with the
extension “.init” that is owned by Bill.
Pragmatic Analysis
✓ We now have a complete description, in the terms provided by our knowledge base, of
what was said. The final step toward effective understanding is to decide what to do as a
result. One possible thing to do is to record what was said as a fact and be done with it.
For some sentences, including this one, the intended effect is different. We can discover
this intended effect by applying a set of rules that characterize cooperative dialogues. In
this example, we use the fact that when the user claims to want something that the
system is capable of performing, then the system should go ahead and do it. This
produces the final meaning shown in Figure below.
Figure: Representing the Intended Meaning


✓ The final step in pragmatic processing is to translate from the knowledge-based representation to a command to be executed by the system. In this case, we see that the final result of the understanding process is:
lpr /wsmith/stuff.init
where "lpr" is the operating system's file print command.

➢ SYNTACTIC PROCESSING

✓ Syntactic processing is the step in which a flat input sentence is converted into a hierarchical
structure that corresponds to the units of meaning in the sentence. This process is called
parsing. It plays an important role in many natural language understanding systems for two
reasons:
- Semantic Processing must operate on sentence constituents. If there is no syntactic parsing
step, then the semantics system must decide on its own constituents. Syntactic parsing is
computationally less expensive than is semantic processing. Thus it can play a significant
role in reducing overall system complexity.
- Although it is often possible to extract the meaning of a sentence without using grammatical
facts, it is not always possible to do so. Consider, for example, the sentences
The satellite orbited Mars.
Mars orbited the satellite.
✓ In the second sentence, syntactic facts demand an interpretation in which a planet (Mars) revolves around a satellite, despite the evident impossibility of such a scenario.
✓ Although there are many ways to produce a parse, almost all the systems that are actually used
have two main components:
- A declarative representation, called a grammar, of the syntactic facts about the language.
- A procedure, called a parser that compares the grammar against input sentences to
produce parsed structures.

• Grammars and Parsers


✓ The most common way to represent grammars is as a set of production rules, as illustrated in Figure below, which shows a simple context-free, phrase-structure grammar for English. Read the first rule as, "A sentence is composed of a noun phrase followed by a verb phrase". In this grammar, the vertical bar should be read as "OR", and ε denotes the empty string.
✓ Symbols that are further expanded by rules are called nonterminal symbols. Symbols that
correspond directly to strings that must be found in an input sentence are called terminal
symbols.
Figure: A Simple Grammar for a Fragment of English

✓ The simplest structure to build is a parse tree, which simply records the rules and how they
are matched. Figure below shows the parse tree that would be produced for the sentence "Bill printed the file" using this grammar.

✓ Notice that every node of the parse tree corresponds either to an input word or to a non-
terminal in the grammar. Each level in the parse tree corresponds to the application of one
grammar rule. As a result, it should be clear that a grammar specifies two things about a
language:
- Its weak generative capacity, by which we mean the set of sentences that are contained within the language. This set (called the set of grammatical sentences) is made up of precisely those sentences that can be completely matched by a series of rules in the grammar.
- Its strong generative capacity, by which we mean the structure to be assigned to each
grammatical sentence of the language.

✓ Top-Down versus Bottom-Up Parsing


To parse a sentence, it is necessary to find a way in which that sentence could have been
generated from the start symbol. There are two ways that this can be done:
- Top-Down Parsing: Begin with the start symbol and apply the grammar rules forward until the symbols at the terminals of the tree correspond to the components of the sentence being parsed.
- Bottom-Up Parsing: Begin with the sentence to be parsed and apply the grammar rules backward until a single tree whose terminals are the words of the sentence and whose top node is the start symbol has been produced.
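A minimal top-down (recursive-descent) parser sketch in Python, for a hypothetical fragment of a grammar like the one above, parsing "Bill printed the file":

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["DET", "N"], ["PN"]],
    "VP": [["V", "NP"], ["V"]],
}
LEXICON = {"bill": "PN", "printed": "V", "the": "DET", "file": "N"}

def parse(symbol, words, i):
    """Try to expand `symbol` at position i; yield (tree, next position)."""
    if symbol in GRAMMAR:                     # nonterminal: apply rules forward
        for rule in GRAMMAR[symbol]:
            for subtrees, j in parse_seq(rule, words, i):
                yield (symbol, subtrees), j
    elif i < len(words) and LEXICON.get(words[i]) == symbol:
        yield (symbol, words[i]), i + 1       # terminal matches the input word

def parse_seq(symbols, words, i):
    if not symbols:
        yield [], i
        return
    for tree, j in parse(symbols[0], words, i):
        for rest, k in parse_seq(symbols[1:], words, j):
            yield [tree] + rest, k

words = "bill printed the file".split()
for tree, end in parse("S", words, 0):
    if end == len(words):                     # accept only complete parses
        print(tree)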

✓ Finding One Interpretation or Finding Many


Suppose, for example, that a sentence processor looks at the words of an input sentence one at a time, from left to right, and suppose that so far it has seen only the word "Have". There are two possibilities:
- "Have" is the main verb of an imperative sentence, such as
"Have the students who missed the exam take it today."
- "Have" is an auxiliary verb of an interrogative sentence, such as
"Have the students who missed the exam taken it today?"
✓ There are four ways of handling sentences such as these:
- All Paths: Follow all possible paths and build all the possible intermediate
components.
- Best Path with Backtracking: Follow only one path at a time, but record, at every
choice point, the information that is necessary to make another choice if the chosen path
fails to lead to a complete interpretation of the sentence.
- Best Path with Patch-up: Follow only one path at a time, but when an error is detected, explicitly shuffle around the components that have already been formed.
- Wait and See: Follow only one path, but rather than making decisions about the function of each component as it is encountered, postpone each decision until enough information is available to make it correctly.
✓ Parser Summary
There are many different kinds of parsing systems. There are three that have been used
fairly extensively in natural language systems:
- Chart parsers
- Definite clause grammars
- Augmented transition networks (or ATNs)


• Augmented Transition Networks


✓ An Augmented transition network (ATN) is a top-down parsing procedure that allows
various kinds of knowledge to be incorporated into the parsing system so it can operate
efficiently. The ATN is similar to a finite state machine in which the class of labels that can
be attached to the arcs that define transitions between states has been augmented. Arcs may
be labeled with an arbitrary combination of the following:
- Specific words, such as “in”
- Word categories, such as “noun”
- Pushes to other networks that recognize significant components of a sentence. For
example, a network designed to recognize a prepositional phrase (PP) may include an
arc that asks for (“pushes for”) a noun phrase (NP).
- Procedures that perform arbitrary tests on both the current input and on sentence
components that have already been identified.
- Procedures that build structures that will form part of the final parse.
✓ Figure below shows an example of an ATN in graphical notation.
Figure: An ATN Network for a Fragment of English

✓ Figure below shows the top-level ATN of that example in a notation that a program could
read.
Figure: An ATN Grammar in List Form


✓ To see how an ATN works, let us trace the execution of this ATN as it parses the following
sentence: The long file has printed.
✓ This execution proceeds as follows:
1. Begin in state S.
2. Push to NP.
3. Do a category test to see if "the" is a determiner.
4. This test succeeds, so set the DETERMINER register to DEFINITE and go to state Q6.
5. Do a category test to see if "long" is an adjective.
6. This test succeeds, so append “long” to the list contained in the ADJS register. (This list
was previously empty.) Stay in state Q6.
7. Do a category test to see if “file” is an adjective. This test fails.
8. Do a category test to see if “file” is a noun. This test succeeds, so set the NOUN register
to “file” and go to state Q7.
9. Push to PP.
10. Do a category test to see if “has” is a preposition. This test fails so pop and signal failure.
11. There is nothing else that can be done from state Q7, so pop and return the structure (NP
(FILE (LONG) DEFINITE)). The return causes the machine to be in state Q1, with the
SUBJ register set to the structure just returned and the TYPE register set to DCL.
12. Do a category test to see if "has" is a verb. This test succeeds, so set the AUX register to NIL and set the V register to "has". Go to state Q4.
13. Push to state NP. Since the next word, “printed”, is not a determiner or a proper noun,
NP will pop and return failure.
14. The only other thing to do in state Q4 is to halt. But more input remains, so a complete
parse has not been found. Backtracking is now required.
15. The last choice point was at state Q1, so return there. The registers AUX and V must be
unset.
16. Do a category test to see if “has” is an auxiliary. This test succeeds, so set the AUX
register to “has” and go to state Q3.
17. Do a category test to see if “printed” is a verb. This test succeeds, so set the V register to
“printed”. Go to state Q4.
18. Now, since the input is exhausted, Q4 is an acceptable final state. Pop and return the
structure (S DCL (NP (FILE (LONG) DEFINITE)) HAS (VP PRINTED))
This structure is the output of the parse.

• Unification Grammars
Here, we describe a declarative approach to representing grammars.
When a parser applies grammar rules to a sentence, it performs two major kinds of operations:
- Matching (of sentence constituents to grammar rules)
- Building structure (corresponding to the result of combining constituents)
✓ Matching and structure building are operations that unification performs naturally. So an
obvious candidate for representing grammars is some structure on which we can define a
unification operator.
✓ Directed acyclic graphs (DAGs) can do exactly that.
✓ Each DAG represents a set of attribute-value pairs. For example, the graphs corresponding to the words "the" and "file" are:

[CAT: DET
LEX: the]

[CAT: N
LEX: file
NUMBER: SING]
✓ Both words have a lexical category (CAT) and a lexical entry. In addition, the word “file” has
a value (sing) for the NUMBER attribute. The result of combining these two words to form a
simple NP can also be described as a graph:
[NP: [DET: the
HEAD: file
NUMBER: SING]]
✓ It is this degenerate kind of a label that we need in order to describe the NP rule:


NP → DET N
We can write this as the following graph:
[CONSTITUENT 1: [CAT: DET
LEX: {1}]
CONSTITUENT 2: [CAT: N
LEX: {2}
NUMBER: {3}]
BUILD: [NP: [DET: {1}
HEAD: {2}
NUMBER: {3}]]]

This rule should be read as follows: Two constituents, described in the subgraphs labeled CONSTITUENT 1 and CONSTITUENT 2, are to be combined. The first must be of CAT DET. We do not care what its lexical entry is, but whatever it is will be bound to the label {1}. The second constituent must be of CAT N. Its lexical entry will be bound to the label {2}, and its number will be bound to the label {3}. The result of combining these two constituents is described in the subgraph labeled BUILD. This result will be a graph corresponding to an NP with three attributes: DET, HEAD, and NUMBER. The values for all these attributes are to be taken from the appropriate pieces of the graphs that are being combined by the rule.

Algorithm: Graph-Unify

1. If either G1 or G2 is an atomic value rather than a set of attribute-value pairs, then:
a) If the two values conflict (as defined above), then fail.
b) If either is a variable, then bind it to the value of the other and return that value.
c) Otherwise, return the most general value that is consistent with both of the original values. Specifically, if disjunction is allowed, then return the intersection of the values.
2. Otherwise, do:
a) Set variable NEW to empty.
b) For each attribute A that is present (at the top level) in either G1 or G2 do
i) If A is not present at the top level in the other input, then add A and its value to NEW.
ii) If it is, then call Graph-Unify with the two values for A. If that fails, then fail. Otherwise,
take the new value of A to be the result of that unification and add A with its value to
NEW.
c) If there are any labels attached to G1 or G2, then bind them to NEW and return NEW.
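A Python sketch of Graph-Unify, encoding attribute-value graphs as dicts, atomic values as strings, and the {1}, {2}, ... labels as variable strings resolved through a `bindings` table (a simplification of full DAG unification):

def graph_unify(g1, g2, bindings):
    # Case 1: at least one side is an atomic value or a variable label.
    if not isinstance(g1, dict) or not isinstance(g2, dict):
        if not isinstance(g1, dict):
            g1 = bindings.get(g1, g1)   # dereference a bound variable
        if not isinstance(g2, dict):
            g2 = bindings.get(g2, g2)
        if isinstance(g1, str) and g1.startswith("{"):
            bindings[g1] = g2           # bind a variable like "{1}"
            return g2
        if isinstance(g2, str) and g2.startswith("{"):
            bindings[g2] = g1
            return g1
        if g1 != g2:
            raise ValueError(f"conflict: {g1} vs {g2}")
        return g1
    # Case 2: both are graphs; unify attribute by attribute.
    new = {}
    for attr in sorted(set(g1) | set(g2)):
        if attr not in g2:
            new[attr] = g1[attr]
        elif attr not in g1:
            new[attr] = g2[attr]
        else:
            new[attr] = graph_unify(g1[attr], g2[attr], bindings)
    return new

bindings = {}
det = {"CAT": "DET", "LEX": "the"}
rule_c1 = {"CAT": "DET", "LEX": "{1}"}
print(graph_unify(det, rule_c1, bindings))   # {'CAT': 'DET', 'LEX': 'the'}
print(bindings)                              # {'{1}': 'the'}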

➢ SEMANTIC ANALYSIS

✓ We must still produce a representation of the meaning of the sentence. Because understanding is a mapping process, we must first define the language into which we are trying to map. We call the final meaning representation language, including both the representational framework and the specific meaning vocabulary, the target language.

Lexical Processing
✓ The first step in any semantic processing system is to look up the individual words in a
dictionary (or lexicon) and extract their meaning. Many words have several meanings, and
it may not be possible to choose the correct one just by looking at the word itself. For
example, the word “diamond” might have the following set of meanings:
- A geometrical shape with four equal sides.
- A baseball field
- An extremely hard and valuable gemstone
✓ To select the correct meaning for the word "diamond" in the sentence
"John saw Susan's diamond shimmering from across the room."
it is necessary to know that neither geometrical shapes nor baseball fields shimmer, whereas gemstones do.
✓ The process of determining the correct meaning of an individual word is called word
sense disambiguation or lexical disambiguation. It is done by associating, with each word
in the lexicon, information about the contexts in which each of the word’s senses may
appear. Each of the words in a sentence can serve as part of the context in which the
meanings of the other words must be determined.
✓ Sometimes only very straightforward information about each word sense is necessary. For example, the baseball-field interpretation of "diamond" could be marked as a LOCATION. Then the correct meaning of "diamond" in the sentence "I'll meet you at the diamond" could easily be determined if the fact that "at" requires a TIME or a LOCATION as its object were recorded as part of the lexical entry for "at". Such simple properties of word senses are called semantic markers. Other useful semantic markers are
- PHYSICAL_OBJECT
- ANIMATE_OBJECT
- ABSTRACT_OBJECT
✓ Using these markers, the correct meaning of “diamond” in the sentence “I dropped my
diamond” can be computed. As part of its lexical entry, the verb “drop” will specify that its
object must be a PHYSICAL_OBJECT. The gemstone meaning of “diamond” will be
marked as a PHYSICAL_OBJECT. So it will be selected as the appropriate meaning in this
context.
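A toy Python sketch of lexical disambiguation with semantic markers; the lexicon and the requirements table below are hypothetical fragments:

LEXICON = {
    "diamond": [
        {"sense": "geometrical-shape", "markers": {"ABSTRACT_OBJECT"}},
        {"sense": "baseball-field",    "markers": {"LOCATION"}},
        {"sense": "gemstone",          "markers": {"PHYSICAL_OBJECT"}},
    ],
}
# What each governing word requires of its object (also hypothetical).
REQUIREMENTS = {"drop": {"PHYSICAL_OBJECT"}, "at": {"LOCATION", "TIME"}}

def disambiguate(word, governor):
    required = REQUIREMENTS[governor]
    for sense in LEXICON[word]:
        if sense["markers"] & required:   # any shared marker fits the context
            return sense["sense"]
    return None

print(disambiguate("diamond", "drop"))   # -> gemstone ("I dropped my diamond")
print(disambiguate("diamond", "at"))     # -> baseball-field ("meet you at the diamond")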
Sentence-Level Processing
✓ Several approaches to the problem of creating a semantic representation of a sentence have
been developed, including the following:
- Semantic grammars, which combine syntactic, semantic, and pragmatic knowledge into
a single set of rules in the form of a grammar.
- Case grammars, in which the structure that is built by the parser contains some semantic
information.
- Conceptual parsing, in which syntactic and semantic knowledge combined into a single
interpretation system that is driven by the semantic knowledge.
- Approximately compositional semantic interpretation, in which semantic processing
is applied to the result of performing a syntactic parse.

• Semantic Grammars
✓ A semantic grammar is a context-free grammar in which the choice of nonterminals and production rules is governed by semantic as well as syntactic function. In addition, there
is usually a semantic action associated with each grammar rule. The result of parsing and
applying all the associated semantic actions is the meaning of the sentence. An example of a
fragment of a semantic grammar is shown in Figure below. This grammar defines part of a
simple interface to an operating system.
Figure: A Semantic Grammar

✓ A semantic grammar can be used by a parsing system in exactly the same ways in which a
strictly syntactic grammar could be used. Figure below shows the result of applying this
semantic grammar to the sentence "I want to print Bill's .init file."
Figure: The Result of Parsing with a Semantic Grammar


The principal advantages of semantic grammars are the following:


- When the parse is complete, the result can be used immediately without the additional
stage of processing.
- Many ambiguities that would arise during a strictly syntactic parse can be avoided since
some of the interpretations do not make sense semantically.
- Syntactic issues that do not affect the semantics can be ignored.

• Case Grammars
✓ Case grammars provide a different approach to the problem of how syntactic and semantic
interpretation can be combined. Grammar rules are written to describe syntactic rather
than semantic regularities. But the structures the rules produce correspond to semantic
relations rather than to strictly syntactic ones. As an example, consider the two sentences and
the simplified forms of their conventional parse trees shown in Figure below.
Although the semantic roles of “Susan” and “the file” are identical in these two sentences,
their syntactic roles are reversed. Each is the subject in one sentence and the object in
another. Using a case grammar, the interpretations of the two sentences would both be
(Printed (agent Susan)
(Object File))
Figure: Syntactic Parses of an Active and a Passive Sentence.


Now consider the two sentences shown in Figure below:


Figure: Syntactic Parses of Two Similar Sentences

✓ The syntactic structures of these two sentences are almost identical. In one case, “Mother” is
the subject of “baked”, while in the other “the pie” is the subject. But the relationship
between Mother and baking is very different from that between the pie and baking. A case
grammar analysis of these two sentences reflects this difference. The first sentence would be
interpreted as
(Baked (agent Mother)
(time-period 3-hours))
The second would be interpreted as
(Baked (object Pie)
(time-period 3-hours))

There is no clear agreement on exactly what the correct set of deep cases ought to be, but some
obvious ones are the following:
(A) Agent - Instigator of the action (typically animate)
(I) Instrument - Cause of the event or object used in causing the event (typically inanimate)
(D) Dative - Entity affected by the action (typically animate)
(F) Factitive - Object or being resulting from the event
(L) Locative - Place of the event
(S) Source - Place from which something moves
(G) Goal - Place to which something moves
(B) Beneficiary - Being on whose behalf the event occurred (typically animate)
(T) Time - Time at which the event occurred
(O) Object - Entity that is acted upon or that changes; the most general case

✓ The process of parsing into a case representation is heavily directed by the lexical entries
associated with each verb. Figure below shows examples of a few such entries. Optional cases
are indicated in parentheses.
Figure: Some Verb Case Frames
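A small Python sketch of how verb case frames might direct interpretation; the frames and the two sentence analyses below are hypothetical simplifications:

CASE_FRAMES = {
    # cases each verb licenses (a real frame would also mark which are required)
    "bake":  {"agent", "object", "time-period"},
    "print": {"agent", "object"},
}

def build_interpretation(verb, fillers):
    allowed = CASE_FRAMES[verb]
    extra = fillers.keys() - allowed
    if extra:
        raise ValueError(f"cases not licensed by '{verb}': {extra}")
    return (verb, sorted(fillers.items()))

# "Mother baked for three hours" vs. "The pie baked for three hours"
print(build_interpretation("bake", {"agent": "Mother", "time-period": "3-hours"}))
print(build_interpretation("bake", {"object": "Pie", "time-period": "3-hours"}))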

• Conceptual Parsing
✓ Conceptual parsing, like semantic grammars, is a strategy for finding both the structure
and the meaning of a sentence in one step. Conceptual parsing is driven by a dictionary that
describes the meaning of words as conceptual dependency (CD) structures.
✓ Parsing a sentence into a conceptual dependency representation is similar to the process of parsing using a case grammar. The first step in mapping a sentence into its CD representation involves a syntactic processor that extracts the main noun and verb. It also determines the syntactic category and aspectual class of the verb (i.e., stative, transitive, or intransitive). The conceptual processor then takes over. It makes use of a verb-ACT dictionary, which contains an entry for each environment in which a verb can appear. Figure below shows the dictionary entries associated with the verb "want". These three entries correspond to the three kinds of wanting:
Figure: The Verb-ACT Dictionary

- Wanting something to happen
- Wanting an object
- Wanting a person
✓ Once the correct dictionary entry is chosen, the conceptual processor analyzes the rest of the sentence, looking for components that will fit into the empty slots of the verb structure. For example, if the stative form of "want" has been found, then the conceptual processor will look for a conceptualization that can be inserted into the structure. So, if the sentence being processed were "John wanted Mary to go to the store," the structure shown in Figure below would be built.
Figure: A CD Structure

✓ The conceptual processor examines possible interpretations in a well-defined order. For


example, if a phrase of the form "with PP" occurs, it could indicate any of the following
relationships between the PP and the conceptualization of which it is a part:

1. Object of the instrumental case


2. Additional actor of the main ACT
3. Attribute of the PP just preceding it
4. Attribute of the actor of the conceptualization

Figure: Two CD Interpretations of the sentence "John went to the park with the peacock"

• Approximately Compositional Semantic Interpretation


✓ The final approach to semantics that we consider here is one in which syntactic parsing and semantic interpretation are treated as separate steps, as when we worked through the example sentence "I want to print Bill's .init file".
✓ If a strictly syntactic parse of a sentence has been produced, then a straightforward way to
generate a semantic interpretation is the following:
- Look up each word in a lexicon that contains one or more definitions for the word, each
stated in terms of the chosen target representation.
- Use the structure information contained in the output of the parser to provide additional
constraints, beyond those extracted from the lexicon, on the way individual words may
combine to form larger meaning units.

Montague Semantics
✓ Although the exact form of semantic mapping rules in this approach depends on the way that
the syntactic grammar is defined, we illustrate the idea of compositional semantic rules in
Figure below.
Figure: Some Semantic Interpretation Rules

Figure: Combining Mapping Knowledge with the Knowledge Base


Extended Reasoning with a Knowledge Base


✓ A significant amount of world knowledge may be necessary in order to do semantic
interpretation (and thus, sometimes, to get the correct syntactic parse). Sometimes the
knowledge is needed to enable the system to choose among competing interpretations.
Consider, for example, the sentences
1. John made a huge wedding cake with chocolate icing.
2. John made a huge wedding cake with Bill's mixer.
3. John made a huge wedding cake with a giant tower covered with roses.
4. John made a cherry pie with a giant tower covered with roses.

✓ Let us concentrate on the problem of deciding to which constituent the prepositional phrase should be attached and of assigning a meaning to the preposition "with". We have
two main choices: either the phrase attaches to the action of making the cake and "with"
indicates the instrument relation, or the prepositional phrase attaches to the noun phrase
describing the dessert that was made, in which case "with" describes an additional
component of the dessert. The first two sentences are relatively straightforward if we imagine
that our knowledge base contains the following facts:
- Foods can be components of other foods.
- Mixers are used to make many kinds of desserts.
✓ But now consider the third sentence. A giant tower is neither a food nor a mixer. So it is not a
likely candidate for either role. What is required here is the much more specific (and
culturally dependent) fact that
- Wedding cakes often have towers and statues and bridges and flowers on them.

✓ Another use for knowledge is to enable the system to accept meanings that it has not
been explicitly told about. Consider the following sentences as examples:
1. Sue likes to read Joyce.
2. Washington backed out of the summit talks.
3. The stranded explorer ate squirrels.

✓ Suppose our system has only the following meanings for the words "Joyce", "Washington", and "Squirrel":
1. Joyce - instance: Author; last-name: Joyce
2. Washington - instance: City; name: Washington
3. Squirrel - isa: Rodent

✓ But suppose that we also have only the following meanings for the verbs in these sentences:
1. Read - isa: Mental-event; object: must be <printed-material>
2. Back out - isa: Mental-event; agent: must be <animate-entity>
3. Eat - isa: Ingestion-event; object: must be <food>

The Interaction between Syntax and Semantics


✓ If we take a compositional approach to semantics, then we apply semantic interpretation rules
to each syntactic constituent, eventually producing an interpretation for an entire sentence.
But making a commitment about what to do implies no specific commitment about when to do it. To implement a system, however, we must decide how control will be passed back and forth between the syntactic and the semantic processors. Two extreme positions are:
- Every time a syntactic constituent is formed, apply semantic interpretation to it
immediately.
- Wait until the entire sentence has been parsed, and then interpret the whole thing.

➢ DISCOURSE AND PRAGMATIC PROCESSING

✓ To understand even a single sentence, it is necessary to consider the discourse and pragmatic
context in which the sentence was uttered. These issues become even more important when we
want to understand texts and dialogues. There are number of important relationships that
may hold between phrases and parts of their discourse contexts, including:
Identical entities. Consider the text
- Bill had a red balloon.
- John wanted it.
The word "it" should be identified as referring to the red balloon. References such
as this are called anaphoric references or anaphora.
Parts of entities. Consider the text
- Sue opened the book she just bought.
- The title page was torn.
The phrase "the title page" should be recognized as being part of the book that was just
bought.
Parts of actions. Consider the text
- John went on a business trip to New York.
- He left on an early morning flight.
Taking a flight should be recognized as part of going on a trip.
Entities involved in actions. Consider the text
- My house was broken into last week.
- They took the TV and the stereo.
The pronoun "they" should be recognized as referring to the burglars who broke into
the house.
Elements of sets. Consider the text
- The decals we have in stock are stars, the moon, item and a flag.
- I'll take two moons.
The moons in the second sentence should be understood to be some of the moons
mentioned in the first sentence. Notice that to understand the second sentence at all
requires that we use the context of the first sentence to establish that the word "moons"
means moon decals.
Names of individuals. Consider the text
- Dave went to the movies.
Dave should be understood to be some person named Dave. Although there are many,

the speaker had one particular one in mind and the discourse context should tell us
which.
Causal chains. Consider the text
- There was a big snow storm yesterday.
- The schools were closed today.
The snow should be recognized as the reason that the schools were closed.
Planning sequences. Consider the text
- Sally wanted a new car.
- She decided to get a job.
Sally's sudden interest in a job should be recognized as arising out of her desire for a
new car and thus for the money to buy one.
Illocutionary force. Consider the sentence
- It sure is cold in here.
In many circumstances, this sentence should be recognized as having, as its intended
effect, that the hearer should do something like close the window or turn up the
thermostat.
Implicit presuppositions. Consider the query
- Did Joe fail CS101?
The speaker's presuppositions, including the fact that CS101 is a valid course, that Joe is
a student, and that Joe took CS 101, should be recognized so that if any of them is not
satisfied, the speaker can be informed.
• Using Focus in Understanding
✓ There are two important parts of the process of using knowledge to facilitate
understanding:
- Focus on the relevant part(s) of the available knowledge base.
- Use that knowledge to resolve ambiguities and to make connections among things that
were said.
✓ Consider, for example, a paragraph of instructions for making a cake with a filling.
This task decomposes into three subtasks: making the cake, making the filling, and
combining the two components. The structure of the paragraph of instructions is: overall
sketch of the task, instructions for step 1, instructions for step 2, and then instructions for
step 3.
✓ A second property of coherent discourse is that dramatic changes of focus are usually
signaled explicitly with phrases such as "on the other hand", "to return to an earlier
topic", or "a second issue is".
✓ Assuming that all this knowledge has been used successfully to focus on the relevant part(s)
of the knowledge base, the second issue is how to use the focused knowledge to help in
understanding. There are as many ways of doing this as there are discourse phenomena that
require it. In the last section, we presented a sample list of those phenomena.

• Modeling Beliefs
✓ In order for a program to be able to participate intelligently in a dialogue, it must be able to
represent not only its own beliefs about the world, but also its knowledge of the other
dialogue participant's beliefs about the world, that person's beliefs about the computer's
beliefs and so forth. The remark "She knew I knew she knew I knew she knew" may be a bit
extreme, but we do that kind of thinking all the time.

Modeling Shared Beliefs


✓ Shared beliefs can be modeled without any explicit notion of belief in the knowledge
base. All we need to do is represent the shared beliefs as facts, and they will be accessed
whenever knowledge about anyone's beliefs is needed.
Scripts, which capture shared knowledge of stereotyped situations, are one such representation.
There are two steps in the process of using a script to aid in language understanding:
- Select the appropriate script(s) from memory.
- Use the script(s) to fill in unspecified parts of the text to be understood.

Modeling Individual Beliefs


✓ As soon as we decide to represent individual beliefs, we need to introduce some
explicit predicate(s) to indicate that a fact is believed. Up until now, belief has been
indicated only by the presence or absence of assertions in the knowledge base. To model
belief, we need to move to a logic that supports reasoning about belief propositions. Standard,
or "classical", logic deals with the truth or falsehood of different statements as they are.
Modal logic, on the other hand, concerns itself with the different "modes" in which a
statement may be true. Modal logics allow us to talk about the truth of a set of
propositions not only in the current state of the real world, but also about their truth or
falsehood in the past or the future (these are called temporal logics), and about their truth
or falsehood under circumstances that might have been, but were not (these are
sometimes called conditional logics).
✓ In particular, to model individual belief we define a modal operator BELIEVE, that
enables us to make assertions of the form BELIEVE (A, P), which is true whenever A
believes P to be true. Notice that this can occur even if P is believed by someone else to
be false or even if P is false.
Another useful modal operator is KNOW, which holds when a belief is in fact true:
BELIEVE(A, P) ∧ P → KNOW(A, P)
✓ A third useful modal operator is KNOW-WHAT (A, P), which is true if A knows the
value of the function P. For example, we might say that A knows the value of his age.
✓ An alternative way to represent individual beliefs is to use the idea of knowledge base
partitioning. Partitioning enables us to do two things:
1. Represent efficiently the large set of beliefs shared by the participants. We discussed
one way of doing this above.
2. Represent accurately the smaller set of beliefs that are not shared.
✓ Figure below shows an example of a partitioned belief space.
Three different belief spaces are shown:
- S1 believes that Mary hit Bill.
- S2 believes that Sue hit Bill.
- S3 believes that someone hit Bill. It is important to be able to handle incomplete
beliefs of this kind, since they frequently serve as the basis for questions, such as, in
this case, "Who hit Bill?"
Figure: A Partitioned Semantic Net Showing Three Belief Spaces
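
Since the partitioned-net figure itself is not reproduced here, the following is a minimal
Python sketch of how such a partitioned belief space might be represented. The tuple
encoding, the shared facts and the helper name are illustrative assumptions, not the actual
partitioned semantic net notation.

    # Facts encoded as (predicate, agent, object) tuples, an illustrative
    # convention standing in for the partitioned semantic net.
    shared_space = {
        ("isa", "Mary", "Person"),
        ("isa", "Bill", "Person"),
    }

    # One partition per believer; "someone" stands for S3's incomplete belief.
    belief_spaces = {
        "S1": {("hit", "Mary", "Bill")},
        "S2": {("hit", "Sue", "Bill")},
        "S3": {("hit", "someone", "Bill")},
    }

    def believes(agent, fact):
        # Shared facts are stored once and are visible from every partition;
        # private beliefs live only in the believer's own partition.
        return fact in shared_space or fact in belief_spaces.get(agent, set())

    print(believes("S1", ("hit", "Mary", "Bill")))   # True
    print(believes("S2", ("hit", "Mary", "Bill")))   # False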

• Using Goals and Plans for Understanding


✓ Consider the text
John was anxious to get his daughter's new bike put together before Christmas
Eve. He looked high and low for a screwdriver.
✓ To understand this story, we need to recognize that John had
1. A goal, getting the bike put together.
2. A plan, which involves putting together the various subparts until the bike is complete.
At least one of the resulting sub plans involves using a screwdriver to screw two parts
together.
✓ Some of the common goals that can be identified in stories of all sorts (including children's
stories, newspaper reports, and history books) are
- Satisfaction goals, such as sleep, food, and water.
- Enjoyment goals, such as entertainment and competition.
- Achievement goals, such as possession, power, and status.
- Preservation goals, such as health and possessions.
- Pleasing goals, which involve satisfying some other kind of goal for someone else.
- Instrumental goals, which enable preconditions for other, higher-level goals.

✓ To achieve their goals, people exploit plans. For example, to understand this simple text
about John, we need to make use of the fact that John was exploiting the operator USE (by A
of P to perform G), which can be described as:
USE (A, P, G):
Precondition: KNOW-WHAT (A, LOCATION (P))
NEAR (A,P)
HAS-CONTROL-OF (A, P)
READY (P)
Post condition: DONE (G)
✓ In other words, for A to use P to perform G, A must know the location of P, A must be near
P, A must have control of P (for example, I cannot use a screwdriver that you are holding
and refuse to give to me), and P must be ready for use (for example, I cannot use the
broken screwdriver).
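
As a concrete illustration, here is a minimal Python sketch of representing such an operator
as data and checking its preconditions against a set of known facts. The dictionary structure
and the applicable helper are illustrative assumptions; the speech-act operators defined in
the next section (INFORM, ASK-WHAT, REQUEST) can be encoded the same way.

    # The USE operator from the text as precondition/postcondition lists
    # over predicate strings (an illustrative encoding).
    USE = {
        "name": "USE(A, P, G)",
        "preconditions": [
            "KNOW-WHAT(A, LOCATION(P))",
            "NEAR(A, P)",
            "HAS-CONTROL-OF(A, P)",
            "READY(P)",
        ],
        "postconditions": ["DONE(G)"],
    }

    def applicable(operator, known_facts):
        # The operator can be executed only when every precondition holds.
        return all(p in known_facts for p in operator["preconditions"])

    facts = {"KNOW-WHAT(A, LOCATION(P))", "NEAR(A, P)",
             "HAS-CONTROL-OF(A, P)", "READY(P)"}
    print(applicable(USE, facts))   # True: A can use P to perform G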

• Speech Acts
✓ Language is a form of behavior. We use it as one way to accomplish our goals. In essence,
we make communicative plans in much the same sense that we make plans for anything else.
In fact, John could have achieved his goal of locating a screwdriver by asking someone
where it was rather than by looking for it. The elements of communicative plans are
called speech acts. For example, we can define the basic speech act A INFORM B of P as
follows:
INFORM (A, B, P)
Precondition: BELIEVE (A, P)
KNOW-WHAT (A, LOCATION (B))
Post condition: BELIEVE (B, BELIEVE (A, P))
BELIEVE-IN (B, A) → BELIEVE (B, P)
✓ To execute this operation, A must believe P and A must know where B is. The result of this
operator is that B believes that A believes P, and if B believes in the truth of what A says,
then B also believes P.
✓ We can define other speech acts similarly. For example, we can define ASK-WHAT (in
which A asks B the value of some predicate P):
ASK-WHAT (A, B, P)
Pre condition: KNOW-WHAT (A, LOCATION (B))
KNOW-WHAT (B, P)
WILLING-TO-PERFORM (B, INFORM (B, A, P))
Post condition: KNOW-WHAT (A, P)
✓ This is the action that John could have performed as an alternative way of finding a
screwdriver. We can also define other speech acts, such as A REQUEST B to perform R:
REQUEST (A, B, R)
Pre condition: KNOW-WHAT (A, LOCATION (B))
CAN-PERFORM (B, R)
WILLING-TO-PERFORM (B, R)
Post condition: WILL (PERFORM (B, R))

• Conversational Postulates
✓ This analysis of language is complicated by the fact that we do not always say exactly what
we mean. Instead, we often use indirect speech acts, such as "Do you know what time it
is?" or "It sure is cold in here". Computational treatments of this phenomenon usually rely
on models of the speaker's goals and of ways that those goals might reasonably be
achieved by using language.
✓ Fortunately, there is a certain amount of regularity in people's goals and in the way language
can be used to achieve them. This regularity gives rise to a set of conversational postulates,
which are rules about conversation that are shared by all speakers. Usually these rules are
followed. Sometimes they are not, but when this happens, the violation of the rules
communicates something in itself. Some of these conversational postulates are:
Sincerity Conditions: For a request by A of B to do R to be sincere, A must want B to
do R, A must assume B can do R, A must assume B is willing to do R, and A must
believe that B would not have done R anyway. If A attempts to verify one of these
conditions by asking a question of B, that question should normally be interpreted by B
as equivalent to the request R. For example
A: Can you open the door?
Reasonableness Conditions: For a request by A of B to do R to be reasonable, A must
have a reason for wanting R done, A must have a reason for assuming that B can do R, A
must have a reason for assuming that B is willing to do R, and A must have a reason for
assuming that B was not already planning to do R. Reasonableness conditions described
above, they account for the coherence of the following interchange:
A: Can you open the door? B:
Why do you want it open?
Appropriateness Conditions: For a statement to be appropriate, it must provide the correct
amount of information, it must accurately reflect the speaker's beliefs, it must be concise
and unambiguous, and it must be polite. These conditions account for A's response in
the following interchange:
A: Who won the race?
B: Someone with long, dark hair.
A: I thought you knew all the runners.
✓ A inferred from B's incomplete response that B did not know who won the race, because if B
had known she would have provided a name. Of course, sometimes people "cop out" of
these conventions. In the following dialogue, B is explicitly copping out:
A: Who is going to be nominated for the position?
B: I'm sorry, I cannot answer that question.

➢ STATISTICAL NATURAL LANGUAGE PROCESSING

• Corpora
✓ The term "corpus" is derived from the Latin word meaning "body". The term could be used
to define a collection of written text or spoken words of a language. In general a corpus could
be defined as a large collection of segments of a language. These segments are selected and
ordered based on some explicit linguistic criteria so that they may be used to depict a sample
of that language. Corpora maybe available in the form of a collection of raw text or in a
more sophisticated annotated or marked-up form wherein information about the words is
also included to ease the process of language processing.

✓ Several kinds of corpora exist. These include ones containing written or spoken language,
new or old texts, and texts from either one or several languages. Textual content could mean
the content of a complete book or books, newspapers, magazines, Web pages, journals,
speeches, etc. The British National Corpus (BNC), for instance, is said to have a collection of
around a hundred million written and spoken language samples. Some corpora may contain
texts on a particular domain of study or a dialect. Such corpora are called Sublanguage
Corpora. Others may focus specifically on select areas like medicine, law, literature, novels,
etc.

➢ SPELL CHECKING

✓ A spell checker is one of the basic tools required for language processing. It is used in a wide
variety of computing environments including word processing, character or text recognition
systems, speech recognition and generation. Spell checking is one of the pre-processing
formalities for most natural language processes. Studies on computer-aided spell checking date
back to the early 1960s and, as spell checking is applied to new languages, it continues to be
one of the challenging areas in information processing. Spell checking involves identifying
non-words and suggesting the possible alternatives for their correction. Most
available spell checkers focus on processing isolated words and do not take into account the
context. For instance, if you try typing:
"Henry sar on the box"
in Microsoft Word 2003 and see what suggestions it offers, you will find that the correct word
sat is missing. Now try typing this:
"Henry at on the box"
Here you will find that the error remains undetected as the word at is spelt correctly as an
isolated word. Observe that context plays a vital role in spell checking.

• Spelling Errors
✓ A survey of misspelled words found that most of the non-words were
the result of single-error misspellings. Based on this survey, the three causes of
error are:
- Insertion: Insertion of an extra letter while typing. E.g. maximum typed as maxiimum.
The extra i has been inserted within the word.
- Deletion: A case of a letter missing or not typed in a word. E.g. netwrk instead of
network.
- Substitution: Typing of a letter in place of the correct one as in intellugence wherein the
letter i has been wrongly substituted by u.
✓ Spelling errors may be classified into the following types-
- Typographic errors: As the name suggests, these errors are those that are caused due to
mistakes committed while typing. A typical example is netwrk instead of network.
- Orthographic errors: These, on the other hand, result from a lack of comprehension of
the concerned language on the part of the user. Examples of such spelling errors are
arithematic, wellcome and accomodation.
- Phonetic errors: These result from poor cognition on the part of the listener. The word
rough could be spelt as ruff and listen as lisen. Note that both the misspelled words ruff
and lisen have the same phonetic pronunciation as their actual spellings. Such errors may
distort the misspelled word more than a single typographic editing action
(viz. an insertion, deletion, transposition, or substitution), as in the case of ruff. Words like
piece, peace and peas, reed and read, and quite and quiet may all be spelt correctly but
can lead to confusion depending on the context.

• Spell Checking Techniques


✓ One could imagine a naive spell checker as a large corpus of correct words. Thus, if a word in the
text being corrected does not match one in the corpus, it is reported as a spelling error. An
exhaustive corpus would of course be a mandatory requirement.
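
The following is a minimal Python sketch of such a naive checker; the small corpus_words
set stands in for the exhaustive corpus the text calls for.

    # A toy stand-in for an exhaustive corpus of correct words.
    corpus_words = {"henry", "sat", "on", "the", "box"}

    def is_misspelled(word):
        # Any word not found in the corpus is reported as a spelling error.
        return word.lower() not in corpus_words

    print([w for w in "Henry sar on the box".split() if is_misspelled(w)])
    # -> ['sar']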
✓ Spell checking techniques can be broadly classified into three categories:

(a) Non-Word Error Detection


✓ The process involves the detection of misspelled words or non-words. For example, the
word soper is a non-word, its correct form being super (or maybe sober).
✓ The most commonly used techniques to detect such errors are N-gram analysis and
dictionary look-up. As discussed earlier, N-gram techniques make use of the
probabilities of occurrence of N-grams in a large corpus of text to decide on the error in
the word. Those strings that contain highly infrequent sequences are treated as probable
cases of spelling errors.
✓ Note that in the context of spell checkers we take N-grams to be a sequence of letters
(alphabet) rather than words. Here we try to predict the next letter rather than
the next word. These techniques have often been used in text (handwritten or printed)
recognition systems, where the text is processed by an Optical Character Recognition (OCR)
system. The OCR uses features of each character, such as the curves and the loops made by
them, to identify the character. Quite often these OCR methods lead to errors. The digit
0 and the letters O and D are quite often sources of error as they look alike. This calls for
a spell checker that can post-process the OCR output. One common N-gram approach uses
tables that record whether a sequence of characters exists within the corpora, and flags
an error when a word contains a sequence that does not. Dictionary look-up involves the use
of an efficient dictionary lookup coupled with pattern-matching algorithms (such as hashing
techniques, finite state automata, etc.), dictionary partitioning schemes and morphological
processing methods.
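
The following is a minimal Python sketch of letter-bigram non-word detection; the tiny
corpus is illustrative, and a real system would build the table from a large corpus (and
could flag low-probability rather than only unseen bigrams).

    # Build a table of letter bigrams seen in a (toy) corpus of correct words.
    corpus_words = ["network", "work", "word", "worker", "networks"]

    valid_bigrams = set()
    for w in corpus_words:
        padded = "#" + w + "#"            # '#' marks the word boundaries
        valid_bigrams.update(padded[i:i + 2] for i in range(len(padded) - 1))

    def flag(word):
        # Flag the word if it contains a bigram never seen in the corpus.
        padded = "#" + word + "#"
        return any(padded[i:i + 2] not in valid_bigrams
                   for i in range(len(padded) - 1))

    print(flag("netwrk"))    # True: the bigram 'wr' never occurs in the corpus
    print(flag("network"))   # False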

(b) Isolated-Word Error Correction


✓ This process focuses on the correction of an isolated non-word by finding its nearest
meaningful word and makes an attempt to rectify the error. It thus transforms the word
soper into super by some means but without looking into the context.
✓ This correction is usually performed as a context-independent suggestion generation
exercise. The techniques employed herein include the minimum edit distance techniques,
similarity key techniques, rule-based methods, N-gram, probabilistic and neural network
based techniques (Kukich 1992).
✓ Isolated-word error correction may be looked upon as a combination of three sub-
problems: error detection, candidate (correct word) generation and ranking of the
correct candidates. Error detection, as already mentioned, could use either the dictionary
or the N-gram approach. The possible correct candidates are found using a dictionary or
by looking up a pre-processed database of correct N-grams. Ranking of these candidates is
done by measuring the lexical or similarity distance between the misspelled word and the
candidate.
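
These three sub-problems can be sketched as a small Python pipeline, assuming a dictionary
set and an edit_distance function such as the Levenshtein sketch in the next subsection; the
names and the distance cutoff are illustrative.

    def correct_word(word, dictionary, edit_distance, max_dist=2):
        if word in dictionary:                     # 1. error detection
            return [word]                          # not a non-word
        scored = [(edit_distance(word, c), c)      # 2. candidate generation
                  for c in dictionary]
        return [c for d, c in sorted(scored)       # 3. ranking by distance
                if d <= max_dist]

    # e.g. correct_word("soper", {"super", "sober", "paper"}, edit_distance)
    # -> ['sober', 'super', 'paper']   (distances 1, 1 and 2)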

Minimum Edit Distance Technique


✓ The minimum edit distance between the misspelled word and a possible correct
candidate is defined as the minimum number of edit operations needed to transform the
misspelled word into the correct candidate. By edit operations we mean insertions,
deletions and substitutions of a single character (alphabet) to transform one word into
the other. The minimum number of such operations required to effect the transform is
commonly known as the Levenshtein distance, named after Vladimir Levenshtein,
who first used this metric as a distance. As an example, inspect the way in which you
could transform the word drive into the word time and arrive at the distance 3
between them: drive → trive (substitute t for d) → tive (delete r) → time (substitute m for v).
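
The following is a minimal dynamic-programming sketch of the Levenshtein distance in
Python (the standard textbook recurrence; the function name is illustrative).

    def edit_distance(source, target):
        m, n = len(source), len(target)
        # dist[i][j] = edits needed to turn source[:i] into target[:j]
        dist = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            dist[i][0] = i                         # i deletions
        for j in range(n + 1):
            dist[0][j] = j                         # j insertions
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = 0 if source[i - 1] == target[j - 1] else 1
                dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                                 dist[i][j - 1] + 1,        # insertion
                                 dist[i - 1][j - 1] + sub)  # substitution/match
        return dist[m][n]

    print(edit_distance("drive", "time"))   # -> 3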

✓ A variant of the Levenshtein distance is the Damerau-Levenshtein distance, which also takes
into account the transposition of two adjacent characters in addition to insertion, deletion and
substitution.

(c) Context-Dependent Error Detection and Correction


✓ These processes try, in addition to detecting errors, to determine whether the corrected word fits
into the context of the sentence. They are naturally more complex to implement and
require more resources than the previous methods. How would you correct the wise words
of Lord Buddha-
"Peace comes from within"
if it were typed as-
"Piece comes from within"?
Note that the first word in both these statements is a correct word.
✓ This involves the correction of real-word errors, i.e. those that result in another valid word.
Non-word errors that have more than one potential correction also fall in this category.
The strategies commonly used are based on traditional and statistical natural
language processing techniques.
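
A toy Python sketch of context-dependent real-word correction over confusion sets using
word-bigram counts; the confusion sets and the counts are invented for illustration, whereas
a real system would estimate them from a large corpus.

    # Sets of words that are easily confused with one another.
    CONFUSION_SETS = [{"peace", "piece"}, {"quite", "quiet"}]

    # Hypothetical bigram counts, standing in for corpus statistics.
    BIGRAM_COUNTS = {
        ("peace", "comes"): 120,
        ("piece", "comes"): 2,
        ("piece", "of"): 900,
        ("peace", "of"): 40,
    }

    def score(candidate, next_word):
        return BIGRAM_COUNTS.get((candidate, next_word), 0)

    def correct(words):
        corrected = list(words)
        for i in range(len(words) - 1):
            for cset in CONFUSION_SETS:
                if words[i] in cset:
                    # Keep the confusable word that best fits the next word.
                    corrected[i] = max(cset, key=lambda c: score(c, words[i + 1]))
        return corrected

    print(correct("piece comes from within".split()))
    # -> ['peace', 'comes', 'from', 'within']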

• Soundex Algorithm
✓ The Soundex algorithm can be effectively used as a simple phonetics-based spell checker. It
makes use of rules based on the phonetics of the language in question. We discuss this
algorithm with reference to English.
✓ The Soundex algorithm uses a code to check for the closest word. The code was used to index
names in the U.S. census. The code for a word consists of its first letter followed by three
numbers that encode the remaining consonants. Those consonants that generate the same sound
have the same number. Thus the labials B, F, P and V all share the same number, 1.
✓ Here is the algorithm:
- Remove all punctuation marks and capitalize the letters in the word.
- Retain the first letter of the word.
- Remove any occurrence of the letters - A, E, I, O, U, H, W, Y apart from the very first
letter.
- Replace the letters (other than the first) by the numbers shown in the table below. If two or
more adjacent letters, not separated by vowels, have the same numeric value, retain only
one of them.
- Return the first four characters; pad with zeroes if there are fewer than four.
Table: Substitutions for generating the Soundex code

Letter(s)               Substitute with integer
B, F, P, V              1
C, G, J, K, S, X, Z     2
D, T                    3
L                       4
M, N                    5
R                       6

✓ Normally the Soundex code has the form: first character of the word, Code_1, Code_2,
Code_3. The table below shows the Soundex codes for a few words.


Table: Soundex codes for some words

Word                 Soundex code
Great, great         G630
Network, network     N362
Henry, Henary        H560
Torn                 T650
Worn                 W650
Horn                 H650

✓ Note that the last three words differ only in their starting letter. This algorithm can
thus be used to measure the similarity of two words. This measure can then be used to find
possible good candidates for correction to be effected for a misspelled word such as rorn
(viz. torn, worn, horn).
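
The following is a minimal Python sketch of the algorithm described above; its outputs match
the codes in the table (e.g. Great → G630, Network → N362, Henry → H560). Treating every
removed letter as a separator between equal-valued codes is an implementation assumption
based on the steps listed earlier.

    # Letter-to-digit substitutions from the table above.
    CODES = {}
    for letters, digit in [("BFPV", "1"), ("CGJKSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")]:
        for ch in letters:
            CODES[ch] = digit

    def soundex(word):
        word = "".join(ch for ch in word.upper() if ch.isalpha())
        if not word:
            return ""
        digits = []
        for ch in word[1:]:
            d = CODES.get(ch)          # None for A, E, I, O, U, H, W, Y
            if d is None:
                digits.append(None)    # removed letters separate duplicates
            elif not digits or digits[-1] != d:
                digits.append(d)       # drop adjacent repeats of a value
        code = word[0] + "".join(d for d in digits if d)
        return (code + "000")[:4]      # pad with zeroes; keep four characters

    for w in ["Great", "Network", "Henry", "Torn", "Worn", "Horn", "rorn"]:
        print(w, soundex(w))
    # Great G630, Network N362, Henry H560, Torn T650,
    # Worn W650, Horn H650, rorn R650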

Question Bank for IV and V module

1. Explain various learning Techniques with examples.


2. What is an analogy? Explain two types of Analogy with examples.
3. What is formal learning theory? Explain with examples.
4. What is an expert system? List and explain various expert systems.
5. What is an expert system shell? Explain with an example.
6. What is knowledge Acquisition? Discuss various tools for KA.
7. List and describe the steps in Natural language processing (NLP)
8. What is Syntactic and Semantic analysis? Explain.
9. Explain Focus in understanding.
10. What are spell checking techniques? Explain.

Concentrate on

1. List and Describe the steps in Natural language processing (NLP)


2. What are spell checking techniques? Explain.
3. Explain rote learning with an example.
4. What do you mean by learning? Discuss Inductive learning with an example.
5. What is an analogy? Explain derivational Analogy.
6. Write short notes on the following Expert systems: Prospector, Design Advisor
