Module-4
GAME PLAYING
OVERVIEW
Games hold an inexplicable fascination for many people, and the notion that computers might play
games has existed at least as long as computers. Charles Babbage, the nineteenth-century computer
architect, thought about programming his Analytical Engine to play chess and later of building a
machine to play tic-tac-toe. Two of the pioneers of the science of information and computing
contributed to the fledgling computer game-playing literature. A few years later, Alan Turing described
a chess-playing program, although he never built it. (Read, just to know)
✓ A game is defined as a sequence of choices where each choice is made from a number of
discrete alternatives. Each sequence ends in a certain outcome, and every outcome has a
definite value for the opening player. In the case of two-player games, the players have
opposite goals.
✓ There were two reasons that games appeared to be a good domain in which to explore machine
intelligence:
They provide a structured task in which it is very easy to measure success or failure.
They did not obviously require large amounts of knowledge. They were thought to be
solvable by straightforward search from the starting state to a winning position.
✓ The first of these reasons remains valid and accounts for continued interest in the area of
game playing by machine. Unfortunately, the second is not true for any but the simplest games.
For example, consider chess.
The average branching factor is around 35.
In an average game, each player might make 50 moves.
So in order to examine the complete game tree, we would have to examine 35^100 positions.
✓ Thus it is clear that a program that simply does a straightforward search of the game tree will
not be able to select even its first move during the lifetime of its opponent. Some kind of
heuristic search procedure is necessary.
✓ To improve the effectiveness of a search-based problem-solving program two things can be
done:
Improve the generate procedure so that only good moves (or paths) are generated.
Improve the test procedure so that the best moves (or paths) will be recognized and
explored first.
✓ The ideal way to use a search procedure to find a solution to a problem is to generate moves
through the problem space until a goal state is reached. In the context of game-playing
programs, a goal state is one in which we win.
✓ In order to choose the best moves (ply), these methods use a static evaluation function. This
function is similar to that of the heuristic function h' in the A* algorithm.
➢ THE MINIMAX SEARCH PROCEDURE
• Step 1: The algorithm generates the complete game tree and applies the static evaluation
function to the terminal states. In this example the terminal values are, from left to
right: -1, 4, 2, 6, -3, -5, 0 and 7.
• Step 2: Now we find the utility values for the Maximizer. Its initial value is -∞, so
each terminal value is compared with this initial value to determine the higher node
value; the algorithm takes the maximum among them all.
• For node D: max(-1, -∞) = -1, then max(-1, 4) = 4
• For node E: max(2, -∞) = 2, then max(2, 6) = 6
• For node F: max(-3, -∞) = -3, then max(-3, -5) = -3
• For node G: max(0, -∞) = 0, then max(0, 7) = 7
Step 3: In the next step it is the Minimizer's turn, so it compares all node values with +∞ and
finds the third-layer node values.
• For node B: min(4, 6) = 4
• For node C: min(-3, 7) = -3
Step 4: Now it is the Maximizer's turn; it again chooses the maximum of all node values and finds the
value for the root node: max(4, -3) = 4. In this game tree there are only 4 layers, so we reach the
root node immediately, but in real games there will be more than 4 layers.
That was the complete workflow of the minimax two-player game.
o The main drawback of the minimax algorithm is that it becomes very slow for complex games such as
chess and Go. Such games have a huge branching factor, and the player has many choices to
consider. This limitation of the minimax algorithm can be mitigated by alpha-beta pruning, which
is discussed in the next topic.
The minimax algorithm performs a complete depth-first exploration of the game tree. If the maximum
depth of the tree is m and there are b legal moves at each point (the branching factor), then the
time complexity of the algorithm is O(b^m). The space complexity is O(bm). This algorithm serves
as a basis for the mathematical analysis of games.
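The minimax procedure just described can be captured in a few lines of code. The following Python sketch is illustrative (it is not from the original text): the game tree is hard-coded as nested lists whose leaves hold the static evaluation values from the worked example above.

def minimax(node, maximizing):
    # A leaf holds its static evaluation value.
    if not isinstance(node, list):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The worked example: terminal values under the max nodes D, E, F, G.
tree = [[[-1, 4], [2, 6]],    # B = min(D, E)
        [[-3, -5], [0, 7]]]   # C = min(F, G)
print(minimax(tree, True))    # prints 4, the value chosen at root A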
➢ ADDING ALPHA-BETA CUTOFFS/ALPHA BETA PRUNING
✓ The alpha-beta pruning strategy is used to reduce the number of tree branches explored,
thereby eliminating large parts of the tree from consideration.
✓ Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique
for the minimax algorithm.
✓ As we have seen, the number of game states the minimax search algorithm has to examine
is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively
cut it in half. There is a technique by which we can compute the correct minimax decision
without checking every node of the game tree; this technique is called pruning. Because it
involves two threshold parameters, alpha and beta, for future expansion, it is called
alpha-beta pruning (also known as the Alpha-Beta algorithm).
✓ Alpha-beta pruning can be applied at any depth of a tree, and it sometimes prunes not only
tree leaves but entire sub-trees.
✓ The two-parameter can be defined as:
o Alpha: The best (highest-value) choice we have found so far at any point along the path of
Maximizer. The initial value of alpha is -∞.
o Beta: The best (lowest-value) choice we have found so far at any point along the path of
Minimizer. The initial value of beta is +∞.
✓ Alpha-beta pruning applied to a standard minimax tree returns the same move as the standard
algorithm does, but it removes all the nodes that do not really affect the final decision and
only make the algorithm slow. By pruning these nodes, it makes the algorithm fast.
The main condition required for alpha-beta pruning is: α ≥ β
o While backtracking the tree, the node values are passed to upper nodes, not the values of alpha
and beta.
o Alpha and beta values are passed only down to child nodes.
Let's take an example of a two-player search tree to understand the working of alpha-beta pruning.
Step 1: At the first step, the Max player starts the first move from node A, where α = -∞ and β = +∞.
These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞, and node B
passes the same values to its child D.
Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared
first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value is also 3.
Step 3: The algorithm now backtracks to node B, where the value of β changes, as this is Min's turn.
β = +∞ is compared with the available subsequent node value: min(+∞, 3) = 3; hence at node B, α = -∞
and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the
values α = -∞ and β = 3 are passed down as well.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is
compared with 5: max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α ≥ β, the right successor
of E is pruned and the algorithm does not traverse it; the value at node E is 5.
Step 5: Next, the algorithm again backtracks the tree, from node B to node A. At node A, alpha
changes to the maximum available value, 3, since max(-∞, 3) = 3, and β = +∞. These two values are
now passed to the right successor of A, which is node C. At node C, α = 3 and β = +∞, and the same
values are passed on to node F.
Step 6: At node F, the value of α is again compared, first with the left child, which is 0:
max(3, 0) = 3; and then with the right child, which is 1: max(3, 1) = 3. α remains 3, but the node
value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; the value of beta now
changes, being compared with 1: min(+∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies
the condition α ≥ β, so the next child of C, which is G, is pruned and the algorithm does not
compute the entire sub-tree G.
Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final
game tree shows the nodes that were computed and the nodes that were never computed. The optimal
value for the maximizer in this example is 3.
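The eight steps above translate directly into code. Below is an illustrative Python sketch of minimax with alpha-beta cutoffs; the tree mirrors the worked example, and the leaves the algorithm prunes carry placeholder values (they are never examined, so any numbers would do).

import math

def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):          # leaf: static value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # cutoff: prune remaining children
                break
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
    return value

# Tree from the worked example. E's second leaf and both leaves of G are
# pruned, so their values here (9, 7, 5) are placeholders: never examined.
tree = [[[2, 3], [5, 9]],     # B: children D and E
        [[0, 1], [7, 5]]]     # C: children F and G
print(alphabeta(tree, -math.inf, math.inf, True))   # prints 3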
The effectiveness of alpha-beta pruning is highly dependent on the order in which each node is examined.
Move order is an important aspect of alpha-beta pruning.
o Worst ordering: In some cases the alpha-beta pruning algorithm prunes none of the leaves of the tree and
works exactly like the minimax algorithm; it then also consumes more time because of the alpha-beta
bookkeeping. Such an ordering is called worst ordering, and it occurs when the best move lies on the right
side of the tree. The time complexity for such an order is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a large amount of pruning happens
in the tree and the best moves occur on the left side of the tree. Because we apply DFS, the algorithm
searches the left of the tree first and can go twice as deep as the minimax algorithm in the same amount
of time. The complexity with ideal ordering is O(b^(m/2)).
➢ ITERATIVE DEEPENING
✓ A number of ideas for searching two-player game trees have led to new algorithms for
single- agent heuristic search. One such idea is iterative deepening, originally used in a
program called CHESS 4.5. Rather than searching to a fixed depth in the game tree,
CHESS 4.5 first searched only a single ply, applying its static evaluation function to the
result of each of its possible moves. It then initiated a new minimax search, this time to a
depth of two ply. This was followed by a three ply search, then a four-ply search, etc. The
name “iterative deepening” derives from the fact that on each iteration, the tree is searched
one level deeper. Figure below depicts this process.
Figure: Iterative Deepening
✓ There are several reasons for using iterative deepening. First, game-playing
programs are subject to time constraints. For example, a chess program may be required to
complete all its moves within two hours. Since it is impossible to know in advance how long
a fixed-depth tree search will take, a program may find itself running out of time.
✓ With iterative deepening, the current search can be aborted at any time and the best
move found by the previous iteration can be played. With effective ordering, the alpha-
beta procedure can prune many more branches, and total search time can be decreased
drastically. This allows more time for deeper iterations.
✓ Depth-first search is efficient in terms of space but requires some cutoff depth in order to
force backtracking when a solution is not found. Breadth-first search is guaranteed to
find the shortest solution path but requires inordinate amounts of space, because all leaf
nodes must be kept in memory. An algorithm called depth-first iterative deepening
(DFID) combines the best aspects of depth-first and breadth-first search:
1. Set SEARCH-DEPTH = 1
2. Conduct a depth-first search to a depth of SEARCH-DEPTH. If a solution path is found,
then return it.
3. Otherwise, increment SEARCH-DEPTH by 1 and go to step 2.
✓ Clearly, DFID will find the shortest solution path to the goal state. Moreover, the
maximum amount of memory used by DFID is proportional to the number of nodes in
that solution path. DFID is slower than depth-first search only by a constant factor.
DFID avoids the problem of choosing cutoffs without sacrificing efficiency, and it is
the optimal algorithm (in terms of space and time) for uninformed search.
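A short Python sketch of DFID follows. The successors and is_goal hooks are illustrative assumptions, since the text describes DFID over an abstract problem space.

def depth_limited_search(state, depth, successors, is_goal, path):
    # Depth-first search that backtracks once `depth` is exhausted.
    if is_goal(state):
        return path
    if depth == 0:
        return None
    for s in successors(state):
        found = depth_limited_search(s, depth - 1, successors, is_goal, path + [s])
        if found is not None:
            return found
    return None

def dfid(start, successors, is_goal, max_depth=50):
    # Steps 1-3 above: repeat depth-first search with an increasing cutoff.
    for depth in range(max_depth + 1):
        found = depth_limited_search(start, depth, successors, is_goal, [start])
        if found is not None:
            return found      # the first solution found is a shortest one
    return None

# Toy space: successors of n are 2n and 2n+1; search for 10 starting at 1.
print(dfid(1, lambda n: [2 * n, 2 * n + 1], lambda n: n == 10))  # [1, 2, 5, 10]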
*******************
Language is meant for communicating about the world. By studying language, we can come to
understand more about the world. The entire language-processing problem is divided into two tasks:
- Processing written text, using lexical, syntactic, and semantic knowledge of the language as well
as the required real-world information.
- Processing spoken language, using all the information needed above plus additional knowledge
about phonology, as well as enough added information to handle the further ambiguities that arise in
speech.
Figure: Features of language that make it both difficult and useful (only a fragment of this
two-column table, The Problem vs. The Good Side, survives)
The Problem: English sentences are incomplete descriptions of the information that they are
intended to convey. For example: Some dogs are outside.
➢ INTRODUCTION
A language is a pair consisting of a source language and a target representation, together with a
mapping between elements of each to the other. The target representation is chosen to be appropriate
for the task at hand.
• Steps in the Process
We can break the process down into the following pieces:
1. Morphological Analysis- Individual words are analyzed into their components, and non-
word tokens, such as punctuation, are separated from the words.
2. Syntactic Analysis- Linear sequences of words are transformed into structures that show how
the words relate to each other. Some word sequences may be rejected if they violate the
language’s rules for how words may be combined. For example, an English syntactic analyzer
would reject the sentence “Boy the go the to store.”
3. Semantic Analysis- The structures created by the syntactic analyzer are assigned meanings.
In other words, a mapping is made between the syntactic structures and objects in the task
domain. Structures for which no such mapping is possible may be rejected. For example, in
most universes, the sentence “Colorless green ideas sleep furiously” would be rejected as
semantically anomalous.
4. Discourse Integration- The meaning of an individual sentence may depend on the sentences
that precede it and may influence the meaning of the sentences that follow it. For example, the
word “it” in the sentence, “John wanted it”, depends on the prior discourse context, while
the word “John” may influence the meaning of later sentences.
5. Pragmatic Analysis- The structure representing what was said is reinterpreted to determine
what was actually meant. For example, the sentence “Do you know what time it is?” should
be interpreted as a request to be told the time.
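As a rough illustration of how these five stages compose, here is a skeletal Python sketch. The stage bodies are stubs and purely illustrative, since a real system embeds substantial knowledge in each stage.

def morphological_analysis(text):
    # e.g. pull apart "Bill's" into "Bill" + possessive suffix (stub: tokenize)
    return text.replace("'s", " 's").rstrip(".").split()

def syntactic_analysis(tokens):
    return {"tokens": tokens}                    # would build a parse tree

def semantic_analysis(parse):
    return {"event": "wanting", "parse": parse}  # would map words to KB objects

def discourse_integration(meaning, context):
    meaning["speaker"] = context["current_user"] # resolve "I", "Bill", ...
    return meaning

def pragmatic_analysis(meaning):
    meaning["intended_effect"] = "perform the request"  # reinterpret intent
    return meaning

sentence = "I want to print Bill's .init file."
result = pragmatic_analysis(
    discourse_integration(
        semantic_analysis(syntactic_analysis(morphological_analysis(sentence))),
        {"current_user": "User068"}))
print(result)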
✓ To make the overall language understanding problem tractable, there are the following two ways of
decomposing a program:
- The processes and the knowledge required to perform the task.
- The global control structure that is imposed on those processes.
✓ Let’s consider an example to see how the individual processes work. In this example, we assume
that the processes happen sequentially. Suppose we have an English interface to an operating
system and the following sentence is typed:
I want to print Bill’s .init file.
Morphological Analysis
Morphological analysis must do the following things:
- Pull apart the word “Bill’s” into the proper noun “Bill” and the possessive suffix “ ’s ”.
- Recognize the sequence “.init” as a file extension that is functioning as an adjective in
the sentence
✓ In addition, this process will usually assign syntactic categories to all the words in the
sentence. This is usually done now because interpretations for affixes (prefixes and
suffixes) may depend on the syntactic category of the complete word.
Syntactic Analysis
✓ One useful thing that some (although not all) syntactic systems do is create a set of entities
we call reference markers (RM). They are shown in parentheses in the parse tree. Each one
corresponds to some entity that has been mentioned in the sentence. These reference markers are
useful later since they provide a place in which to accumulate information about the
entities as we get it.
Semantic Analysis
✓ Semantic analysis must do two important things:
- It must map individual words into appropriate objects in the knowledge base or
database.
- It must create the correct structures to correspond to the way the meaning of the
individual words combine with each other.
✓ For this example, suppose that we have a frame-based knowledge base that contains the
units shown in Figure below.
✓ Then we can generate a partial meaning, with respect to that knowledge base, as shown
in Figure below. Reference marker RM1 corresponds to the top-level event of the
sentence. It is a wanting event in which the speaker (denoted by “I”) wants a printing
event to occur in which the same speaker prints a file whose extension is “.init” and
whose owner is Bill.
Figure: A Partial Meaning for a Sentence (semantic analysis)
Discourse Integration
✓ At this point, we have figured out what kinds of things this sentence is about. But we
do not yet know which specific individuals are being referred to. Specifically, we do not
know to whom the pronoun “I” or the proper noun “Bill” refers. To pin down these
references requires an appeal to a model of the current discourse context, from which we
can learn that the current user is User068 and that the only person named “Bill” about
whom we could be talking is User073. Once the correct reference for Bill is known, we
can also determine exactly which file is being referred to: F1 is the only file with the
extension “.init” that is owned by Bill.
Pragmatic Analysis
✓ We now have a complete description, in the terms provided by our knowledge base, of
what was said. The final step toward effective understanding is to decide what to do as a
result. One possible thing to do is to record what was said as a fact and be done with it.
For some sentences, including this one, the intended effect is different. We can discover
this intended effect by applying a set of rules that characterize cooperative dialogues. In
this example, we use the fact that when the user claims to want something that the
system is capable of performing, then the system should go ahead and do it. This
produces the final meaning shown in Figure below.
Figure: Representing the Intended Meaning
➢ SYNTACTIC PROCESSING
✓ Syntactic processing is the step in which a flat input sentence is converted into a hierarchical
structure that corresponds to the units of meaning in the sentence. This process is called
parsing. It plays an important role in many natural language understanding systems for two
reasons:
- Semantic Processing must operate on sentence constituents. If there is no syntactic parsing
step, then the semantics system must decide on its own constituents. Syntactic parsing is
computationally less expensive than is semantic processing. Thus it can play a significant
role in reducing overall system complexity.
- Although it is often possible to extract the meaning of a sentence without using grammatical
facts, it is not always possible to do so. Consider, for example, the sentences
The satellite orbited Mars.
Mars orbited the satellite.
✓ In the second sentence, syntactic facts demand an interpretation in which a planet (Mars)
revolves around a satellite, despite the evident impossibility of such a scenario.
✓ Although there are many ways to produce a parse, almost all the systems that are actually used
have two main components:
- A declarative representation, called a grammar, of the syntactic facts about the language.
- A procedure, called a parser, that compares the grammar against input sentences to
produce parsed structures.
✓ The simplest structure to build is a parse tree, which simply records the rules and how they
are matched. Figure below shows the parse tree that would be produced for the sentence “Bill
printed the file” using this grammar.
✓ Notice that every node of the parse tree corresponds either to an input word or to a non-
terminal in the grammar. Each level in the parse tree corresponds to the application of one
grammar rule. As a result, it should be clear that a grammar specifies two things about a
language:
- Its weak generative capacity, by which we mean the set of sentences that are contained
within the language. This set (called the set of grammatical sentences) is made up of
precisely those sentences that can be completely matched by a series of rules in the
grammar.
- Its strong generative capacity, by which we mean the structure to be assigned to each
grammatical sentence of the language.
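To make the grammar/parser split concrete, here is a toy Python sketch of a recursive-descent parser driven by a small declarative grammar. The grammar rules are assumptions chosen for illustration (the figure's actual grammar is not reproduced above), picked so that "Bill printed the file" parses.

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["PN"], ["DET", "N"]],
    "VP": [["V", "NP"]],
}
LEXICON = {"PN": {"Bill"}, "DET": {"the"}, "N": {"file"}, "V": {"printed"}}

def parse(cat, words, i):
    # Try to recognize category `cat` at position i; return (tree, next_i).
    if cat in LEXICON:
        if i < len(words) and words[i] in LEXICON[cat]:
            return (cat, words[i]), i + 1
        return None
    for rhs in GRAMMAR[cat]:            # try each rule for this non-terminal
        children, j = [], i
        for sym in rhs:
            result = parse(sym, words, j)
            if result is None:
                break
            child, j = result
            children.append(child)
        else:
            return (cat, *children), j  # every symbol of the rule matched
    return None

tree, _ = parse("S", "Bill printed the file".split(), 0)
print(tree)
# ('S', ('NP', ('PN', 'Bill')),
#       ('VP', ('V', 'printed'), ('NP', ('DET', 'the'), ('N', 'file'))))

Note how every node of the resulting tree is either an input word or a non-terminal of the grammar, exactly as described above.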
✓ Figure below shows the top-level ATN of that example in a notation that a program could
read.
Figure: An ATN Grammar in List Form
✓ To see how an ATN works, let us trace the execution of this ATN as it parses the following
sentence: The long file has printed.
✓ This execution proceeds as follows:
1. Begin in state S.
2. Push to NP.
3. Do a category test to see if “the” is a determiner.
4. This test succeeds, so set the DETERMINER register to DEFINITE and go to state Q6.
5. Do a category test to see if “long” is an adjective.
6. This test succeeds, so append “long” to the list contained in the ADJS register. (This list
was previously empty.) Stay in state Q6.
7. Do a category test to see if “file” is an adjective. This test fails.
8. Do a category test to see if “file” is a noun. This test succeeds, so set the NOUN register
to “file” and go to state Q7.
9. Push to PP.
10. Do a category test to see if “has” is a preposition. This test fails so pop and signal failure.
11. There is nothing else that can be done from state Q7, so pop and return the structure (NP
(FILE (LONG) DEFINITE)). The return causes the machine to be in state Q1, with the
SUBJ register set to the structure just returned and the TYPE register set to DCL.
12. Do a category test to see if “has” is a verb. This test succeeds, so set the AUX register to NIL
and set the V register to “has”. Go to state Q4.
13. Push to state NP. Since the next word, “printed”, is not a determiner or a proper noun,
NP will pop and return failure.
14. The only other thing to do in state Q4 is to halt. But more input remains, so a complete
parse has not been found. Backtracking is now required.
15. The last choice point was at state Q1, so return there. The registers AUX and V must be
unset.
16. Do a category test to see if “has” is an auxiliary. This test succeeds, so set the AUX
register to “has” and go to state Q3.
17. Do a category test to see if “printed” is a verb. This test succeeds, so set the V register to
“printed”. Go to state Q4.
18. Now, since the input is exhausted, Q4 is an acceptable final state. Pop and return the
structure (S DCL (NP (FILE (LONG) DEFINITE)) HAS (VP PRINTED)).
This structure is the output of the parse.
• Unification Grammars
Here, we describe a declarative approach to representing grammars.
When a parser applies grammar rules to a sentence, it performs two major kinds of operations:
- Matching (of sentence constituents to grammar rules)
- Building structure (corresponding to the result of combining constituents)
✓ Matching and structure building are operations that unification performs naturally. So an
obvious candidate for representing grammars is some structure on which we can define a
unification operator.
✓ Directed acyclic graphs (DAGs) can do exactly that.
✓ Each DAG represents a set of attribute-value pairs. For example, the graphs corresponding to
the words “the” and “file” are:
[CAT: DET
LEX: the]
[CAT: N
LEX: file
NUMBER: SING]
Now consider the grammar rule NP → DET N. We can write this rule as the following graph:
[CONSTITUENT 1: [CAT: DET
LEX: {1}]
CONSTITUENT 2: [CAT: N
LEX: {2}
NUMBER: {3}]
BUILD: [NP: [DET: {1}
HEAD: {2}
NUMBER: {3}]]]
This rule should be read as follows: two constituents, described in the subgraphs labeled
CONSTITUENT 1 and CONSTITUENT 2, are to be combined. The first must be of CAT
DET. We do not care what its lexical entry is, but whatever it is will be bound to the label
{1}. The second constituent must be of CAT N. Its lexical entry will be bound to the label {2},
and its number will be bound to the label {3}. The result of combining these two
constituents is described in the subgraph labeled BUILD. This result will be a graph
corresponding to an NP with three attributes: DET, HEAD, and NUMBER. The values for
all these attributes are to be taken from the appropriate pieces of the graphs that are being
combined by the rule.
Algorithm: Graph-Unify
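Since the Graph-Unify algorithm itself is not reproduced here, the following Python sketch approximates unification over feature structures encoded as nested dictionaries. It is a simplification: a full Graph-Unify would also maintain shared labels such as {1} (reentrancy).

def unify(f1, f2):
    # Return the most general structure compatible with both, or None.
    if not isinstance(f1, dict) or not isinstance(f2, dict):
        return f1 if f1 == f2 else None     # atomic values must be equal
    result = dict(f1)
    for attr, value in f2.items():
        if attr in result:
            sub = unify(result[attr], value)
            if sub is None:                 # clash, e.g. CAT: DET vs CAT: N
                return None
            result[attr] = sub
        else:
            result[attr] = value            # attribute only in f2: keep it
    return result

the_word  = {"CAT": "DET", "LEX": "the"}
file_word = {"CAT": "N", "LEX": "file", "NUMBER": "SING"}
print(unify(file_word, {"CAT": "N"}))   # succeeds: matches CONSTITUENT 2
print(unify(the_word,  {"CAT": "N"}))   # None: a determiner is not a noun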
➢ SEMANTIC ANALYSIS
Lexical Processing
✓ The first step in any semantic processing system is to look up the individual words in a
dictionary (or lexicon) and extract their meaning. Many words have several meanings, and
it may not be possible to choose the correct one just by looking at the word itself. For
example, the word “diamond” might have the following set of meanings:
- A geometrical shape with four equal sides.
- A baseball field
- An extremely hard and valuable gemstone
✓ To select the correct meaning for the word “diamond” in the sentence,
John saw Susan’s diamond shimmering from across the room.
it is necessary to know that neither geometrical shapes nor baseball fields shimmer,
whereas gemstones do.
✓ The process of determining the correct meaning of an individual word is called word
sense disambiguation or lexical disambiguation. It is done by associating, with each word
in the lexicon, information about the contexts in which each of the word’s senses may
appear. Each of the words in a sentence can serve as part of the context in which the
meanings of the other words must be determined.
✓ Sometimes only very straightforward information about each word sense is necessary. For
example, the baseball field interpretation of “diamond” could be marked as a LOCATION.
Then the correct meaning of “diamond” in the sentence “I’ll meet you at the diamond”
could easily be determined if the fact that at requires a TIME or a LOCATION as its object
were recorded as part of the lexical entry for “at”. Such simple properties of word senses are
called semantic markers. Other useful semantic markers are
- PHYSICAL_OBJECT
- ANIMATE_OBJECT
- ABSTRACT_OBJECT
✓ Using these markers, the correct meaning of “diamond” in the sentence “I dropped my
diamond” can be computed. As part of its lexical entry, the verb “drop” will specify that its
object must be a PHYSICAL_OBJECT. The gemstone meaning of “diamond” will be
marked as a PHYSICAL_OBJECT. So it will be selected as the appropriate meaning in this
context.
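The marker-based selection just described is easy to sketch in Python; the sense inventory and the verb's requirement below are illustrative assumptions.

SENSES = {
    "diamond": [
        ("geometrical shape", {"ABSTRACT_OBJECT"}),
        ("baseball field",    {"LOCATION"}),
        ("gemstone",          {"PHYSICAL_OBJECT"}),
    ],
}
OBJECT_REQUIREMENT = {"drop": "PHYSICAL_OBJECT"}  # from "drop"'s lexical entry

def disambiguate(verb, noun):
    # Pick the first sense whose markers satisfy the verb's requirement.
    required = OBJECT_REQUIREMENT[verb]
    for meaning, markers in SENSES[noun]:
        if required in markers:
            return meaning
    return None

print(disambiguate("drop", "diamond"))   # -> gemstone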
Sentence-Level Processing
✓ Several approaches to the problem of creating a semantic representation of a sentence have
been developed, including the following:
- Semantic grammars, which combine syntactic, semantic, and pragmatic knowledge into
a single set of rules in the form of a grammar.
- Case grammars, in which the structure that is built by the parser contains some semantic
information.
- Conceptual parsing, in which syntactic and semantic knowledge are combined into a single
interpretation system that is driven by the semantic knowledge.
- Approximately compositional semantic interpretation, in which semantic processing
is applied to the result of performing a syntactic parse.
• Semantic Grammars
✓ A semantic grammar is a context-free grammar in which the choice of nonterminal and
production rules is governed by semantic as well as syntactic function. In addition, there
is usually a semantic action associated with each grammar rule. The result of parsing and
applying all the associated semantic actions is the meaning of the sentence. An example of a
fragment of a semantic grammar is shown in Figure below. This grammar defines part of a
simple interface to an operating system.
Figure: A Semantic Grammar
✓ A semantic grammar can be used by a parsing system in exactly the same ways in which a
strictly syntactic grammar could be used. Figure below shows the result of applying this
semantic grammar to the sentence “ I want to print Bill’s .init file. ”
Figure: The Result of Parsing with a Semantic Grammar
• Case Grammars
✓ Case grammars provide a different approach to the problem of how syntactic and semantic
interpretation can be combined. Grammar rules are written to describe syntactic rather
than semantic regularities. But the structures the rules produce correspond to semantic
relations rather than to strictly syntactic ones. As an example, consider the two sentences and
the simplified forms of their conventional parse trees shown in Figure below.
Although the semantic roles of “Susan” and “the file” are identical in these two sentences,
their syntactic roles are reversed. Each is the subject in one sentence and the object in
the other. Using a case grammar, the interpretations of the two sentences would both be
(Printed (agent Susan)
(object File))
Figure: Syntactic Parses of an Active and a Passive Sentence.
✓ Now consider the sentences “Mother baked for three hours” and “The pie baked for three
hours.” The syntactic structures of these two sentences are almost identical. In one case,
“Mother” is the subject of “baked”, while in the other “the pie” is the subject. But the
relationship between Mother and baking is very different from that between the pie and baking.
A case grammar analysis of these two sentences reflects this difference. The first sentence
would be interpreted as
(Baked (agent Mother)
(time-period 3-hours))
The second would be interpreted as
(Baked (object Pie)
(time-period 3-hours))
There is no clear agreement on exactly what the correct set of deep cases ought to be, but some
obvious ones are the following:
(A) Agent - Instigator of the action (typically animate)
(I) Instrument - Cause of the event or object used in causing the event (typically inanimate)
(D) Dative - Entity affected by the action (typically animate)
(F) Factitive - Object or being resulting from the event
(L) Locative - Place of the event
(S) Source - Place from which something moves
(G) Goal - Place to which something moves
(B) Beneficiary - Being on whose behalf the event occurred (typically animate)
(T) Time - Time at which the event occurred
(O) Object - Entity that is acted upon or that changes; the most general case
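A case-frame representation is simple to mock up. The following illustrative Python sketch (not part of any standard case-grammar system) shows that the active/passive pair and the two "baked" sentences above map to the intended deep-case structures.

def case_frame(verb, **cases):
    # A case frame is just a verb plus a set of deep-case fillers.
    return {"verb": verb, **cases}

active  = case_frame("printed", agent="Susan", object="File")  # Susan printed the file
passive = case_frame("printed", agent="Susan", object="File")  # The file was printed by Susan
print(active == passive)   # True: the deep cases are identical

baked_1 = case_frame("baked", agent="Mother", time="3-hours")
baked_2 = case_frame("baked", object="Pie", time="3-hours")
print(baked_1 == baked_2)  # False: same surface subject role, different deep cases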
• Conceptual Parsing
✓ Conceptual parsing, like semantic grammars, is a strategy for finding both the structure
and the meaning of a sentence in one step. Conceptual parsing is driven by a dictionary that
describes the meaning of words as conceptual dependency (CD) structures.
✓ Parsing a sentence into a conceptual dependency representation is similar to the process of
parsing using a case grammar. The first step in mapping a sentence into its CD representation
involves a syntactic processor that extracts the main noun and verb. It also determines the
syntactic category and aspectual class of the verb (i.e., stative, transitive, or intransitive).
The conceptual processor then takes over. It makes use of a verb-ACT dictionary, which contains
an entry for each environment in which a verb can appear. Figure below shows the
dictionary entries associated with the verb “want”. These three entries correspond to
three kinds of wanting:
Figure: The Verb-ACT Dictionary
Figure: Two CD Interpretations of a sentence John went to the park with the peacock
Montague Semantics
✓ Although the exact form of semantic mapping rules in this approach depends on the way that
the syntactic grammar is defined, we illustrate the idea of compositional semantic rules in
Figure below.
Figure: Some Semantic Interpretation Rules
✓ Another use for knowledge is to enable the system to accept meanings that it has not
been explicitly told about. Consider the following sentences as examples:
1. Sue likes to read Joyce.
2. Washington backed out of the summit talks.
3. The stranded explorer ate squirrels.
✓ Suppose our system has only the following meanings for the words "Joyce",
"Washington", and "squirrel":
1. Joyce - instance: Author; last-name: Joyce
2. Washington - instance: City; name: Washington
3. Squirrel - isa: Rodent
✓ But suppose that we also have only the following meanings for the verbs in these sentences:
1. Read - isa: Mental-event; object: must be <printed-material>
2. Back out - isa: Mental-event; agent: must be <animate-entity>
3. Eat - isa: Ingestion-event; object: must be <food>
➢ DISCOURSE AND PRAGMATIC PROCESSING
✓ To understand even a single sentence, it is necessary to consider the discourse and pragmatic
context in which the sentence was uttered. These issues become even more important when we
want to understand texts and dialogues. There are a number of important relationships that
may hold between phrases and parts of their discourse contexts, including:
Identical entities. Consider the text
- Bill had a red balloon.
- John wanted it.
The word "it" should be identified as referring to the red balloon. References such
as this are called anaphoric references or anaphora.
Parts of entities. Consider the text
- Sue opened the book she just bought.
- The title page was torn.
The phrase "the title page" should be recognized as being part of the book that was just
bought.
Parts of actions. Consider the text
- John went on a business trip to New York.
- He left on an early morning flight.
Taking a flight should be recognized as part of going on a trip.
Entities involved in actions. Consider the text
- My house was broken into last week.
- They took the TV and the stereo.
The pronoun "they" should be recognized as referring to the burglars who broke into
the house.
Elements of sets. Consider the text
- The decals we have in stock are stars, the moon, and a flag.
- I'll take two moons.
The moons in the second sentence should be understood to be some of the moons
mentioned in the first sentence. Notice that to understand the second sentence at all
requires that we use the context of the first sentence to establish that the word "moons"
means moon decals.
Names of individuals. Consider the text
- Dave went to the movies.
Dave should be understood to be some person named Dave. Although there are many people
named Dave, the discourse context should make it possible to select a particular one.
• Modeling Beliefs
✓ In order for a program to be able to participate intelligently in a dialogue, it must be able to
represent not only its own beliefs about the world, but also its knowledge of the other
dialogue participants' beliefs about the world.
✓ To achieve their goals, people exhibit plans. For example, to understand a simple text in
which John locates and uses a screwdriver, we need to make use of the fact that John was
exploiting the operator USE (by A of P to perform G), which can be described as:
USE (A, P, G):
Precondition: KNOW-WHAT (A, LOCATION (P))
NEAR (A,P)
HAS-CONTROL-OF (A, P)
READY (P)
Post condition: DONE (G)
✓ In other words, for A to use P to perform G, A must know the location of P, A must be near
P, A must have control of P (for example, I cannot use a screwdriver that you are holding
and refuse to give to me), and P must be ready for use (for example, I cannot use the
broken screwdriver).
• Speech Acts
✓ Language is a form of behavior. We use it as one way to accomplish our goals. In essence,
we make communicative plans in much the same sense that we make plans for anything else .
In fact, John could have achieved his goal of locating a screwdriver by asking someone
where it was rather than by looking for it. The elements of communicative plans are
called speech acts. For example, we can define the basic speech act A INFORM B of P as
follows:
INFORM (A, B, P)
Precondition: BELIEVE (A, P)
KNOW-WHAT (A, LOCATION (B))
Post condition: BELIEVE (B, BELIEVE (A, P))
BELIEVE-IN (B, A) → BELIEVE (B, P)
✓ To execute this operation, A must believe P and A must know where B is. The result of this
operator is that B believes that A believes P, and if B believes in the truth of what A says,
then B also believes P.
✓ We can define other speech acts similarly. For example, we can define ASK-WHAT (in
which A asks B the value of some predicate P):
ASK-WHAT (A, B, P)
Pre condition: KNOW-WHAT (A, LOCATION (B))
KNOW-WHAT (B, P)
WILLING-TO-PERFORM (B, INFORM (B, A, P))
Post condition: KNOW-WHAT (A, P)
✓ This is the action that John could have performed as an alternative way of finding a
screwdriver. We can also define other speech acts, such as A REQUESTs B to perform R:
REQUEST (A, B, R)
Pre condition: KNOW-WHAT (A, LOCATION (B))
CAN-PERFORM (B, R)
WILLING-TO-PERFORM (B, R)
Post condition: WILL (PERFORM (B, R))
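Speech acts, like other plan operators, can be sketched as precondition/effect rules. Below is an illustrative Python sketch of INFORM applied to a set of believed facts; the tuple-based predicate encoding is an assumption, not part of the original formalism.

def inform(state, a, b, p):
    # Preconditions: A believes P, and A knows where B is.
    pre = {("BELIEVE", a, p), ("KNOW-LOCATION", a, b)}
    if not pre <= state:
        return None                        # operator is not applicable
    effects = {("BELIEVE", b, ("BELIEVE", a, p))}
    if ("BELIEVE-IN", b, a) in state:      # if B believes in A, B believes P
        effects.add(("BELIEVE", b, p))
    return state | effects

world = {("BELIEVE", "A", "P"), ("KNOW-LOCATION", "A", "B"),
         ("BELIEVE-IN", "B", "A")}
print(inform(world, "A", "B", "P"))
# adds BELIEVE(B, BELIEVE(A, P)) and, since B believes in A, BELIEVE(B, P)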
• Conversational Postulates
✓ This analysis of language is complicated by the fact that we do not always say exactly what
we mean. Instead, we often use indirect speech acts, such as "Do you know what time it
is?" or "It sure is cold in here". Computational treatments of this phenomenon usually rely
on models of the speaker's goals and of ways that those goals might reasonably be
achieved by using language.
✓ Fortunately, there is a certain amount of regularity in people's goals and in the way language
can be used to achieve them. This regularity gives rise to a set of conversational postulates,
which are rules about conversation that are shared by all speakers. Usually these rules are
followed. Sometimes they are not, but when this happens, the violation of the rules
communicates something in itself. Some of these conversational postulates are:
Sincerity Conditions: For a request by A of B to do R to be sincere, A must want B to
do R, A must assume B can do R, A must assume B is willing to do R, and A must
believe that B would not have done R anyway. If A attempts to verify one of these
conditions by asking a question of B, that question should normally be interpreted by B
as equivalent to the request R. For example
A: Can you open the door?
Reasonableness Conditions: For a request by A of B to do R to be reasonable, A must
have a reason for wanting R done, A must have a reason for assuming that B can do R, A
must have a reason for assuming that B is willing to do R, and A must have a reason for
assuming that B was not already planning to do R. Together with the sincerity conditions
described above, these conditions account for the coherence of the following interchange:
A: Can you open the door?
B: Why do you want it open?
Appropriate Conditions: For a statement to be appropriate, it must provide the correct
amount of information, it must accurately reflect the speaker's beliefs, it must be concise
and unambiguous, and it must be polite. These conditions account for A's response in
the following interchange:
A: Who won the race?
B: Someone with long, dark hair.
A: I thought you knew all the runners.
✓ A inferred from B's incomplete response that B did not know who won the race, because if B
had known she would have provided a name. Of course, sometimes people "cop out" of
these conventions. In the following dialogue, B is explicitly copping out:
A: Who is going to be nominated for the position?
B: I'm sorry, I cannot answer that question.
• Corpora
✓ The term "corpus" is derived from the Latin word meaning "body". The term could be used
to define a collection of written text or spoken words of a language. In general a corpus could
be defined as a large collection of segments of a language. These segments are selected and
ordered based on some explicit linguistic criteria so that they may be used to depict a sample
of that language. Corpora maybe available in the form of a collection of raw text or in a
more sophisticated annotated or marked-up form wherein information about the words is
also included to ease the process of language processing.
✓ Several kinds of corpora exist. These include ones containing written or spoken language,
new or old texts, and texts from either one or several languages. Textual content could mean
the content of complete books, newspapers, magazines, web pages, journals, speeches, etc.
The British National Corpus (BNC), for instance, is said to contain around a hundred million
written and spoken language samples. Some corpora contain texts on a particular domain of
study or on a dialect; such corpora are called sublanguage corpora. Others may focus
specifically on select areas like medicine, law, literature, novels, etc.
➢ SPELL CHECKING
✓ A spell checker is one of the basic tools required for language processing. It is used in a wide
variety of computing environments including word processing, character or text recognition
systems, speech recognition and generation. Spell checking is one of the pre-processing
formalities for most natural language processes. Studies on computer-aided spell checking date
back to the early 1960s and, with spell checking being applied to new languages, continue to be
one of the challenging areas in information processing. Spell checking involves identifying
words and non-words and also suggesting possible alternatives for correction. Most
available spell checkers focus on processing isolated words and do not take the
context into account. For instance, if you try typing:
"Henry sar on the box"
in Microsoft Word 2003 and see what suggestions it offers, you will find that the correct word
sat is missing. Now try typing this:
"Henry at on the box"
Here you will find that the error remains undetected, as the word at is spelt correctly as an
isolated word. Observe that context plays a vital role in spell checking.
• Spelling Errors
✓ Surveys of misspelled words have found that most non-words were the result of single-error
misspellings. Based on these surveys, three causes of error were identified:
- Insertion: Insertion of an extra letter while typing, e.g. maximum typed as maxiimum.
The extra i has been inserted within the word.
- Deletion: A letter missing or not typed in a word, e.g. netwrk instead of
network.
- Substitution: Typing of a letter in place of the correct one, as in intellugence, wherein the
letter i has been wrongly substituted by u.
✓ Spelling errors may be classified into the following types:
- Typographic errors: As the name suggests, these errors are caused by mistakes
committed while typing. A typical example is netwrk instead of network.
- Orthographic errors: These, on the other hand, result from a lack of comprehension of
the concerned language on the part of the user. Examples of such spelling errors are
arithemetic, wellcome and accomodation (for arithmetic, welcome and accommodation).
- Phonetic errors: These result from poor cognition on the part of the listener. The word
rough could be spelt as ruff and listen as lisen. Note that both the misspelled words ruff
and lisen have the same phonetic pronunciation as their actual spellings. Such errors may
distort a misspelled word more than a single typographic editing action (viz. an insertion,
deletion, transposition, or substitution error), as in the case of ruff. Words like
piece, peace and peas, reed and read, and quite and quiet may all be spelt correctly but
can lead to confusion depending on the context.
✓ A variant of the Levenshtein distance is the Damerau-Levenshtein distance, which also takes
into account the transposition of two adjacent characters, in addition to insertion, deletion and
substitution.
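A standard dynamic-programming sketch of the Damerau-Levenshtein distance (the optimal-string-alignment variant) follows in Python, tested on the error examples from above; "teh" is an added transposition example.

def damerau_levenshtein(s, t):
    m, n = len(s), len(t)
    # d[i][j] = distance between the first i chars of s and first j chars of t
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if (i > 1 and j > 1 and s[i - 1] == t[j - 2]
                    and s[i - 2] == t[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)   # transposition
    return d[m][n]

print(damerau_levenshtein("netwrk", "network"))             # 1: one deletion
print(damerau_levenshtein("intellugence", "intelligence"))  # 1: one substitution
print(damerau_levenshtein("teh", "the"))                    # 1: one transposition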
• Soundex Algorithm
✓ The Soundex algorithm can be used effectively as a simple phonetics-based spell checker. It
makes use of rules derived from the phonetics of the language in question. We discuss this
algorithm with reference to English.
✓ The Soundex algorithm uses a code to check for the closest word. The code was originally used
to index names in the U.S. census. The code for a word consists of its first letter followed by
three numbers that encode the remaining consonants. Consonants that generate the same sound
have the same number; thus the labials B, F, P and V are all assigned the number 1.
✓ Here is the algorithm:
- Remove all punctuation marks and capitalize the letters in the word.
- Retain the first letter of the word.
- Remove any occurrence of the letters - A, E, I, O, U, H, W, Y apart from the very first
letter.
- Replace the letters (other than the first) by the numbers shown in Table below. If two or
more adjacent letters, not separated by vowels, have the same numeric value, retain only
one of them.
- Return the first four characters; pad with zeroes if there are fewer than four.
Table: Substitutions for generating the Soundex code
Letter(s)                   Substitute with
B, F, P, V                  1
C, G, J, K, Q, S, X, Z      2
D, T                        3
L                           4
M, N                        5
R                           6
✓ For example, the words torn, worn, horn and rorn receive the codes T650, W650, H650 and R650;
they differ only in the starting letter. The algorithm can thus be used to measure the
similarity of two words, and this measure can then be used to find good candidate corrections
for a misspelled word such as rorn (viz. torn, worn, horn).
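The algorithm above can be implemented directly; the following Python sketch follows the five steps as stated.

CODES = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
         **dict.fromkeys("DT", "3"), "L": "4",
         **dict.fromkeys("MN", "5"), "R": "6"}

def soundex(word):
    word = "".join(ch for ch in word.upper() if ch.isalpha())  # step 1
    first = word[0]                                            # step 2
    rest = [ch for ch in word[1:] if ch not in "AEIOUHWY"]     # step 3
    code = ""
    for ch in rest:                                            # step 4
        digit = CODES[ch]
        if not code or code[-1] != digit:   # collapse adjacent duplicates
            code += digit
    return (first + code + "000")[:4]                          # step 5

for w in ("torn", "worn", "horn", "rorn"):
    print(w, soundex(w))   # T650 W650 H650 R650: only the first letter differs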