Theory of Computation Class Notes

based on the books by Sudkamp and by Hopcroft, Motwani and Ullman


Contents
1 Introduction
1.1 Sets
1.2 Functions and Relations
1.3 Countable and uncountable sets
1.4 Proof Techniques

2 Languages and Grammars
2.1 Languages
2.2 Regular Expressions
2.3 Grammars
2.4 Classification of Grammars and Languages
2.5 Normal Forms of Context-Free Grammars
2.5.1 Chomsky Normal Form (CNF)
2.5.2 Greibach Normal Form (GNF)

3 Finite State Automata


List of Figures
2.1 Derivation tree


Chapter 1

Introduction
1.1 Sets

A set is a collection of elements. To indicate that x is an element of the set S, we write x ∈ S. The statement that x is not in S is written x ∉ S. A set is specified by enclosing some description of its elements in curly braces; for example, the set of all natural numbers 0, 1, 2, ... is denoted by

N = {0, 1, 2, 3, ...}

We use ellipses (i.e., "...") when the meaning is clear; thus Jn = {1, 2, 3, ..., n} represents the set of all natural numbers from 1 to n. When the need arises, we use more explicit set-builder notation, as in S = {i | i > 0, i is even}. We read this as: S is the set of all i such that i is greater than zero and i is even. Considering a universal set U, the complement S̄ of S is defined as

S̄ = {x | x ∈ U and x ∉ S}

The usual set operations are union (∪), intersection (∩), and difference (-), defined as

S1 ∪ S2 = {x | x ∈ S1 or x ∈ S2}

S1 ∩ S2 = {x | x ∈ S1 and x ∈ S2}
S1 - S2 = {x | x ∈ S1 and x ∉ S2}

The set with no elements, called the empty set, is denoted by ∅. It is obvious that

S ∪ ∅ = S - ∅ = S
S ∩ ∅ = ∅

and that the complement of ∅ is U, while the complement of S̄ is S itself.

A set S1 is said to be a subset of S if every element of S1 is also an element of S. We write this as

S1 ⊆ S

If S1 ⊆ S, but S contains an element not in S1, we say that S1 is a proper subset of S; we write this as

S1 ⊂ S

The following identities are known as De Morgan's laws:

1. the complement of S1 ∪ S2 equals S̄1 ∩ S̄2,
2. the complement of S1 ∩ S2 equals S̄1 ∪ S̄2.

We prove the first law by showing that both sides contain exactly the same elements:

x belongs to the complement of S1 ∪ S2
⇔ x ∈ U and x ∉ S1 ∪ S2                          (def. complement)
⇔ x ∈ U and not (x ∈ S1 or x ∈ S2)               (def. union)
⇔ x ∈ U and (x ∉ S1 and x ∉ S2)                  (negation of disjunction)
⇔ (x ∈ U and x ∉ S1) and (x ∈ U and x ∉ S2)
⇔ x ∈ S̄1 and x ∈ S̄2                              (def. complement)
⇔ x ∈ S̄1 ∩ S̄2                                    (def. intersection)
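De Morgan's laws are easy to check on concrete sets. The following Python sketch is an added illustration, not part of the proof above; the sets U, S1 and S2 are arbitrary choices made only for the check.

U = set(range(10))     # universal set (arbitrary choice)
S1 = {1, 2, 3, 4}
S2 = {3, 4, 5, 6}

def complement(S):
    """Complement with respect to the universal set U."""
    return U - S

# First law: the complement of a union is the intersection of the complements.
assert complement(S1 | S2) == complement(S1) & complement(S2)
# Second law: the complement of an intersection is the union of the complements.
assert complement(S1 & S2) == complement(S1) | complement(S2)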

If S1 and S2 have no common element, that is, S1 ∩ S2 = ∅, then the sets are said to be disjoint.

A set is said to be finite if it contains a finite number of elements; otherwise it is infinite. The size of a finite set is the number of elements in it; this is denoted by |S| (or #S).

A set may have many subsets. The set of all subsets of a set S is called the power set of S and is denoted by 2^S or P(S). Observe that 2^S is a set of sets.

Example 1.1.1 If S is the set {1, 2, 3}, then its power set is

2^S = {∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}

Here |S| = 3 and |2^S| = 8. This is an instance of a general result: if S is finite, then

|2^S| = 2^|S|

Proof: (By induction on the number of elements in S.)

Basis: |S| = 1. Then 2^S = {∅, S}, so |2^S| = 2 = 2^1.

Induction Hypothesis: Assume the property holds for all sets S with k elements.

Induction Step: Show that the property holds for all sets with k + 1 elements. Denote

Sk+1 = {y1, y2, ..., yk+1} = Sk ∪ {yk+1},  where Sk = {y1, y2, ..., yk}.

Every subset of Sk+1 either is a subset of Sk or is obtained from a subset of Sk by adding yk+1:

2^(Sk+1) = 2^(Sk) ∪ { {yk+1}, {y1, yk+1}, {y2, yk+1}, ..., {yk, yk+1}, {x, y, yk+1} for x, y ∈ Sk, ..., Sk+1 }

2^(Sk) has 2^k elements by the induction hypothesis. Since each subset containing yk+1 is obtained from exactly one subset of Sk, the number of sets in 2^(Sk+1) which contain yk+1 is also 2^k. Consequently |2^(Sk+1)| = 2 · 2^k = 2^(k+1).
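The result |2^S| = 2^|S| can also be checked computationally for small sets. The Python sketch below is an added illustration; the set S = {1, 2, 3} is simply the set of Example 1.1.1.

from itertools import combinations

def power_set(S):
    """All subsets of S, returned as a set of frozensets."""
    elems = list(S)
    return {frozenset(c)
            for r in range(len(elems) + 1)
            for c in combinations(elems, r)}

S = {1, 2, 3}
P = power_set(S)
print(len(P))                    # 8
assert len(P) == 2 ** len(S)     # |2^S| = 2^|S|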

A set which has as its elements ordered sequences of elements from other sets is called the Cartesian product of those sets. For the Cartesian product of two sets, which itself is a set of ordered pairs, we write

S = S1 × S2 = {(x, y) | x ∈ S1, y ∈ S2}

Example 1.1.2 Let S1 = {1, 2} and S2 = {1, 2, 3}. Then

S1 × S2 = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)}

Note that the order in which the elements of a pair are written matters; the pair (3, 2) is not in S1 × S2.

Example 1.1.3 If A is the set of throws of a coin, i.e., A = {head, tail}, then

A × A = {(head, head), (head, tail), (tail, head), (tail, tail)}

is the set of all possible throws of two coins. The notation is extended in an obvious fashion to the Cartesian product of more than two sets; generally

S1 × S2 × ... × Sn = {(x1, x2, ..., xn) | xi ∈ Si}
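A Cartesian product of finite sets can be enumerated directly. The short Python sketch below is an added illustration using the sets of Example 1.1.2; it lists the pairs and confirms that the order of the components matters.

from itertools import product

S1, S2 = {1, 2}, {1, 2, 3}
S1xS2 = set(product(S1, S2))
print(sorted(S1xS2))             # [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)]
assert (2, 3) in S1xS2 and (3, 2) not in S1xS2    # the order of the components matters
assert len(S1xS2) == len(S1) * len(S2)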

1.2

Functions and Relations

A function is a rule that assigns to elements of one set (the function domain) a unique element of another set (the range). We write

f : S1 → S2

to indicate that the domain of the function f is a subset of S1 and that the range of f is a subset of S2. If the domain of f is all of S1, we say that f is a total function on S1; otherwise f is said to be a partial function on S1.

1. Domain of f = {x ∈ S1 | (x, y) ∈ f for some y ∈ S2} = Df
2. Range of f = {y ∈ S2 | (x, y) ∈ f for some x ∈ S1} = Rf

3. The restriction of f to A ⊆ S1 is f|A = {(x, y) ∈ f | x ∈ A}
4. The inverse f⁻¹ : S2 → S1 is {(y, x) | (x, y) ∈ f}
5. f : S1 → S1 is called a function on S1


6. If x ∈ Df then f is defined at x; otherwise f is undefined at x.
7. f is a total function if Df = S1.
8. f is a partial function if Df ⊂ S1.
9. f is an onto function or surjection if Rf = S2. If Rf ⊂ S2, then f is a function from S1 (or Df) into S2.
10. f is a one-to-one function or injection if (f(x) = z and f(y) = z) implies x = y.
11. A total function f is a bijection if it is both an injection and a surjection.

A function can be represented by a set of pairs {(x1, y1), (x2, y2), ...}, where each xi is an element in the domain of the function, and yi is the corresponding value in its range. For such a set to define a function, each xi can occur at most once as the first element of a pair. If this is not satisfied, such a set is called a relation.

A specific kind of relation is an equivalence relation. A relation r on X is an equivalence relation if it satisfies three rules:

the reflexivity rule: (x, x) ∈ r for all x ∈ X;
the symmetry rule: if (x, y) ∈ r then (y, x) ∈ r, for all x, y ∈ X;
the transitivity rule: if (x, y) ∈ r and (y, z) ∈ r then (x, z) ∈ r, for all x, y, z ∈ X.

An equivalence relation on X induces a partition of X into disjoint subsets called equivalence classes Xj, with ∪j Xj = X, such that elements from the same class belong to the relation, and any two elements taken from different classes are not in the relation.

Example 1.2.1 The relation congruence mod m (modulo m) on the set of the integers Z: i ≡ j (mod m) if i - j is divisible by m. Z is partitioned into m equivalence classes:

{..., -2m, -m, 0, m, 2m, ...}
{..., -2m + 1, -m + 1, 1, m + 1, 2m + 1, ...}
{..., -2m + 2, -m + 2, 2, m + 2, 2m + 2, ...}
...
{..., -m - 1, -1, m - 1, 2m - 1, 3m - 1, ...}
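The partition induced by congruence mod m can be computed directly for a finite window of integers. The following Python sketch is an added illustration (m = 3 and the window -10..10 are arbitrary choices); it groups integers by equivalence class and checks that related elements differ by a multiple of m.

def classes_mod(m, lo=-10, hi=10):
    """Group the integers in [lo, hi] into the equivalence classes of congruence mod m."""
    classes = {}
    for i in range(lo, hi + 1):
        classes.setdefault(i % m, []).append(i)
    return classes

for representative, members in sorted(classes_mod(3).items()):
    print(representative, members)
# Two integers i and j are in the same class exactly when i - j is divisible by m:
assert all((i - j) % 3 == 0
           for members in classes_mod(3).values()
           for i in members for j in members)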


1.3

Countable and uncountable sets

Cardinality is a measure that compares the size of sets. The cardinality of a finite set is the number of elements in it, and can thus be obtained by counting the elements of the set. Two sets X and Y have the same cardinality if there is a total one-to-one function from X onto Y (i.e., a bijection from X to Y). The cardinality of a set X is less than or equal to the cardinality of a set Y if there is a total one-to-one function from X into Y. We denote the cardinality of X by #X or |X|.

A set that has the same cardinality as the set of natural numbers N is said to be countably infinite or denumerable. Sets that are either finite or denumerable are referred to as countable sets. The elements of a countably infinite set can be indexed (or enumerated) using N as the index set; the index mapping yields an enumeration of the countably infinite set. Sets that are not countable are said to be uncountable. The cardinality of denumerable sets is #N = ℵ0 (aleph-zero); the cardinality of the set of the real numbers is #R = ℵ1 (aleph-one). A set is infinite if it has a proper subset of the same cardinality.

Example 1.3.1 The set J = N - {0} is countably infinite; the function s(n) = n + 1 defines a one-to-one mapping from N onto J. The set J, obtained by removing an element from N, has the same cardinality as N. Clearly, there is no one-to-one mapping of a finite set onto a proper subset of itself. It is this property that differentiates finite and infinite sets.

Example 1.3.2 The set of odd natural numbers is denumerable. The function f(n) = 2n + 1 establishes the bijection between N and the set of the odd natural numbers.

The one-to-one correspondence between the natural numbers and the set of all integers exhibits the countability of the set of integers. A correspondence is defined by the function

f(n) = (n + 1)/2  if n is odd
f(n) = -n/2       if n is even

Example 1.3.3 #Q+ = #J = #N, where Q+ is the set of the rational numbers p/q > 0, with p and q integers, q ≠ 0.
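The correspondence between N and Z can be tabulated to make the bijection concrete. The Python sketch below is an added illustration (the range of tested values is arbitrary); it implements f and checks that it hits each integer in a small window exactly once.

def f(n):
    """The bijection from the natural numbers onto the integers described above."""
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

print([f(n) for n in range(11)])        # [0, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5]
hits = [f(n) for n in range(21)]
assert len(set(hits)) == len(hits)      # injective on this window
assert set(hits) == set(range(-10, 11)) # onto the integers -10..10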

1.4

Proof Techniques

We will give examples of proof by induction, proof by contradiction, and proof by Cantor diagonalization.

In proof by induction, we have a sequence of statements P1, P2, ..., about which we want to make some claim. Suppose that we know that the claim holds for all statements P1, P2, ..., up to Pn. We then try to argue that this implies that the claim also holds for Pn+1. If we can carry out this inductive step for all positive n, and if we have some starting point for the induction, we can say that the claim holds for all statements in the sequence. The starting point for an induction is called the basis. The assumption that the claim holds for statements P1, P2, ..., Pn is the induction hypothesis, and the argument connecting the induction hypothesis to Pn+1 is the induction step. Inductive arguments become clearer if we explicitly show these three parts.

Example 1.4.1 Let us prove

Σ_{i=0}^{n} i^2 = n(n+1)(2n+1)/6

by mathematical induction.

(a) We establish the basis by substituting 0 for n in the identity and observing that both sides are 0.

(b) For the induction hypothesis, we assume that the property holds with n = k:

Σ_{i=0}^{k} i^2 = k(k+1)(2k+1)/6

(c) In the induction step, we show that the property holds for n = k + 1; i.e.,

Σ_{i=0}^{k+1} i^2 = (k+1)(k+2)(2k+3)/6

Since

Σ_{i=0}^{k+1} i^2 = Σ_{i=0}^{k} i^2 + (k+1)^2,

and in view of the induction hypothesis, we need only show that

k(k+1)(2k+1)/6 + (k+1)^2 = (k+1)(k+2)(2k+3)/6

The latter equality follows from simple algebraic manipulation.
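A quick numerical check of the identity, which of course does not replace the induction proof, can be done in a few lines of Python (an added illustration; the bound 50 is arbitrary):

def sum_of_squares(n):
    return sum(i * i for i in range(n + 1))

for n in range(50):
    assert sum_of_squares(n) == n * (n + 1) * (2 * n + 1) // 6
print("formula verified for n = 0, ..., 49")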

In a proof by contradiction, we assume the opposite (the contrary) of the property to be proved; we then show that this assumption leads to a contradiction and hence is invalid.

Example 1.4.2 Show that √2 is not a rational number. As in proofs by contradiction, we assume the contrary of what we want to show. Here we assume that √2 is a rational number, so that it can be written as

√2 = n/m,

where n and m are integers without a common factor. Rearranging √2 = n/m, we have

2m^2 = n^2

Therefore n^2 must be even. This implies that n is even, so that we can write n = 2k; then

2m^2 = 4k^2  and  m^2 = 2k^2


Therefore m is even. But this contradicts our assumption that n and m have no common factor. Thus, m and n as in √2 = n/m cannot exist, and √2 is not a rational number.

This example exhibits the essence of a proof by contradiction. By making a certain assumption we are led to a contradiction of the assumption or of some known fact. If all steps in our argument are logically sound, we must conclude that our initial assumption was false.

To illustrate Cantor's diagonalization method, we prove that the set A = {f | f a total function, f : N → N} is uncountable. This is essentially a proof by contradiction; so we assume that A is countable, i.e., that we can give an enumeration f0, f1, f2, ... of A. To arrive at a contradiction, we construct a new function g as

g(x) = fx(x) + 1  for all x ∈ N

The function g is constructed from the diagonal of the function values of the fi ∈ A, as represented in the table below. For each x, g differs from fx on input x. Hence g does not appear in the given enumeration. However, g is total and g : N → N, so g should appear in any enumeration of A. Such a g can be constructed for any chosen enumeration; this leads to a contradiction. Therefore A cannot be enumerated; hence A is uncountable.

        0       1       2      ...
f0    f0(0)   f0(1)   f0(2)    ...
f1    f1(0)   f1(1)   f1(2)    ...
f2    f2(0)   f2(1)   f2(2)    ...
f3    f3(0)   f3(1)   f3(2)    ...
...
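The diagonal construction can be mimicked for any finite list of functions. The Python sketch below is only an added illustration (a genuinely infinite enumeration cannot be stored), but it shows how the diagonal function is guaranteed to differ from every function in the list.

# A few total functions from N to N, standing in for the assumed enumeration f0, f1, f2, ...
enumeration = [
    lambda x: 0,            # f0
    lambda x: x,            # f1
    lambda x: x * x,        # f2
    lambda x: 2 * x + 7,    # f3
]

def g(x):
    """The diagonal function g(x) = fx(x) + 1: it differs from fx at input x."""
    return enumeration[x](x) + 1

for x, fx in enumerate(enumeration):
    assert g(x) != fx(x)    # g cannot equal any fx in the list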

Remarks: The set of all infinite sequences of 0s and 1s is uncountable. With each infinite sequence of 0s and 1s we can associate a real number in the range [0, 1). As a consequence, the set of real numbers in the range [0, 1) is uncountable. Note that the set of all real numbers is also uncountable.


Chapter 2

Languages and Grammars


2.1 Languages

We start with a finite, nonempty set Σ of symbols, called the alphabet. From the individual symbols we construct strings (over or on Σ), which are finite sequences of symbols from the alphabet. The empty string λ is the string with no symbols at all. Any set of strings over/on Σ is a language over/on Σ.

Example 2.1.1 Σ = {c}

L1 = {cc} L2 = {c, cc, ccc}

L3 = {w | w = c^k, k = 0, 1, 2, ...} = {λ, c, cc, ccc, ...}

Example 2.1.2 Σ = {a, b}

L1 = {ab, ba, aa, bb}

L2 = {w | w = (ab)^k, k = 0, 1, 2, 3, ...} = {λ, ab, abab, ababab, ...}

The concatenation of two strings w and v is the string obtained by appending the symbols of v to the right end of w; that is, if

w = a1 a2 ... an  and  v = b1 b2 ... bm,

then the concatenation of w and v, denoted by wv, is

wv = a1 a2 ... an b1 b2 ... bm

If w is a string, then w^n is the string obtained by concatenating w with itself n times. As a special case, we define w^0 = λ


for all w. Note that λw = wλ = w for all w.

The reverse of a string is obtained by writing the symbols in reverse order; if w is a string as shown above, then its reverse w^R is

w^R = an ... a2 a1

If w = uv, then u is said to be a prefix and v a suffix of w.

The length of a string w, denoted by |w|, is the number of symbols in the string. Note that |λ| = 0. If u and v are strings, then the length of their concatenation is the sum of the individual lengths,

|uv| = |u| + |v|

Let us show that |uv| = |u| + |v|. To prove this by induction on the length of strings, let us define the length of a string recursively, by

|a| = 1
|wa| = |w| + 1

for all a ∈ Σ and any string w on Σ. This definition is a formal statement of our intuitive understanding of the length of a string: the length of a single symbol is one, and the length of any string is incremented by one if we add another symbol to it.

Basis: |uv| = |u| + |v| holds for all u of any length and all v of length 1 (by definition).
Induction Hypothesis: We assume that |uv| = |u| + |v| holds for all u of any length and all v of length 1, 2, ..., n.
Induction Step: Take any v of length n + 1 and write it as v = wa. Then |v| = |w| + 1 and |uv| = |uwa| = |uw| + 1. By the induction hypothesis (which is applicable since w is of length n), |uw| = |u| + |w|, so that |uv| = |u| + |w| + 1 = |u| + |v|, which completes the induction step.

If Σ is an alphabet, then we use Σ* to denote the set of strings obtained by concatenating zero or more symbols from Σ. We denote Σ+ = Σ* - {λ}. The sets Σ* and Σ+ are always infinite. A language can thus be defined as a subset of Σ*. A string w in a language L is also called a word or a sentence of L.

Example 2.1.3 Σ = {a, b}. Then

Σ* = {λ, a, b, aa, ab, ba, bb, aaa, aab, ...}

The set {a, aa, aab} is a language on Σ. Because it has a finite number of words, we call it a finite language. The set

L = {a^n b^n | n ≥ 0}


is also a language on Σ. The strings aabb and aaaabbbb are words in the language L, but the string abb is not in L. This language is infinite.

Since languages are sets, the union, intersection, and difference of two languages are immediately defined. The complement of a language is defined with respect to Σ*; that is, the complement of L is

L̄ = Σ* - L

The concatenation of two languages L1 and L2 is the set of all strings obtained by concatenating any element of L1 with any element of L2; specifically,

L1 L2 = {xy | x ∈ L1 and y ∈ L2}

We define L^n as L concatenated with itself n times, with the special case L^0 = {λ} for every language L.

Example 2.1.4 L1 = {a, aaa}, L2 = {b, bbb}. Then L1 L2 = {ab, abbb, aaab, aaabbb}.

Example 2.1.5 For L = {a^n b^n | n ≥ 0}, we have L L = L^2 = {a^n b^n a^m b^m | n ≥ 0, m ≥ 0}. The string aabbaaabbb is in L^2.

The star-closure or Kleene closure of a language is defined as

L* = L^0 ∪ L^1 ∪ L^2 ∪ ...  (the union of L^i over all i ≥ 0)

and the positive closure as

L+ = L^1 ∪ L^2 ∪ ...  (the union of L^i over all i ≥ 1)
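For finite languages, concatenation and a length-bounded portion of the Kleene closure are easy to compute. The Python sketch below is an added illustration (the function names and the length bound are arbitrary, and since a true L* is infinite only strings up to the bound are produced), using the languages of Examples 2.1.4 and 2.1.2.

def concat(L1, L2):
    """Concatenation of two languages: all strings xy with x in L1 and y in L2."""
    return {x + y for x in L1 for y in L2}

def kleene_star(L, max_len):
    """The strings of L* whose length does not exceed max_len."""
    result = {""}                       # L^0 = {lambda}
    frontier = {""}
    while frontier:
        frontier = {w + x for w in frontier for x in L
                    if len(w + x) <= max_len} - result
        result |= frontier
    return result

L1, L2 = {"a", "aaa"}, {"b", "bbb"}
print(sorted(concat(L1, L2)))           # ['aaab', 'aaabbb', 'ab', 'abbb']  (Example 2.1.4)
print(sorted(kleene_star({"ab"}, 6)))   # ['', 'ab', 'abab', 'ababab']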


2.2

Regular Expressions

Definition 2.2.1 Let Σ be a given alphabet. Then,

1. ∅, {λ}, and {a} for every a ∈ Σ are regular sets. They are called primitive regular sets.
2. If S1 and S2 are regular sets, so are S1*, S1 ∪ S2 and S1 S2.
3. A set is a regular set if it is a primitive regular set or can be derived from the primitive regular sets by applying a finite number of the operations ∪, * and concatenation.

Definition 2.2.2 Let Σ be a given alphabet. Then,

1. ∅, λ (representing {λ}) and a (representing {a}) for every a ∈ Σ are regular expressions. They are called primitive regular expressions.
2. If r1 and r2 are regular expressions, so are (r1), (r1*), (r1 + r2) and (r1 r2).
3. A string is a regular expression if it is a primitive regular expression or can be derived from the primitive regular expressions by applying a finite number of the operations +, * and concatenation.

A regular expression denotes a regular set. Regarding the notation of regular expressions, texts will usually print them in boldface; however, we assume that it will be understood that, in the context of regular expressions, λ is used to represent {λ} and a is used to represent {a}.

Example 2.2.1 b*(ab*ab*)* is a regular expression.

Example 2.2.2

(c + da*bb)* = {c, dbb, dabb, daabb, ...}* = {λ, c, cc, ..., dbb, dbbdbb, ..., dabb, dabbdabb, ..., cdbb, cdabb, ...}

Beyond the usual properties of + and concatenation, important equivalences involving regular expressions concern properties of the closure (Kleene star) operation. Some are given below, where α, β, γ stand for arbitrary regular expressions:

1. (α*)* = α*
2. α*α* = α*
3. α* + α = α*
4. (α + β)γ = αγ + βγ
5. α(βα)* = (αβ)*α
6. (α + β)* = (α* + β*)*
7. (α + β)* = (α*β*)*
8. (α + β)* = (α*β)*α*

In general, the distributive law does not hold for the closure operation. For example, the statement α* + β* = (α + β)* is false, because the left-hand side denotes no string in which both α and β appear, while the right-hand side does.
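Membership in the set denoted by a regular expression can be checked mechanically. The Python sketch below is an added illustration; it uses Python's re module, whose syntax writes union as | rather than +, to test a few strings against the expression (c + da*bb)* of Example 2.2.2.

import re

# (c + da*bb)* from Example 2.2.2, written in Python's regular expression syntax.
pattern = re.compile(r"(c|da*bb)*")

for w in ["", "c", "cc", "dbb", "dabb", "dabbdabb", "cdabb", "dab", "ab"]:
    print(repr(w), pattern.fullmatch(w) is not None)
# The first seven strings are in the denoted set; "dab" and "ab" are not.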


2.3

Grammars
Definition 2.3.1 A grammar G is defined as a quadruple

G = (V, Σ, S, P)

where

V is a finite set of symbols called variables or nonterminals,
Σ is a finite set of symbols called terminal symbols or terminals,
S ∈ V is a special symbol called the start symbol,
P is a finite set of productions (or rules, or production rules).

We assume V and Σ are non-empty and disjoint sets. Production rules specify the transformation of one string into another. They are of the form

x → y

where x ∈ (V ∪ Σ)+ and y ∈ (V ∪ Σ)*. Given a string w of the form

w = uxv

we say that the production x → y is applicable to this string, and we may use it to replace x with y, thereby obtaining a new string z = uyv; we write w ⇒ z and say that w derives z, or that z is derived from w. Successive strings are derived by applying the productions of the grammar in arbitrary order. A production can be used whenever it is applicable, and it can be applied as often as desired. If

w1 ⇒ w2 ⇒ w3 ⇒ ... ⇒ w

we say that w1 derives w, and write w1 ⇒* w. The * indicates that an unspecified number of steps (including zero) can be taken to derive w from w1. Thus w ⇒* w is always the case. If we want to indicate that at least one production must be applied, we can write w ⇒+ v.

Let G = (V, Σ, S, P) be a grammar. Then the set

L(G) = {w ∈ Σ* | S ⇒* w}

is the language generated by G. If w ∈ L(G), then the sequence

S ⇒ w1 ⇒ w2 ⇒ ... ⇒ w

is a derivation of the sentence (or word) w. The strings S, w1, w2, ..., w are called sentential forms of the derivation.

Example 2.3.1 Consider the grammar

G = ({S}, {a, b}, S, P)

with P given by

S → aSb
S → λ

Then S ⇒ aSb ⇒ aaSbb ⇒ aabb, so we can write S ⇒* aabb. The string aabb is a sentence in the language generated by G.

Example 2.3.2 P:

<sentence> → <Noun phrase> <Verb phrase>
<Noun phrase> → <Determiner> <Noun phrase> | <Adjective> <Noun>
<Noun phrase> → <Article> <Noun>
<Verb phrase> → <Verb> <Noun phrase>
<Determiner> → This
<Adjective> → Old
<Noun> → Man | Bus
<Verb> → Missed
<Article> → The

Example 2.3.3

<expression> → <variable> | <expression> <operation> <expression>
<variable> → A | B | C | ... | Z
<operation> → + | - | * | /

Leftmost Derivation

<expression> ⇒ <expression> <operation> <expression>
⇒ <variable> <operation> <expression>
⇒ A <operation> <expression>
⇒ A + <expression>
⇒ A + <expression> <operation> <expression>
⇒ A + <variable> <operation> <expression>
⇒ A + B <operation> <expression>
⇒ A + B * <expression>
⇒ A + B * <variable>
⇒ A + B * C


Figure 2.1: Derivation tree

This is a leftmost derivation of the string A + B * C in the grammar (corresponding to A + (B * C)). Note that another leftmost derivation can be given for the above expression. A grammar G (such as the one above) is called ambiguous if some string in L(G) has more than one leftmost derivation. An unambiguous grammar for the language is the following:

<expr> → <multi expr> | <multi expr> <add op> <expr>
<multi expr> → <variable> | <variable> <multi op> <variable>
<multi op> → * | /
<add op> → + | -
<variable> → A | B | C | ... | Z

Note that, for an inherently ambiguous language L, every grammar that generates L is ambiguous.

Example 2.3.4 G : S → λ | aSb | bSa | SS, and L = {w | na(w) = nb(w)}. Show that L(G) = L.

1. L(G) ⊆ L. (All strings derived by G are in L.) For w ∈ L(G), every production of G adds a number of a's equal to the number of b's it adds; hence na(w) = nb(w), so w ∈ L.

2. L ⊆ L(G). Let w ∈ L. By definition of L, na(w) = nb(w). We show that w ∈ L(G) by induction on the length of w.


Basis: λ is in both L and L(G). For |w| = 2: the only two strings of length 2 in L are ab and ba, and

S ⇒ aSb ⇒ ab
S ⇒ bSa ⇒ ba

Induction Hypothesis: For w ∈ L with 2 ≤ |w| ≤ 2i, we assume that w ∈ L(G).

Induction Step: Let w1 ∈ L, |w1| = 2i + 2.

(a) If w1 is of the form w1 = awb (or bwa) where |w| = 2i, then w ∈ L(G) (by the induction hypothesis). We derive w1 = awb using the rule S → aSb, and w1 = bwa using the rule S → bSa.

(b) If w1 = awa or w1 = bwb: let us assign a count of +1 to a and -1 to b; thus for w1 ∈ L the total count is 0.

We now show that the count goes through 0 at least once strictly inside w1 = awa (the case bwb is similar): the count is +1 after the first a, and it must be -1 just before the final a (since the total is 0), so somewhere in between the count goes through 0. Hence w1 = w′w″ where w′ ∈ L and w″ ∈ L. We also have |w′| ≥ 2 and |w″| ≥ 2, so that |w′| ≤ 2i and |w″| ≤ 2i; therefore w′, w″ ∈ L(G) by the induction hypothesis. Then w1 = w′w″ can be derived in G from w′ and w″ using the rule S → SS.

Example 2.3.5 L(G) = {a^(2^n) | n ≥ 0}, with G = (V, T, S, P) where V = {S, [, ], A, D}, T = {a} and

P : S → [A]
    A → a
    DA → AAD
    ] → λ
    [ → [D | λ
    D] → ]

For example, let us derive a^4:

S ⇒ [A] ⇒ [DA] ⇒ [AAD] ⇒ [AA] ⇒ [DAA] ⇒ [AADA] ⇒ [AAAAD] ⇒ [AAAA] ⇒ AAAA] ⇒ AAAA ⇒* aaaa = a^4

Example 2.3.6 L(G) = {w ∈ {a, b, c}* | na(w) = nb(w) = nc(w)}, with V = {A, B, C, S}, T = {a, b, c} and

P : S → λ | ABCS
    AB → BA    AC → CA    BC → CB
    BA → AB    CA → AC    CB → BC
    A → a    B → b    C → c

Derive ccbaba. Solution:

S ⇒ ABCS ⇒ ABCABCS ⇒ ABCABC ⇒* ACBACB ⇒* CABCAB ⇒* CACBBA ⇒ CCABBA ⇒ CCBABA ⇒* ccbaba

Example 2.3.7 S → λ | aSb


L(G) = {λ, ab, aabb, aaabbb, ...}
L = {a^i b^i | i ≥ 0}

To prove that L = L(G):
1. L(G) ⊆ L
2. L ⊆ L(G)

2. L ⊆ L(G): Let w ∈ L, w = a^k b^k. We apply S → aSb (k times), thus

S ⇒* a^k S b^k

and then S → λ, so S ⇒* a^k b^k.

1. L(G) ⊆ L: We need to show that, if w can be derived in G, then w ∈ L. λ is in the language, by definition. We first show that all sentential forms are of the form a^i S b^i, by induction on the length of the sentential form.

Basis: (i = 1) aSb is a sentential form, since S → aSb.

Induction Hypothesis: A sentential form of length 2i + 1 is of the form a^i S b^i.

Induction Step: A sentential form of length 2(i + 1) + 1 = 2i + 3 is derived by applying S → aSb to a sentential form of length 2i + 1, i.e., a^i S b^i ⇒ a^i (aSb) b^i = a^(i+1) S b^(i+1).

To get a sentence, we must apply the production S → λ; i.e., S ⇒* a^i S b^i ⇒ a^i b^i represents all possible derivations; hence G derives only strings of the form a^i b^i (i ≥ 0).
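Claims like this can be spot-checked by brute-force enumeration of derivations. The Python sketch below is an added illustration (the function derive and its length bound are ad hoc, and uppercase letters are taken to be variables); it expands sentential forms of the grammar S → aSb | λ and confirms that every sentence produced has the form a^i b^i.

def derive(rules, start="S", max_len=8):
    """Breadth-first expansion of sentential forms; returns the terminal strings found.
    Uppercase letters are treated as variables, everything else as terminals."""
    sentences, seen, frontier = set(), set(), {start}
    while frontier:
        new_frontier = set()
        for form in frontier:
            for lhs, rhs in rules:
                i = form.find(lhs)
                while i != -1:
                    new = form[:i] + rhs + form[i + len(lhs):]
                    if len(new) <= max_len and new not in seen:
                        seen.add(new)
                        if any(c.isupper() for c in new):
                            new_frontier.add(new)     # still a sentential form
                        else:
                            sentences.add(new)        # a sentence of L(G)
                    i = form.find(lhs, i + 1)
        frontier = new_frontier
    return sentences

rules = [("S", "aSb"), ("S", "")]                 # S -> aSb | lambda
generated = derive(rules)
print(sorted(generated, key=len))                 # ['', 'ab', 'aabb', 'aaabbb']
assert all(w == "a" * (len(w) // 2) + "b" * (len(w) // 2) for w in generated)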

2.4

Classification of Grammars and Languages

A classification of grammars (and the corresponding classes of languages) is given with respect to the form of the grammar rules x → y, into the Type 0, Type 1, Type 2 and Type 3 classes.

Type 0 Unrestricted grammars do not put restrictions on the production rules.

Type 1 If all the grammar rules x → y satisfy |x| ≤ |y|, then the grammar is context-sensitive or Type 1. Such a grammar G generates a language L(G) which is called a context-sensitive language. Note that x has to be of length at least 1 and thereby y too. Hence, it is not possible to derive the empty string in such a grammar.

Type 2 If all production rules are of the form x → y where |x| = 1, then the grammar is said to be context-free or Type 2 (i.e., the left-hand side of each rule is a single variable).

Type 3 If the production rules are of the following forms:

A → xB
A → x

where x ∈ Σ* (a string of terminals or the empty string), and A, B ∈ V (variables), then the grammar is called right linear. Similarly, for a left linear grammar, the production rules are of the form

A → Bx
A → x

For a regular grammar, the production rules are of the form

A → aB
A → a
A → λ

with a ∈ Σ.


A language which can be generated by a regular grammar will (later) be shown to be regular. Note that a language can be derived by a regular grammar if and only if it can be derived by a right-linear grammar, if and only if it can be derived by a left-linear grammar.
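Since the classification is purely syntactic, it can be checked mechanically from the rule list. The Python sketch below is an added illustration under the assumption that uppercase letters denote variables and lowercase letters denote terminals; it reports the most restrictive class that a given set of rules satisfies.

def classify(rules):
    """Classify a grammar from its rules, given as (lhs, rhs) pairs of strings.
    Convention assumed here: uppercase letters are variables, lowercase are terminals."""
    # Type 2: every left-hand side is a single variable.
    context_free = all(len(l) == 1 and l.isupper() for l, r in rules)
    # Type 1: no rule shrinks the sentential form (so no lambda-rules).
    context_sensitive = all(0 < len(l) <= len(r) for l, r in rules)
    # Regular (as defined above): A -> aB, A -> a, or A -> lambda.
    regular = context_free and all(
        r == ""
        or (len(r) == 1 and r.islower())
        or (len(r) == 2 and r[0].islower() and r[1].isupper())
        for l, r in rules)
    if regular:
        return "regular"
    if context_free:
        return "context-free (Type 2)"
    if context_sensitive:
        return "context-sensitive (Type 1)"
    return "unrestricted (Type 0)"

print(classify([("S", "aS"), ("S", "a"), ("S", "")]))    # regular
print(classify([("S", "aSb"), ("S", "")]))               # context-free (Type 2)
print(classify([("AB", "BA"), ("A", "a"), ("B", "b")]))  # context-sensitive (Type 1)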

2.5

Normal Forms of Context-Free Grammars

2.5.1

Chomsky Normal Form (CNF)

Definition 2.5.1 A context-free grammar G = (V, Σ, P, S) is in Chomsky Normal Form if each rule is of the form

i) A → BC
ii) A → a
iii) S → λ

where B, C ∈ V - {S}.

Theorem 2.5.1 Let G = (V, Σ, P, S) be a context-free grammar. There is an algorithm to construct a grammar G′ = (V′, Σ, P′, S) in Chomsky normal form that is equivalent to G (L(G′) = L(G)).

Example 2.5.1 Convert the given grammar G to CNF.

G : S → aABC | a
    A → aA | a
    B → bcB | bc
    C → cC | c

Solution: A CNF equivalent G′ can be given as:

G′ : S → A′T1 | a
     A′ → a
     T1 → A T2
     T2 → B C
     A → A′A | a
     B → B′T3 | B′C′
     B′ → b
     T3 → C′B
     C → C′C | c
     C′ → c

2.5.2

Greibach Normal Form (GNF)

If a grammar is in GNF, then the length of the terminal prefix of the sentential form increases with every application of a grammar rule, which makes it possible to avoid left recursion.


Definition 2.5.2 A context-free grammar G = (V, Σ, P, S) is in Greibach Normal Form if each rule is of the form

i) A → aA1A2...An
ii) A → a
iii) S → λ

Chapter 3

Finite State Automata


Bibliography
[1] Thomas A. Sudkamp, Languages and Machines: An Introduction to the Theory of Computer Science, 3rd edition, Addison-Wesley, 2005.
[2] John E. Hopcroft, Rajeev Motwani and Jeffrey D. Ullman, Introduction to Automata Theory, Languages, and Computation, 2nd edition, Addison-Wesley, 2001.

