
THEORY OF COMPUTATION

UNIT NO: I FINITE AUTOMATA

What is TOC?
In theoretical computer science, the theory of computation is the branch that deals with
whether and how efficiently problems can be solved on a model of computation, using an
algorithm. The field is divided into three major branches: automata theory, computability theory
and computational complexity theory.
In order to perform a rigorous study of computation, computer scientists work with a
mathematical abstraction of computers called a model of computation. There are several
models in use, but the most commonly examined is the Turing machine.
Automata theory
In theoretical computer science, automata theory is the study of abstract machines (or more
appropriately, abstract 'mathematical' machines or systems) and the computational problems that
can be solved using these machines. These abstract machines are called automata.
This automaton consists of
 states (represented in the figure by circles),
 and transitions (represented by arrows).
As the automaton sees a symbol of input, it makes a transition (or jump) to another state,
according to its transition function (which takes the current state and the recent symbol as its
inputs).
Uses of Automata: compiler design and parsing.

Introduction to formal proof:


Basic symbols used:
U – Union
∩ – Intersection
ϵ – Empty string
Φ – Null set
¬ – Negation
' – Complement
=> – Implies

Additive inverse: a+(-a)=0
Multiplicative inverse: a*1/a=1
Universal set U={1,2,3,4,5}
Subset A={1,3}
A’ ={2,4,5}
Absorption law: AU(A ∩B) = A, A∩(AUB) = A

De Morgan’s Law:
(AUB)’ =A’ ∩ B’
(A∩B)’ = A’ U B’
Double complement
(A’)’ =A
A ∩ A’ = Φ

Logic relations:
a → b ≡ ¬a ∪ b
¬(a ∩ b) = ¬a ∪ ¬b

Relations:
Let A and B be two sets; a relation R is a subset of A X B.
Relations used in TOC:
Reflexive: aRa for every a
Symmetric: aRb => bRa
Transitive: aRb, bRc => aRc
If a given relation is reflexive, symmetric and transitive, then the relation is called an equivalence
relation.
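As a small illustration, the following Python sketch checks the three properties for an assumed example relation on {1, 2, 3} (the set A and relation R below are illustrative, not taken from the notes):

A = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2), (2, 1)}   # "related" means equal or both in {1, 2}

reflexive  = all((a, a) in R for a in A)
symmetric  = all((b, a) in R for (a, b) in R)
transitive = all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

print(reflexive, symmetric, transitive)        # True True True, so R is an equivalence relation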

Deductive proof: Consists of a sequence of statements whose truth leads us from some
initial statement, called the hypothesis or the given statement, to a conclusion statement.

Additional forms of proof:


Proof of sets
Proof by contradiction
Proof by counter example

Direct proof (AKA) Constructive proof:


If p is true then q is true
Eg: if a and b are odd numbers then their product is also an odd
number. An odd number can be represented as 2n+1.
a = 2x+1, b = 2y+1
product a X b = (2x+1)(2y+1) = 4xy+2x+2y+1 = 2(2xy+x+y)+1 = 2z+1 (an odd number, where z = 2xy+x+y)
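A quick numeric check of the algebra above, as a minimal Python sketch (the sample range is arbitrary):

for x in range(5):
    for y in range(5):
        a, b = 2*x + 1, 2*y + 1      # two odd numbers
        z = 2*x*y + x + y
        assert a * b == 2*z + 1      # matches (2x+1)(2y+1) = 2(2xy+x+y)+1
print("odd * odd is odd for all sampled values")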

Proof by contrapositive:

Proof by Contradiction:

H and not C implies falsehood: assume the hypothesis H together with the negation of the conclusion C, and derive a contradiction.

This may be regarded as an observation rather than a theorem.

Proof by mathematical Induction:

Languages :

The languages we consider for our discussion are an abstraction of natural languages. That
is, our focus here is on formal languages that need precise and formal definitions.
Programming languages belong to this category.

Symbols :

Symbols are indivisible objects or entities that cannot be defined further. That is, symbols are the
atoms of the world of languages. A symbol is any single object such as a, 0, 1, #, begin,
or do.

Alphabets :

An alphabet is a finite, nonempty set of symbols. The alphabet of a language is normally
denoted by ∑. When more than one alphabet is considered for discussion, subscripts
may be used (e.g. ∑1, ∑2 etc.) or sometimes another symbol like G may also be
introduced.

Strings or Words over Alphabet :

A string or word over an alphabet ∑ is a finite sequence of concatenated symbols of ∑.

Example : 0110, 11, 001 are three strings over the binary alphabet { 0, 1 } .

aab, abcb, b, cc are four strings over the alphabet { a, b, c }.

It is not the case that a string over some alphabet should contain all the symbols from the
alphabet. For example, the string cc over the alphabet { a, b, c } does not contain the
symbols a and b. Hence, it is true that a string over an alphabet is also a string over any
superset of that alphabet.

Length of a string :
The number of symbols in a string w is called its length, denoted by |w|.

Example : |011| = 3, |11| = 2, |b| = 1

Convention : We will use lower-case letters towards the beginning of the English
alphabet (a, b, c, ...) to denote symbols of an alphabet and lower-case letters towards the end
(u, v, w, x, y, z) to denote strings over an alphabet.

Example : Consider the string 011 over the binary alphabet. All the prefixes, suffixes and
substrings of this string are listed below.

Prefixes: ε, 0, 01, 011.


Suffixes: ε, 1, 11, 011.
Substrings: ε, 0, 1, 01, 11, 011.

Note that x is a prefix (suffix or substring) to x, for any string x and ε is a prefix (suffix or
substring) to any string.

A string x is a proper prefix (suffix) of string y if x is a prefix (suffix) of y and x ≠ y.


In the above example, all prefixes except 011 are proper prefixes.

Powers of Strings : For any string x and integer n >= 0, we use x^n to denote the string
formed by sequentially concatenating n copies of x. We can also give an inductive
definition of x^n as follows: x^n = ε if n = 0; otherwise x^n = x x^(n-1).

Reversal :
For any string w = a1 a2 ... an, the reversal of the string is w^R = an ... a2 a1. An inductive definition
of reversal can be given as follows: ε^R = ε and (au)^R = u^R a, for a symbol a and string u.
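The two inductive definitions above can be transcribed directly into a short Python sketch (the function names power and reverse are illustrative):

def power(x, n):
    return "" if n == 0 else x + power(x, n - 1)     # x^n = x x^(n-1), x^0 = empty string

def reverse(w):
    return "" if w == "" else reverse(w[1:]) + w[0]  # (au)^R = u^R a

print(power("ab", 3))   # ababab
print(reverse("011"))   # 110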

Languages
Defn: A language is a set of strings over an alphabet.
 A more restricted definition requires some forms of restrictions on the
strings, i.e., strings that satisfy certain properties
 Defn. The syntax of a language restricts the set of strings that satisfy
certain properties.

Defn: A string over an alphabet Σ is a finite sequence of elements from Σ, which are indivisible objects.
e.g., strings can be words in English.
The set of strings over an alphabet is defined recursively (as given below).
Example: Given Σ = {a, b}, Σ* includes ε, a, b, aa, ab, ba, bb, aaa, …
Defn 2.1.2. A language over an alphabet Σ is a subset of Σ*.
Defn 2.1.3. Concatenation, the fundamental binary operation in the generation of strings
(associative, but not commutative), is defined as:
i. Basis: If length(v) = 0, then v = ε and uv = u.
ii. Recursion: Let v be a string with length(v) = n (> 0). Then v = wa,
for a string w with length n-1 and a ∈ Σ, and uv = (uw)a (see the sketch below).
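The recursive definition of concatenation can be written out as a small Python sketch (concat is an illustrative name; Python's own + operator already concatenates strings):

def concat(u, v):
    if len(v) == 0:            # basis: v is the empty string, so uv = u
        return u
    w, a = v[:-1], v[-1]       # recursion: v = wa
    return concat(u, w) + a

print(concat("ab", "ba"))      # abba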

Convention : Capital letters A, B, C, L, etc. with or without subscripts are normally used
to denote languages.

Set operations on languages : Since languages are sets of strings, we can apply set
operations to languages. Here are some simple examples (though there is nothing new in
it).

Grammar --(generates)--> Language <--(recognizes)-- Automata

Automata: An algorithm or program that automatically recognizes whether a particular string
belongs to the language or not, by checking the grammar of the string.

An automaton is an abstract computing device (or machine). There are different varieties of
such abstract machines (also called models of computation) which can be defined
mathematically.

Every automaton fulfils the following basic requirements.

• Every automaton consists of some essential features as in real computers. It has a
mechanism for reading input. The input is assumed to be a sequence of symbols
over a given alphabet and is placed on an input tape (or written on an input file).
The simpler automata can only read the input one symbol at a time from left to
right and cannot change it. More powerful versions can both read (from left to right or
right to left) and change the input. The automaton can produce output of some form. If the
output in response to an input string is binary (say, accept or reject), then it is
called an accepter. If it produces an output sequence in response to an input
sequence, then it is called a transducer (or automaton with output).
• The automaton may have a temporary storage, consisting of an unlimited number
of cells, each capable of holding a symbol from an alphabet (which may be
different from the input alphabet). The automaton can both read and change the
contents of the storage cells in the temporary storage. The accessing capability of
this storage varies depending on the type of the storage.

Figure 1: The figure above shows a diagrammatic representation of a generic automaton.

Operation of the automaton is defined as follows.


At any point of time the automaton is in some internal state and is reading a particular
symbol from the input tape by using the mechanism for reading input. In the next time
step the automaton then moves to some other internal state (or remains in the same state) as
defined by the transition function. The transition function is based on the current state,
the input symbol read, and the content of the temporary storage. At the same time the content
of the storage may be changed and the input read may be modified. The automaton may
also produce some output during this transition. The internal state, input and the content
of storage at any point define the configuration of the automaton at that point. The
transition from one configuration to the next (as defined by the transition function) is
called a move. A finite state machine or finite automaton is the simplest type of abstract
machine we consider. Any system that is at any point of time in one of a finite number of
internal states, and moves among these states in a defined manner in response to some
input, can be modeled by a finite automaton. It does not have any temporary storage and is
hence a restricted model of computation.

Finite Automata

Automata (singular: automaton) are a particularly simple, but useful, model of
computation. They were initially proposed as a simple model for the behavior of neurons.

States, Transitions and Finite-State Transition System :

Let us first give some intuitive idea about a state of a system and state transitions before
describing finite automata.

Informally, a state of a system is an instantaneous description of that system which gives


all relevant information necessary to determine how the system can evolve from that
point on.

Transitions are changes of states that can occur spontaneously or in response to inputs to
the states. Though transitions usually take time, we assume that state transitions are
instantaneous (which is an abstraction).

Some examples of state transition systems are: digital systems, vending machines, etc.

A system containing only a finite number of states and transitions among them is called a
finite-state transition system.

Finite-state transition systems can be modeled abstractly by a mathematical model called a
finite automaton.

Deterministic Finite (-state) Automata

Informally, a DFA (Deterministic Finite State Automaton) is a simple machine that reads
an input string -- one symbol at a time -- and then, after the input has been completely
read, decides whether to accept or reject the input. As the symbols are read from the tape,
the automaton can change its state to reflect how it reacts to what it has seen so far. If for every
state and input symbol there is exactly one possible move, the machine is called a
deterministic finite automaton.

Thus, a DFA conceptually consists of 3 parts:

1. A tape to hold the input string. The tape is divided into a finite number of cells.
Each cell holds a symbol from ∑.
2. A tape head for reading symbols from the tape
3. A control , which itself consists of 3 things:
o a finite number of states that the machine is allowed to be in (zero or more
states are designated as accept or final states),
o a current state, initially set to a start state,
o a state transition function for changing the current state.

An automaton processes a string on the tape by repeating the following actions until the
tape head has traversed the entire string:

1. The tape head reads the current tape cell and sends the symbol s found there to the
control. Then the tape head moves to the next cell.
2. The control takes s and the current state and consults the state transition function to
get the next state, which becomes the new current state.

Once the entire string has been processed, the state in which the automaton ends up is
examined. If it is an accept state, the input string is accepted; otherwise, the string is
rejected. Summarizing all the above, we can formulate the following formal definition:

DFA (Deterministic Finite Automaton) is a quintuple M = (Q, Σ, δ, q0, F), where

1) Q is a finite set of states
2) Σ is a finite set of input symbols (the machine alphabet)
3) δ is the transition function from Q x Σ to Q, i.e.,
4) δ : Q x Σ → Q
5) q0 ∈ Q is the start state
6) F ⊆ Q is the set of final (accepting) states

This is a formal description of a DFA, but on its own it is hard to comprehend. For example, consider
the DFA whose language is the set of all strings over {0, 1} having at least one 1.

We can describe the same DFA by transition table or state transition diagram as
following:

It is easy to comprehend the transition diagram.

Explanation : We cannot reach the final state without reading at least one 1 in the input string. There
can be any number of 0's at the beginning (the self-loop on label 0 at the start state indicates this).
Similarly, there can be any number of 0's and 1's in any order at the end of the string.
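A minimal Python sketch of this particular DFA (the state names q0 and q1 are assumed for illustration; the dict delta plays the role of the transition table described below):

delta = {
    ("q0", "0"): "q0",   # self-loop on 0 at the start state
    ("q0", "1"): "q1",   # the first 1 moves to the accepting state
    ("q1", "0"): "q1",
    ("q1", "1"): "q1",
}
start, accept = "q0", {"q1"}

def accepts(w):
    state = start
    for symbol in w:
        state = delta[(state, symbol)]
    return state in accept

print(accepts("000100"))  # True  (contains a 1)
print(accepts("0000"))    # False (no 1)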

Transition table :

It is basically a tabular representation of the transition function that takes two arguments
(a state and a symbol) and returns a value (the “next state”).

• Rows correspond to states,


• Columns correspond to input symbols,
• Entries correspond to next states
• The start state is marked with an arrow
• The accept states are marked with a star (*).

(State) Transition diagram :

A state transition diagram or simply a transition diagram is a directed graph which can be
constructed as follows:

1. For each state in Q there is a node.

2. There is a directed edge from node q to node p labeled a iff δ(q, a) = p. (If
there are several input symbols that cause a transition, the edge is labeled by the
list of these symbols.)
3. There is an arrow with no source into the start state.
4. Accepting states are indicated by double circle.
6. Here is an informal description of how a DFA operates. An input to a DFA can be any
string w ∈ Σ*. Put a pointer on the start state q0. Read the input string w from left to
right, one symbol at a time, moving the pointer according to the transition function δ. If
the next symbol of w is a and the pointer is on state p, move the pointer to δ(p, a).
When the end of the input string w is encountered, the pointer is on some state, r. The
string is said to be accepted by the DFA if r ∈ F and rejected if r ∉ F. Note that there
is no formal mechanism for moving the pointer.
7. A language L is said to be regular if L = L(M) for some DFA M.

UNIT 3
Context-Free Grammar (CFG)

CFG stands for context-free grammar. It is a formal grammar which is used to generate all
possible strings in a given formal language. A context-free grammar G can be defined by
four tuples as:

1. G = (V, T, P, S)

Where,

G is the grammar, which consists of a set of production rules. It is used to generate the strings of a
language.

T is the finite set of terminal symbols. It is denoted by lower case letters.

V is the finite set of non-terminal symbols. It is denoted by capital letters.

P is the set of production rules, which are used for replacing non-terminal symbols (on the left side of
the production) in a string with other terminal or non-terminal symbols (on the right side of the
production).

S is the start symbol which is used to derive the string. We can derive a string by repeatedly
replacing a non-terminal by the right-hand side of a production until all non-terminals have been
replaced by terminal symbols.

Example 1:

Construct the CFG for the language having any number of a's over the set ∑= {a}.

Solution:

As we know the regular expression for the above language is

1. r.e. = a*

Production rule for the Regular expression is as follows:

1. S → aS rule 1
2. S → ε rule 2

Now if we want to derive a string "aaaaaa", we can start with start symbols.

1. S
2. aS
3. aaS rule 1
4. aaaS rule 1
5. aaaaS rule 1
6. aaaaaS rule 1
7. aaaaaaS rule 1
8. aaaaaaε rule 2
9. aaaaaa

The r.e. a* can generate the set of strings {ε, a, aa, aaa, .....}. We can have the null string because S is the
start symbol and rule 2 gives S → ε.
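The derivation of "aaaaaa" above can be reproduced mechanically with a short Python sketch (an illustrative transcription, not part of the notes): apply rule 1 six times, then rule 2.

sentential = "S"
steps = [sentential]
for _ in range(6):                                # rule 1: S -> aS, applied six times
    sentential = sentential.replace("S", "aS", 1)
    steps.append(sentential)
sentential = sentential.replace("S", "", 1)       # rule 2: S -> epsilon
steps.append(sentential)
print(" => ".join(steps))    # S => aS => aaS => ... => aaaaaaS => aaaaaa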

Example 2:

Construct a CFG for the regular expression (0+1)*

Solution:

The CFG can be given by,

1. Production rule (P):


2. S → 0S | 1S
3. S → ε

The rules generate all combinations of 0's and 1's from the start symbol, since (0+1)* denotes {ε, 0, 1,
01, 10, 00, 11, ....}. Because ε belongs to this set, we also include the rule S → ε.

Example 3:

Construct a CFG for the language L = {wcw^R | w ∈ {a, b}*}.

Solution:

The string that can be generated for a given language is {aacaa, bcb, abcba, bacab, abbcbba,.......}

The grammar could be:

1. S → aSa rule 1

2. S → bSb rule 2
3. S → c rule 3

Now if we want to derive a string "abbcbba", we can start with start symbols.

1. S → aSa
2. S → abSba from rule 2
3. S → abbSbba from rule 2
4. S → abbcbba from rule 3

Thus any of this kind of string can be derived from the given production rules.

Example 4:

Construct a CFG for the language L = {a^n b^(2n) | n >= 1}.

Solution:

The string that can be generated for a given language is {abb, aabbbb, aaabbbbbb......}.

The grammar could be:

1. S → aSbb | abb

Now if we want to derive a string "aabbbb", we can start with start symbols.

1. S ⇒ aSbb (using S → aSbb)
2. ⇒ aabbbb (using S → abb)

Derivation Trees

A derivation is a sequence of production-rule applications. It is used to obtain the input string through these
production rules. During parsing, we have to take two decisions. These are as follows:

 We have to decide the non-terminal which is to be replaced.


 We have to decide the production rule by which the non-terminal will be replaced.

We have two options for deciding which non-terminal is to be replaced with a production rule.

1. Leftmost Derivation:
In the leftmost derivation, at each step the leftmost non-terminal in the sentential form is replaced by the
right-hand side of a production. So in leftmost derivation, the string is rewritten from the left.

Example:

Production rules:

1. E = E + E
2. E = E - E
3. E = a | b

Input

1. a - b + a

The leftmost derivation is:

1. E=E+E
2. E=E-E+E
3. E=a-E+E
4. E=a-b+E
5. E=a-b+a

2. Rightmost Derivation:
In rightmost derivation, at each step the rightmost non-terminal in the sentential form is replaced by the
right-hand side of a production. So in rightmost derivation, the string is rewritten from the right.

Example

Production rules:

1. E = E + E
2. E = E - E
3. E = a | b

Input

1. a - b + a

The rightmost derivation is:

1. E=E-E
2. E=E-E+E
3. E=E-E+a
4. E=E-b+a
5. E=a-b+a

Whether we use the leftmost derivation or the rightmost derivation, we get the same string. The
choice of derivation order does not affect which strings can be derived.
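The two derivations above can be reproduced mechanically by the following Python sketch (replace is an illustrative helper that rewrites either the leftmost or the rightmost E):

def replace(form, prod, leftmost=True):
    i = form.index("E") if leftmost else form.rindex("E")
    return form[:i] + prod + form[i + 1:]

form = "E"
for prod in ["E+E", "E-E", "a", "b", "a"]:        # leftmost derivation of a-b+a
    form = replace(form, prod, leftmost=True)
    print(form)

form = "E"
for prod in ["E-E", "E+E", "a", "b", "a"]:        # rightmost derivation of a-b+a
    form = replace(form, prod, leftmost=False)
    print(form)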

Sentential Forms
A string α of terminals and variables is called a sentential form if S ⇒* α.

Examples of Derivation:

Example 1:

Derive the string "abb" for leftmost derivation and rightmost derivation using a CFG given by,

1. S → AB | ε
2. A → aB
3. B → Sb

Solution:

Leftmost derivation:

Rightmost derivation:

Example 2:

Derive the string "aabbabba" for leftmost derivation and rightmost derivation using a CFG given by,

1. S → aB | bA
2. A → a | aS | bAA
3. B → b | bS | aBB

Solution:

Leftmost derivation:

1. S
2. aB S → aB
3. aaBB B → aBB
4. aabB B→b
5. aabbS B → bS
6. aabbaB S → aB
7. aabbabS B → bS
8. aabbabbA S → bA
9. aabbabba A→a

Rightmost derivation:

1. S
2. aB S → aB
3. aaBB B → aBB
4. aaBbS B → bS
5. aaBbbA S → bA
6. aaBbba A→a

7. aabSbba B → bS
8. aabbAbba S → bA
9. aabbabba A→a

Example 3:

Derive the string "00101" for leftmost derivation and rightmost derivation using a CFG given by,

1. S → A1B
2. A → 0A | ε
3. B → 0B | 1B | ε

Solution:

Leftmost derivation:

1. S
2. A1B
3. 0A1B
4. 00A1B
5. 001B
6. 0010B
7. 00101B
8. 00101

Rightmost derivation:

1. S
2. A1B
3. A10B
4. A101B
5. A101
6. 0A101
7. 00A101
8. 00101

A derivation tree is a graphical representation of the derivation of a string using the production rules of a
given CFG. It is a simple way to show how the derivation can be done to obtain some string from
a given set of production rules. The derivation tree is also called a parse tree.

A parse tree follows the precedence of operators. The deepest sub-tree is traversed first, so the
operator in the parent node has lower precedence than the operator in the sub-tree.

A parse tree contains the following properties:

1. The root node is always a node indicating the start symbol.

2. The derivation is read from left to right.
3. The leaf nodes are always terminal nodes.
4. The interior nodes are always the non-terminal nodes.

Example 1:

Production rules:

1. E = E + E
2. E = E * E
3. E = a | b | c

Input

1. a * b + c

Steps 1-5: the derivation tree is built up step by step (figures omitted).

Note: We can draw a derivation tree step by step or directly in one step.

Example 2:

Draw a derivation tree for the string "bab" from the CFG given by

1. S → bSb | a | b

Solution:

Now, the derivation tree for the string "bbabb" is as follows:

The above tree is a derivation tree drawn for deriving the string bbabb. By simply reading the leaf
nodes from left to right, we can obtain the desired string. The same tree can also be denoted by:
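One compact way to see this, as a Python sketch (the nested-tuple encoding of the tree is an assumed representation, not notation from the notes):

tree = ("S", ["b", ("S", ["b", ("S", ["a"]), "b"]), "b"])   # S -> bSb -> bbSbb -> bbabb

def leaves(node):
    if isinstance(node, str):                  # a leaf holds a terminal symbol
        return node
    _, children = node
    return "".join(leaves(c) for c in children)

print(leaves(tree))   # bbabb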

Example 3:

Construct a derivation tree for the string aabbabba for the CFG given by,

1. S → aB | bA
2. A → a | aS | bAA
3. B → b | bS | aBB

Solution:

To draw a tree, we will first try to obtain derivation for the string aabbabba

Now, the derivation tree is as follows:

Example 4:

Show the derivation tree for string "aabbbb" with the following grammar.

1. S → AB | ε
2. A → aB
3. B → Sb

Solution:

To draw a tree we will first try to obtain derivation for the string aabbbb

Now, the derivation tree for the string "aabbbb" is as follows:

Ambiguity in Grammar

A grammar is said to be ambiguous if there exists more than one leftmost derivation, more than
one rightmost derivation, or more than one parse tree for some input string. If the grammar is not
ambiguous, then it is called unambiguous.

If a grammar is ambiguous, it is not good for compiler construction. No general method can
automatically detect and remove ambiguity, but we can often remove ambiguity by rewriting the
whole grammar without ambiguity.

Example 1:

Let us consider a grammar G with the production rule

1. E→I
2. E→E+E
3. E→E*E
4. E → (E)
5. I → ε | 0 | 1 | 2 | ... | 9

Solution:

For the string "3 * 2 + 5", the above grammar can generate two parse trees by leftmost derivation:

Since there are two parse trees for a single string "3 * 2 + 5", the grammar G is ambiguous.

Example 2:

Check whether the given grammar G is ambiguous or not.

1. E → E + E
2. E → E - E
3. E → id

Solution:

From the above grammar, the string "id + id - id" can be derived in 2 ways:

First Leftmost derivation

1. E → E + E
2. → id + E
3. → id + E - E
4. → id + id - E
5. → id + id- id

Second Leftmost derivation

1. E → E - E
2. →E+E-E
3. → id + E - E
4. → id + id - E
5. → id + id - id

Since there are two leftmost derivations for a single string "id + id - id", the grammar G is
ambiguous.

Example 3:

Check whether the given grammar G is ambiguous or not.

1. S → aSb | SS
2. S → ε

Solution:

For the string "aabb" the above grammar can generate two parse trees

Since there are two parse trees for a single string "aabb", the grammar G is ambiguous.

Example 4:

Check whether the given grammar G is ambiguous or not.

1. A → AA
2. A → (A)
3. A → a

Solution:

For the string "a(a)aa" the above grammar can generate two parse trees:

Since there are two parse trees for a single string "a(a)aa", the grammar G is ambiguous.

Unambiguous Grammar

A grammar is unambiguous if it does not contain ambiguity, that is, if it does not have more than one
leftmost derivation, more than one rightmost derivation, or more than one parse tree for any given
input string.

To convert ambiguous grammar to unambiguous grammar, we will apply the following rules:

1. If the left associative operators (+, -, *, /) are used in the production rule, then apply left
recursion in the production rule. Left recursion means that the leftmost symbol on the right side is
the same as the non-terminal on the left side. For example,

1. X → Xa

2. If the right associative operator (^) is used in the production rule, then apply right recursion in the
production rule. Right recursion means that the rightmost symbol on the right side is the same as the
non-terminal on the left side. For example,

1. X → aX

Example 1:

Consider a grammar G is given as follows:

1. S → AB | aaB
2. A → a | Aa
3. B → b

Determine whether the grammar G is ambiguous or not. If G is ambiguous, construct an


unambiguous grammar equivalent to G.

Solution:

Let us derive the string "aab"

As there are two different parse trees for deriving the same string, the given grammar is ambiguous.

Unambiguous grammar will be:

1. S → AB
2. A → Aa | a
3. B → b

Example 2:

Show that the given grammar is ambiguous. Also, find an equivalent unambiguous grammar.

1. S → ABA
2. A → aA | ε
3. B → bB | ε

Solution:

The given grammar is ambiguous because we can derive two different parse trees for the string aa.

The unambiguous grammar is:

1. S → aXY | bYZ | ε
2. Z → aZ | a
3. X → aXY | a | ε
4. Y → bYZ | b | ε

Example 3:

Show that the given grammar is ambiguous. Also, find an equivalent unambiguous grammar.

1. E → E + E
2. E → E * E
3. E → id

Solution:

Let us derive the string "id + id * id"

As there are two different parse trees for deriving the same string, the given grammar is ambiguous.

Unambiguous grammar will be:

1. E→E+T
2. E→T
3. T→T*F
4. T→F
5. F → id

Example 4:

Check that the given grammar is ambiguous or not. Also, find an equivalent unambiguous grammar.

1. S → S + S
2. S → S * S
3. S → S ^ S
4. S → a

Solution:

The given grammar is ambiguous because a string such as "a + a * a" can be derived by two different
parse trees:

Unambiguous grammar will be:

1. S→S+A|A
2. A→A*B|B
3. B→C^B|C
4. C→a

Pumping Lemma for CFG

Lemma

If L is a context-free language, there is a pumping length p such that any string w ∈ L of length ≥ p
can be written as w = uvxyz, where vy ≠ ε, |vxy| ≤ p, and for all i ≥ 0, u v^i x y^i z ∈ L.

Applications of Pumping Lemma

The pumping lemma is used to show that a language is not context free. Let us take an example
and show how this is done.

Problem

Find out whether the language L = {0^n 1^n 2^n | n ≥ 1} is context free or not.

Solution

Assume L is context free. Then L must satisfy the pumping lemma.

Let n be the pumping-lemma constant, and take z = 0^n 1^n 2^n.

Break z into uvwxy, where

|vwx| ≤ n and vx ≠ ε.

Hence vwx cannot involve both 0s and 2s, since the last 0 and the first 2 are at least (n+1) positions
apart. There are two cases −

Case 1 − vwx has no 2s. Then vx has only 0s and 1s. Then uwy, which would have to be in L, has n
2s, but fewer than n 0s or 1s.

Case 2 − vwx has no 0s. Then uwy has n 0s, but fewer than n 1s or 2s.

In either case uwy ∉ L, so a contradiction occurs.

Hence, L is not a context-free language.
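For a small fixed n, the case analysis above can even be checked by brute force. The following Python sketch (an illustration, assuming a simple membership test in_L) verifies that every legal decomposition of z = 0^n 1^n 2^n pumps out of L when i = 2:

def in_L(s):
    n = len(s) // 3
    return n >= 1 and s == "0"*n + "1"*n + "2"*n   # membership in {0^n 1^n 2^n}

def every_split_pumps_out(z, n, i=2):
    for a in range(len(z) + 1):
        for b in range(a, min(a + n, len(z)) + 1):      # vwx = z[a:b], so |vwx| <= n
            for c in range(a, b + 1):
                for d in range(c, b + 1):
                    u, v, w, x, y = z[:a], z[a:c], z[c:d], z[d:b], z[b:]
                    if v + x == "":
                        continue                        # need vx != epsilon
                    if in_L(u + v*i + w + x*i + y):
                        return False                    # this split survives pumping
    return True

n = 3
print(every_split_pumps_out("0"*n + "1"*n + "2"*n, n))  # True: no split survives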

Enumeration of Properties of CFL

Context-free languages are closed under −

 Union
 Concatenation
 Kleene Star operation

Union

Let L1 and L2 be two context free languages. Then L1 ∪ L2 is also context free.

Example

Let L1 = { a^n b^n | n > 0}. The corresponding grammar G1 will have P: S1 → aS1b | ab

Let L2 = { c^m d^m | m ≥ 0}. The corresponding grammar G2 will have P: S2 → cS2d | ε

Union of L1 and L2: L = L1 ∪ L2 = { a^n b^n } ∪ { c^m d^m }

The corresponding grammar G will have the additional production S → S1 | S2

Concatenation

If L1 and L2 are context free languages, then L1L2 is also context free.

Example

Concatenation of the languages L1 and L2: L = L1L2 = { a^n b^n c^m d^m }

The corresponding grammar G will have the additional production S → S1 S2

Kleene Star

If L is a context free language, then L* is also context free.

Example

Let L = { a^n b^n | n ≥ 0}. The corresponding grammar G will have P: S → aSb | ε

Kleene Star: L1 = { a^n b^n }*

The corresponding grammar G1 will have the additional productions S1 → SS1 | ε

Context-free languages are not closed under −

 Intersection − If L1 and L2 are context free languages, then L1 ∩ L2 is not


necessarily context free.
 Intersection with Regular Language − If L1 is a regular language and L2 is a context
free language, then L1 ∩ L2 is a context free language.
 Complement − If L1 is a context free language, then L1’ may not be context free.

Pushdown Automata(PDA)

 Pushdown automata are a way to implement a CFG in the same way we design a DFA for a regular
grammar. A DFA can remember a finite amount of information, but a PDA can remember an unbounded
amount of information.
 A pushdown automaton is simply an NFA augmented with an "external stack memory". The addition of
the stack provides a last-in-first-out memory management capability to the pushdown automaton. A
pushdown automaton can store an unbounded amount of information on the stack, but it can directly
access only a limited amount of information at the top of the stack. A PDA can push an element onto the
top of the stack and pop an element off the top of the stack. To read an element deeper in the stack, the
elements above it must be popped off and are lost.
 A PDA is more powerful than an FA. Any language which can be accepted by an FA can also be
accepted by a PDA. A PDA also accepts a class of languages which cannot be accepted by an FA.
Thus the PDA is strictly more powerful than the FA.

PDA Components:

Input tape: The input tape is divided into many cells or symbols. The input head is read-only and may
only move from left to right, one symbol at a time.

Finite control: The finite control has a pointer which points to the current symbol which is to be
read.

Stack: The stack is a structure in which we can push and remove the items from one end only. It has
an infinite size. In PDA, the stack is used to store the items temporarily.

Formal definition of PDA:

The PDA can be defined as a collection of 7 components:

Q: the finite set of states

∑: the input alphabet

Γ: the set of stack symbols which can be pushed onto and popped from the stack

q0: the initial state

Z: the start stack symbol, which is in Γ

F: the set of final states

δ: the mapping (transition) function which is used for moving from the current state to the next state.

Instantaneous Description (ID)

An ID is an informal notation describing how a PDA processes an input string and decides whether the
string is accepted or rejected.

An instantaneous description is a triple (q, w, α) where:

q describes the current state.

w describes the remaining input.

α describes the stack contents, top at the left.

Turnstile Notation:

⊢ sign describes the turnstile notation and represents one move.

⊢* sign describes a sequence of moves.

For example,

(p, b, T) ⊢ (q, w, α)

In the above example, while taking a transition from state p to state q, the input symbol 'b' is consumed,
and the top of the stack 'T' is replaced by a new string α.

Acceptance of CFL
Example 1:

Design a PDA for accepting the language {a^n b^(2n) | n >= 1}.

Solution: In this language, n number of a's should be followed by 2n number of b's. Hence, we will
apply a very simple logic, and that is if we read single 'a', we will push two a's onto the stack. As
soon as we read 'b' then for every single 'b' only one 'a' should get popped from the stack.

The ID can be constructed as follows:

1. δ(q0, a, Z) = (q0, aaZ)


2. δ(q0, a, a) = (q0, aaa)

Now when we read b, we will change the state from q0 to q1 and start popping corresponding 'a'.
Hence,

1. δ(q0, b, a) = (q1, ε)

Thus this process of popping one 'a' for each 'b' will be repeated until all the symbols are read. Note that
the popping action occurs in state q1 only.

1. δ(q1, b, a) = (q1, ε)

After reading all b's, all the corresponding a's should have been popped off. Hence when we read ε as the
input symbol, only Z should remain on the stack. Hence the move will be:

1. δ(q1, ε, Z) = (q2, ε)

Where

PDA = ({q0, q1, q2}, {a, b}, {a, Z}, δ, q0, Z, {q2})

We can summarize the ID as:

1. δ(q0, a, Z) = (q0, aaZ)


2. δ(q0, a, a) = (q0, aaa)
3. δ(q0, b, a) = (q1, ε)

4. δ(q1, b, a) = (q1, ε)
5. δ(q1, ε, Z) = (q2, ε)

Now we will simulate this PDA for the input string "aaabbbbbb".

1. δ(q0, aaabbbbbb, Z) ⊢ δ(q0, aabbbbbb, aaZ)
2. ⊢ δ(q0, abbbbbb, aaaaZ)
3. ⊢ δ(q0, bbbbbb, aaaaaaZ)
4. ⊢ δ(q1, bbbbb, aaaaaZ)
5. ⊢ δ(q1, bbbb, aaaaZ)
6. ⊢ δ(q1, bbb, aaaZ)
7. ⊢ δ(q1, bb, aaZ)
8. ⊢ δ(q1, b, aZ)
9. ⊢ δ(q1, ε, Z)
10. ⊢ δ(q2, ε)
11. ACCEPT
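The whole machine can be checked with a minimal Python sketch that simulates the five transition rules above (the function name accepts is illustrative):

def accepts(w):
    state, stack = "q0", ["Z"]              # Z marks the bottom of the stack
    for sym in w:
        if not stack:
            return False
        top = stack.pop()
        if state == "q0" and sym == "a" and top in ("Z", "a"):
            stack += [top, "a", "a"]        # rules 1 and 2: push two a's per a read
        elif state == "q0" and sym == "b" and top == "a":
            state = "q1"                    # rule 3: first b, pop one a
        elif state == "q1" and sym == "b" and top == "a":
            pass                            # rule 4: pop one a per b
        else:
            return False                    # no applicable move: reject
    # rule 5: epsilon-move to the accepting state q2 when only Z remains
    return state == "q1" and stack == ["Z"]

print(accepts("aaabbbbbb"))   # True
print(accepts("aabbb"))       # False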

Example 2:

Design a PDA for accepting the language {0^n 1^m 0^n | m, n >= 1}.

Solution: In this PDA, n number of 0's are followed by any number of 1's, followed by n number of 0's.
Hence the logic for the design of such a PDA will be as follows:

Push all 0's onto the stack on encountering first 0's. Then if we read 1, just do nothing. Then read 0,
and on each read of 0, pop one 0 from the stack.

For instance:

This scenario can be written in the ID form as:

1. δ(q0, 0, Z) = (q0, 0Z)
2. δ(q0, 0, 0) = (q0, 00)
3. δ(q0, 1, 0) = (q1, 0)
4. δ(q1, 1, 0) = (q1, 0)
5. δ(q1, 0, 0) = (q1, ε)
6. δ(q1, ε, Z) = (q2, Z) (ACCEPT state)

Now we will simulate this PDA for the input string "0011100".

1. δ(q0, 0011100, Z) ⊢ δ(q0, 011100, 0Z)
2. ⊢ δ(q0, 11100, 00Z)
3. ⊢ δ(q1, 1100, 00Z)
4. ⊢ δ(q1, 100, 00Z)
5. ⊢ δ(q1, 00, 00Z)
6. ⊢ δ(q1, 0, 0Z)
7. ⊢ δ(q1, ε, Z)
8. ⊢ δ(q2, Z)
9. ACCEPT

PDA Acceptance

A language can be accepted by Pushdown automata using two approaches:

1. Acceptance by Final State: The PDA is said to accept its input by the final state if it enters
any final state in zero or more moves after reading the entire input.

Let P =(Q, ∑, Γ, δ, q0, Z, F) be a PDA. The language acceptable by the final state can be defined as:

1. L(PDA) = {w | (q0, w, Z) ⊢* (q, ε, α), q ∈ F}

2. Acceptance by Empty Stack: On reading the input string from the initial configuration for
some PDA, the stack of PDA gets empty.

Let P =(Q, ∑, Γ, δ, q0, Z, F) be a PDA. The language acceptable by empty stack can be defined as:

1. N(PDA) = {w | (q0, w, Z) ⊢* (q, ε, ε), q ∈ Q}

Equivalence of Acceptance by Final State and Empty Stack

 If L = N(P1) for some PDA P1, then there is a PDA P2 such that L = L(P2). That means the
language accepted by empty stack PDA will also be accepted by final state PDA.
 If there is a language L = L (P1) for some PDA P1 then there is a PDA P2 such that L = N(P2).
That means language accepted by final state PDA is also acceptable by empty stack PDA.

Example:

Construct a PDA that accepts, by empty stack, the language L over {0, 1} consisting of all the
strings of 0's and 1's in which the number of 0's is twice the number of 1's.

Solution:

There are two parts for designing this PDA:

 If 1 comes before any 0's


 If 0 comes before any 1's.

We are going to design the first part i.e. 1 comes before 0's. The logic is that read single 1 and push
two 1's onto the stack. Thereafter on reading two 0's, POP two 1's from the stack. The δ can be

1. δ(q0, 1, Z) = (q0, 11Z) (Here Z represents that the stack is empty)


2. δ(q0, 0, 1) = (q0, ε)

Now, consider the second part, i.e. if 0 comes before the 1's. The logic is: read the first 0, push it onto
the stack and change state from q0 to q1. [Note that state q1 indicates that the first 0 has been read and
the second 0 has yet to be read.]

Being in q1, if 1 is encountered then POP 0. Being in q1, if 0 is read then simply read that second 0
and move ahead. The δ will be:

1. δ(q0, 0, Z) = (q1, 0Z)


2. δ(q1, 0, 0) = (q1, 0)
3. δ(q1, 0, Z) = (q0, ε) (indicate that one 0 and one 1 is already read, so simply read the second 0)
4. δ(q1, 1, 0) = (q1, ε)

Now, summarize the complete PDA for given L is:

1. δ(q0, 1, Z) = (q0, 11Z)
2. δ(q0, 0, 1) = (q0, ε)
3. δ(q0, 0, Z) = (q1, 0Z)
4. δ(q1, 0, 0) = (q1, 0)
5. δ(q1, 0, Z) = (q0, ε)
6. δ(q1, 1, 0) = (q1, ε)
7. δ(q0, ε, Z) = (q0, ε) (ACCEPT state)

Non-deterministic Pushdown Automata

The non-deterministic pushdown automaton is very much similar to an NFA. We will discuss some
CFLs which are accepted by an NPDA.

Every CFL that is accepted by a deterministic PDA is also accepted by a non-deterministic PDA. However,
there are some CFLs which can be accepted only by an NPDA and not by a DPDA. Thus the NPDA is more
powerful than the DPDA.

Example:

Design a PDA for palindrome strings.

Solution:

Suppose the language consists of the strings L = {aba, aa, bb, bab, bbabb, aabaa, ......}. A string can be an
odd palindrome or an even palindrome. The logic for constructing the PDA is that we push each symbol
onto the stack until half of the string has been read; then we read each remaining symbol, perform a pop
operation, and compare the popped symbol with the symbol just read. When we reach the end of the input,
we expect the stack to be empty.

This PDA is a non-deterministic PDA because guessing the middle of the given string, and then matching
the second half of the string against the first half in reverse, leads to non-deterministic moves.
Here is the ID.

Simulation of abaaba

1. δ(q1, abaaba, Z) ⊢ δ(q1, baaba, aZ) Apply rule 1
2. ⊢ δ(q1, aaba, baZ) Apply rule 5
3. ⊢ δ(q1, aba, abaZ) Apply rule 4
4. ⊢ δ(q2, ba, baZ) Apply rule 7
5. ⊢ δ(q2, a, aZ) Apply rule 8
6. ⊢ δ(q2, ε, Z) Apply rule 7
7. ⊢ δ(q2, ε) Apply rule 11
8. ACCEPT

Equivalence of CFL and PDA

In Greibach Normal Form (GNF), the first symbol on the R.H.S. of every production must be a terminal
symbol. The following steps are used to obtain a PDA from a CFG:

Step 1: Convert the given productions of CFG into GNF.

Step 2: The PDA will only have one state {q}.

Step 3: The initial symbol of CFG will be the initial symbol in the PDA.

Step 4: For each non-terminal symbol, add the following rule:

1. δ(q, ε, A) = (q, α)

Where the production rule is A → α

Step 5: For each terminal symbol, add the following rule:

1. δ(q, a, a) = (q, ε) for every terminal symbol
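Steps 2-5 can be carried out mechanically, as in the following Python sketch (the dict representation of the grammar and the name cfg_to_pda are assumptions for illustration):

def cfg_to_pda(productions, terminals):
    # productions: dict mapping a non-terminal A to its GNF right-hand sides
    delta = []
    for A, rhss in productions.items():
        for alpha in rhss:
            delta.append((("q", "ε", A), ("q", alpha)))   # Step 4: δ(q, ε, A) = (q, α)
    for a in terminals:
        delta.append((("q", a, a), ("q", "ε")))           # Step 5: δ(q, a, a) = (q, ε)
    return delta

# The grammar of Example 2 below: S → 0BB, B → 0S | 1S | 0
for rule in cfg_to_pda({"S": ["0BB"], "B": ["0S", "1S", "0"]}, terminals="01"):
    print(rule)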

InterConversion

Example 1:

Convert the following grammar to a PDA that accepts the same language.

1. S → 0S1 | A
2. A → 1A0 | S | ε

Solution:

The CFG can be first simplified by eliminating unit productions:

1. S → 0S1 | 1S0 | ε

Now we will convert this CFG to GNF:

1. S → 0SX | 1SY | ε
2. X → 1
3. Y → 0

The PDA can be:

R1: δ(q, ε, S) = {(q, 0SX) | (q, 1SY) | (q, ε)}


R2: δ(q, ε, X) = {(q, 1)}
R3: δ(q, ε, Y) = {(q, 0)}
R4: δ(q, 0, 0) = {(q, ε)}
R5: δ(q, 1, 1) = {(q, ε)}

Example 2:

Construct a PDA for the given CFG, and test whether 010^4 is acceptable by this PDA.

1. S → 0BB
2. B → 0S | 1S | 0

Solution:

The PDA can be given as:

1. A = ({q}, {0, 1}, {S, B, 0, 1}, δ, q, S, Φ)

The production rule δ can be:

R1: δ(q, ε, S) = {(q, 0BB)}


R2: δ(q, ε, B) = {(q, 0S) | (q, 1S) | (q, 0)}
R3: δ(q, 0, 0) = {(q, ε)}
R4: δ(q, 1, 1) = {(q, ε)}

Testing 010^4, i.e. 010000, against the PDA:

1. δ(q, 010000, S) ⊢ δ(q, 010000, 0BB) R1
2. ⊢ δ(q, 10000, BB) R3
3. ⊢ δ(q, 10000, 1SB) R2
4. ⊢ δ(q, 0000, SB) R4
5. ⊢ δ(q, 0000, 0BBB) R1
6. ⊢ δ(q, 000, BBB) R3
7. ⊢ δ(q, 000, 0BB) R2
8. ⊢ δ(q, 00, BB) R3
9. ⊢ δ(q, 00, 0B) R2
10. ⊢ δ(q, 0, B) R3
11. ⊢ δ(q, 0, 0) R2
12. ⊢ δ(q, ε) R3
13. ACCEPT

Thus 010^4 is accepted by the PDA.

Example 3:

Draw a PDA for the CFG given below:

1. S → aSb
2. S → a | b | ε

Solution:

The PDA can be given as:

1. P = ({q}, {a, b}, {S, a, b, z0}, δ, q, z0, {q})

The mapping function δ will be:

R1: δ(q, ε, S) = {(q, aSb)}


R2: δ(q, ε, S) = {(q, a) | (q, b) | (q, ε)}
R3: δ(q, a, a) = {(q, ε)}
R4: δ(q, b, b) = {(q, ε)}
R5: δ(q, ε, z0) = {(q, ε)}

Simulation: Consider the string aaabb

1. δ(q, aaabb, Sz0) ⊢ δ(q, aaabb, aSbz0) R1
2. ⊢ δ(q, aabb, Sbz0) R3
3. ⊢ δ(q, aabb, aSbbz0) R1
4. ⊢ δ(q, abb, Sbbz0) R3
5. ⊢ δ(q, abb, abbz0) R2
6. ⊢ δ(q, bb, bbz0) R3
7. ⊢ δ(q, b, bz0) R4
8. ⊢ δ(q, ε, z0) R4
9. ⊢ δ(q, ε) R5
10. ACCEPT

Introduction to DCFL and DPDA

A DPDA is a PDA in which no state p has two different outgoing transitions


((p,x,α),(q,β)) and ((p,x′,α′),(q′,β′))
which are compatible in the sense that both could be applied. A DCFL is basically a language
which is accepted by a DPDA, but we need to qualify this further.
We want to argue that the language L = { w ∈ {a,b}* : #a(w) = #b(w)} is deterministic
context free in the sense that there is a DPDA which accepts it.
In the PDA, the only non-determinism is the issue of guessing the end of input; however this form
of non-determinism is considered artificial. When one considers whether a language L supports a
DPDA or not, a dedicated end-of-input symbol is always added to strings in the language.
Formally, a language L over Σ is deterministic context free, or L is a DCFL , if
L$ is accepted by a DPDA M
where $ is a dedicated symbol not belonging to Σ. The significance is that we can make intelligent
usage of the knowledge of the end of input to decide what to do about the stack. In our case, we
would simply replace the transition into the final state by:

With this change, our PDA is now a DPDA:

If L(A) is a language accepted by a PDA A, it can also be accepted by a DPDA if and only if there is a
single computation from the initial configuration to an accepting one for every string belonging to L(A).
If L(A) can be accepted by a PDA it is a context-free language, and if it can be accepted by a DPDA it
is a deterministic context-free language (DCFL).

Not all context-free languages are deterministic. This makes the DPDA a strictly weaker
device than the PDA.

UNIT-I AUTOMATA

PART-A(2-MARKS)

1 List any four ways of theorem proving.


2 Define Alphabets.
3 Write short notes on Strings.
4 What is the need for finite automata?
5 What is a finite automaton? Give two examples.
6 Define DFA.
7 Explain how DFA process strings.
8 Define transition diagram.
9 Define transition table.
10. Define the language of DFA.
11. Construct a finite automaton that accepts {0,1}+.
12. Give the DFA accepting the language over the alphabet 0,1 that have the set of
all strings ending in 00.
13. Give the DFA accepting the language over the alphabet 0,1 that have the set of
all strings with three consecutive 0’s.
14. Give the DFA accepting the language over the alphabet 0,1 that have the set of
all strings with 011 as a substring.
15. Give the DFA accepting the language over the alphabet 0,1 that have the set of
all strings whose 10th symbol from the right end is 1.
16. Give the DFA accepting the language over the alphabet 0,1 that have the set of
all strings such that each block of 5 consecutive symbol contains at least two
0’s.
17. Give the DFA accepting the language over the alphabet 0,1 that have the set of
all strings that either begins or end(or both) with 01.
18. Give the DFA accepting the language over the alphabet 0,1 that have the set of
all strings such that the no of zero’s is divisible by 5 and the no of 1’s is
divisible by 3.
19. Find the language accepted by the DFA given below.
20. Define NFA.
21. Define the language of NFA.

22. Is it true that the language accepted by any NFA is different from the regular
language? Justify your Answer.
23. Define ε-NFA.
24. Define ε closure.
25. Find the ε-closure of each state in the following automaton.
26. Define Regular expression. Give an example.
27. What are the operators of RE.
28. Write short notes on precedence of RE operators.
29. Write Regular Expression for the language that have the set of strings over
{a,b,c} containing at least one a and at least one b.

30. Write Regular Expression for the language that have the set of all strings of 0's
and 1's whose 10th symbol from the right end is 1.
31 Write Regular Expression for the language that has the set of all strings of 0's
and 1's with at most one pair of consecutive 1's.
32 Write Regular Expression for the language that have the set of all strings of 0's
and 1's such that every pair of adjacent 0's appears before any pair of adjacent 1's.
35 Write Regular Expression for the language that have the set of all strings of 0's
and 1's whose no of 0's is divisible by 5.
36 Write Regular Expression for the language that has the set of all strings of
0's and 1's not containing 101 as a substring.
37 Write Regular Expression for the language that have the set of all strings of 0's
and 1's such that no prefix has two more 0's than 1's, nor two more 1's than 0's.
38. Write Regular Expression for the language that have the set of all
strings of 0’s and 1’s whose no of 0’s is divisible by 5 and no of 1’s is
even.
39. Give English descriptions of the languages of the regular expression (1+ ε)(00*1)*0*.
40. Give English descriptions of the languages of the regular expression
(0*1*)*000(0+1)*.
41. Give English descriptions of the languages of the regular expression (0+10)*1*.
42. Convert the following RE to an ε-NFA: 01*.

43. State the pumping lemma for Regular languages.


44. What are the applications of the pumping lemma?
45. State the closure properties of Regular language.
46. Prove that if L and M are regular languages then so is LUM.
47. What do you mean by Homomorphism?
Suppose h is the homomorphism from the alphabet {0,1,2} to the alphabet
{a,b} defined by h(0)=a, h(1)=ab, h(2)=ba. What is h(0120) and h(21120)?
48. Suppose h is the homomorphism from the alphabet {0,1,2} to the alphabet
{a,b} defined by h(0)=a, h(1)=ab, h(2)=ba. If L is the language L(01*2), what is
h(L)?

49. Let R be any set of regular languages. Is ∪ Ri regular? Prove it.


50. Show that the complement of a regular language is also regular.
51. What is meant by equivalent states in a DFA?

Part B

1. a) If L is accepted by an NFA with ε-transition then show that L is accepted by


an NFA without ε-transition.
b) Construct a DFA equivalent to the NFA
M = ({p,q,r,s}, {0,1}, δ, p, {q,s})
where δ is defined in the following table.

δ 0 1
p {q,s} {q}
q {r} {q,r}
r {s} {p}
s - {p}
2. a) Show that the set L = {a^n b^n | n>=1} is not regular. (6)
b) Construct a DFA equivalent to the NFA given below: (10)

  0     1
p {p,q} p
q r     r
r s     -
s s     s
3. a) Check whether the language L = {0^n 1^n | n>=1} is regular or not. Justify your answer.

b) Let L be a set accepted by an NFA. Then show that there exists a DFA that accepts L.

4.Define NFA with ε-transition. Prove that if L is accepted by an NFA with ε-transition
then L is also accepted by a NFA without ε-transition.
5. a) Construct an NDFA accepting all strings in {a,b}+ with either two consecutive
a's or two consecutive b's.

b) Give the DFA accepting the following language: the set of all strings beginning with a 1 that,
when interpreted as a binary integer, is a multiple of 5.

6. Draw the NFA to accept the following languages.

(i) Set of Strings over alphabet {0,1,…….9} such that the final digit has
appeared before. (8)
(ii)Set of strings of 0’s and 1’s such that there are two 0’s separated by a number of
positions that is a multiple of 4.

7. a) Let L be a set accepted by an NFA. Then prove that there exists a deterministic finite
automaton that accepts L. Is the converse true? Justify your answer. (10)

b)Construct DFA equivalent to the NFA given below: (6)

8.a) Prove that a language L is accepted by some ε–NFA if and only if L is accepted by some
DFA. (8)

b) Consider the following ε–NFA. Compute the ε–closure of each state and find its equivalent
DFA. (8)

    ε    a    b    c

p  {q}  {p}   Ф    Ф

q  {r}   Ф   {q}   Ф

*r  Ф    Ф    Ф   {r}

9.a) Prove that a language L is accepted by some DFA if L is accepted by some NFA.

b) Convert the following NFA to it’s equivalent DFA

0 1
p {p,q} {p}
q {r} {r}
r {s} ф

*s {s} {s}

10. a) Explain the construction of an NFA with ε-transitions from any given regular
expression.

b) Let A = (Q, ∑, δ, q0, {qf}) be a DFA and suppose that for all a in ∑ we have δ(q0, a) = δ(qf, a).
Show that if x is a non-empty string in L(A), then for all k>0, x^k is also in L(A).

PART-B

11.a)Construct an NFA equivalent to (0+1)*(00+11)

12.a)Construct a Regular expression corresponding to the state diagram given in the


following figure.

b) Show that the set E = {0^i 1^i | i>=1} is not regular. (6)

13.a)Construct an NFA equivalent to the regular expression (0+1)*(00+11)(0+1)*.

b)Obtain the regular expression that denotes the language accepted by the following DFA.

14.a)Construct an NFA equivalent to the regular expression ((0+1)(0+1)(0+1))*

b)Construct an NFA equivalent to 10+(0+11)0*1

15. a) Obtain the regular expression denoting the language accepted by the following DFA. (8)
b) Obtain the regular expression denoting the language accepted by the following DFA by
using the formula Rij^(k).

16. a) Show that every set accepted by a DFA is denoted by a regular expression.
b) Construct an NFA equivalent to the following regular expression: 01*+1.

17. a) Define a regular set. Using the pumping lemma, show that the language L = {0^(i^2) | i is an
integer, i>=1} is not regular.

b) Construct an NFA equivalent to the regular expression 10+(0+11)0*1.


18. a) Show that the set L = {0^(n^2) | n is an integer, n>=1} is not regular.
b) Construct an NFA equivalent to the following regular expression: ((10)+(0+1))*01. (10)
9. a) Prove that if L = L(A) for some DFA A, then there is a regular expression R such that L = L(R).
b) Show that the language {0^p | p is prime} is not regular.

19. Find whether the following languages are regular or not.

(i) L = {w ∈ {a,b}* | w = w^R}.
(ii) L = {0^n 1^m 2^(n+m) | n,m>=1}
(iii) L = {1^k | k = n^2, n>=1}. (4)
(iv) L1/L2 = {x | for some y ∈ L2, xy ∈ L1}, where L1 and L2 are any two languages and
L1/L2 is the quotient of L1 and L2.
2
20. a) Find the regular expression for the set of all strings denoted by R13^(2) from the
deterministic finite automaton given below:

b)Verify whether the finite automata M1 and M2 given below are equivalent over {a,b}.

21. a) Construct the transition diagram of a finite automaton corresponding to the regular
expression (ab+c*)*b.

22. a) Find the regular expression corresponding to the finite automaton given below.

b) Find the regular expression for the set of all strings denoted by R23^(2) from the
deterministic finite automaton given below.

23. a) Find whether the languages {ww | w is in (1+0)*} and {1^k | k = n^2, n>=1} are regular
or not.
