TOC Notes
What is TOC?
In theoretical computer science, the theory of computation is the branch that deals with
whether and how efficiently problems can be solved on a model of computation, using an
algorithm. The field is divided into three major branches: automata theory, computability theory
and computational complexity theory.
In order to perform a rigorous study of computation, computer scientists work with a
mathematical abstraction of computers called a model of computation. There are several
models in use, but the most commonly examined is the Turing machine.
Automata theory
In theoretical computer science, automata theory is the study of abstract machines (or more
appropriately, abstract 'mathematical' machines or systems) and the computational problems that
can be solved using these machines. These abstract machines are called automata.
This automaton consists of
states (represented in the figure by circles),
and transitions (represented by arrows).
As the automaton sees a symbol of input, it makes a transition (or jump) to another state,
according to its transition function (which takes the current state and the recent symbol as its
inputs).
Uses of Automata: compiler design and parsing.
Additive inverse: a+(-a)=0
Multiplicative inverse: a*1/a=1
Universal set U={1,2,3,4,5}
Subset A={1,3}
A’ ={2,4,5}
Absorption law: A ∪ (A ∩ B) = A, A ∩ (A ∪ B) = A
De Morgan’s Law:
(A ∪ B)' = A' ∩ B'
(A ∩ B)' = A' ∪ B'
Double complement: (A')' = A
A ∩ A' = ∅
Logic relations:
a → b ≡ ¬a ∪ b
¬(a ∩ b) = ¬a ∪ ¬b
Relations:
Let A and B be two sets; a relation R from A to B is a subset of A × B.
Relations used in TOC:
Reflexive: aRa for every a
Symmetric: aRb => bRa
Transitive: aRb, bRc => aRc
If a given relation is reflexive, symmetric and transitive, then the relation is called an equivalence relation.
Deductive proof: Consists of a sequence of statements whose truth leads us from some initial statement, called the hypothesis or the given statement, to a conclusion statement.
Proof by contrapositive:
Proof by Contradiction:
Languages :
The languages we consider for our discussion are an abstraction of natural languages. That is, our focus here is on formal languages that need precise and formal definitions. Programming languages belong to this category.
Symbols :
Symbols are indivisible objects or entities that cannot be defined. That is, symbols are the atoms of the world of languages. A symbol is any single object such as a, 0, 1, #, begin, or do.
Alphabets :
An alphabet is a finite, non-empty set of symbols, usually denoted by ∑. A string over an alphabet is a finite sequence of symbols from that alphabet.
Example : 0110, 11, 001 are three strings over the binary alphabet { 0, 1 } .
A string over some alphabet need not contain all the symbols of that alphabet. For example, the string cc over the alphabet { a, b, c } does not contain the symbols a and b. It follows that a string over an alphabet is also a string over any superset of that alphabet.
Length of a string :
The number of symbols in a string w is called its length, denoted by |w|.
Convention : We will use lower-case letters towards the beginning of the English alphabet to denote symbols of an alphabet and lower-case letters towards the end to denote strings over an alphabet. That is, a, b, c, ... denote symbols and u, v, w, x, y, z denote strings.
Example : Consider the string 011 over the binary alphabet. All the prefixes, suffixes and substrings of this string are listed below.
Prefixes: ε, 0, 01, 011
Suffixes: ε, 1, 11, 011
Substrings: ε, 0, 1, 01, 11, 011
Note that x is a prefix (suffix or substring) of x, for any string x, and ε is a prefix (suffix or substring) of any string.
Powers of Strings : For any string x and integer n ≥ 0, we use x^n to denote the string formed by sequentially concatenating n copies of x. We can also give an inductive definition of x^n as follows: x^0 = ε; otherwise x^n = x x^(n-1).
Reversal :
For any string w = a1 a2 ... an, the reversal of the string, written w^R, is an ... a2 a1. An inductive definition of reversal can be given as follows: ε^R = ε, and (xa)^R = a x^R for any string x and symbol a.
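The string operations above are easy to try out concretely. The following is a small illustrative sketch in Python (Python strings stand in for strings over an alphabet; the helper name power is ours):

x = "011"
print(len(x))            # |x| = 3

def power(x, n):         # x^0 = epsilon, x^n = x x^(n-1)
    return "" if n == 0 else x + power(x, n - 1)

print(power(x, 3))       # 011011011
print(x[::-1])           # reversal of 011 is 110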
Languages
Defn: A language is a set of strings over an alphabet.
A more restricted definition requires some form of restriction on the strings, i.e., the language contains only those strings that satisfy certain properties.
Defn. The syntax of a language restricts the set of strings to those that satisfy certain properties.
Convention : Capital letters A, B, C, L, etc. with or without subscripts are normally used
to denote languages.
Set operations on languages : Since languages are sets of strings, we can apply the usual set operations (union, intersection, difference) to languages; a small sketch follows.
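As a simple illustration (the two finite languages below are made up for the example), Python sets can play the role of languages:

L1 = {"a", "ab", "abb"}          # a finite language over {a, b}
L2 = {"ab", "b", "bb"}

print(L1 | L2)                   # union
print(L1 & L2)                   # intersection
print(L1 - L2)                   # difference

# Concatenation L1L2 = { xy : x in L1, y in L2 } is a language operation
# (not a plain set operation) but is just as easy to express:
print({x + y for x in L1 for y in L2})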
Grammar --(generates)--> Language <--(recognizes)-- Automata
Automata: An algorithm or program that automatically recognizes whether a particular string belongs to the language or not, by checking the grammar of the string.
An automaton is an abstract computing device (or machine). There are different varieties of such abstract machines (also called models of computation) which can be defined mathematically.
The state of the control unit together with the contents of any temporary storage at any point defines the configuration of the automaton at that point. The transition from one configuration to the next (as defined by the transition function) is called a move. A finite state machine or finite automaton is the simplest type of abstract machine we consider. Any system that is at any point of time in one of a finite number of internal states, and moves among these states in a defined manner in response to some input, can be modeled by a finite automaton. It does not have any temporary storage and hence is a restricted model of computation.
Finite Automata
Let us first give some intuitive idea about a state of a system and state transitions before
describing finite automata.
Transitions are changes of states that can occur spontaneously or in response to inputs to
the states. Though transitions usually take time, we assume that state transitions are
instantaneous (which is an abstraction).
Some examples of state transition systems are: digital systems, vending machines, etc.
A system containing only a finite number of states and transitions among them is called a
finite-state transition system.
Informally, a DFA (Deterministic Finite State Automaton) is a simple machine that reads
an input string -- one symbol at a time -- and then, after the input has been completely
read, decides whether to accept or reject the input. As the symbols are read from the tape,
the automaton can change its state, to reflect how it reacts to what it has seen so far. A
machine for which a deterministic code can be formulated, i.e., one in which there is only one possible move in every situation, is called a deterministic finite automaton (DFA).
A DFA consists of:
1. A tape to hold the input string. The tape is divided into a finite number of cells.
Each cell holds a symbol from ∑.
2. A tape head for reading symbols from the tape
3. A control , which itself consists of 3 things:
o a finite number of states that the machine is allowed to be in (zero or more states are designated as accept or final states),
o a current state, initially set to a start state,
o a state transition function for changing the current state.
An automaton processes a string on the tape by repeating the following actions until the
tape head has traversed the entire string:
1. The tape head reads the current tape cell and sends the symbol s found there to the
control. Then the tape head moves to the next cell.
2. The control takes s and the current state and consults the state transition function to
get the next state, which becomes the new current state.
Once the entire string has been processed, the state in which the automaton ends up is examined. If it is an accept state, the input string is accepted; otherwise, the string is rejected. Summarizing all the above, we can formulate the following formal definition:
A deterministic finite automaton (DFA) is a 5-tuple M = (Q, ∑, δ, q0, F), where Q is a finite set of states, ∑ is a finite input alphabet, δ : Q × ∑ → Q is the transition function, q0 ∈ Q is the start state, and F ⊆ Q is the set of final (accept) states.
This is a formal description of a DFA, but it is hard to comprehend on its own. For example, consider the DFA whose language is the set of all strings over { 0, 1 } having at least one 1.
We can describe the same DFA by a transition table or a state transition diagram as follows:
Explanation : We cannot reach the final state until a 1 appears in the input string. There can be any number of 0's at the beginning (the self-loop labelled 0 on the start state indicates this). Similarly, there can be any number of 0's and 1's, in any order, after the first 1.
Transition table :
It is basically a tabular representation of the transition function that takes two arguments
(a state and a symbol) and returns a value (the “next state”).
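For the example DFA above (strings over {0, 1} containing at least one 1), the transition table can be written directly as a dictionary and the processing loop becomes a few lines of Python. This is only an illustrative sketch; the state names q0 (start) and q1 (final) are our labels for the two states of the diagram.

DELTA = {
    ("q0", "0"): "q0",   # self-loop on 0 before the first 1
    ("q0", "1"): "q1",   # the first 1 moves us to the final state
    ("q1", "0"): "q1",   # after that, any mix of 0's and 1's stays accepting
    ("q1", "1"): "q1",
}
START, FINALS = "q0", {"q1"}

def dfa_accepts(w):
    state = START
    for symbol in w:                 # read the tape one cell at a time
        state = DELTA[(state, symbol)]
    return state in FINALS           # accept iff the last state reached is final

print(dfa_accepts("0001"))   # True: contains a 1
print(dfa_accepts("0000"))   # False: no 1 at all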
(State) Transition diagram :
A state transition diagram or simply a transition diagram is a directed graph which can be constructed as follows:
1. Each state in Q corresponds to a node.
2. There is a directed edge from node p to node q labelled a if δ(p, a) = q. (If several input symbols cause the same transition, the edge is labelled by the list of those symbols.)
3. The start state is indicated by an incoming arrow with no source node.
4. Final (accept) states are indicated by double circles.
Here is an informal description of how a DFA operates. An input to a DFA can be any string w over ∑. Put a pointer on the start state q0. Read the input string w from left to right, one symbol at a time, moving the pointer according to the transition function δ: if the next symbol of w is a and the pointer is on state p, move the pointer to δ(p, a). When the end of the input string w is encountered, the pointer is on some state, r. The string is said to be accepted by the DFA if r ∈ F and rejected if r ∉ F. Note that there is no formal mechanism for moving the pointer.
A language L is said to be regular if L = L(M) for some DFA M.
UNIT 3
Context-Free Grammar (CFG)
CFG stands for context-free grammar. It is a formal grammar which is used to generate all possible patterns of strings in a given formal language. A context-free grammar G can be defined by a 4-tuple:
1. G = (V, T, P, S)
Where,
G is the grammar, which consists of a set of production rules and is used to generate the strings of a language.
V is a finite set of non-terminal symbols (variables).
T is a finite set of terminal symbols.
P is a set of production rules, which are used for replacing non-terminal symbols (on the left side of a production) in a string with other terminal or non-terminal symbols (on the right side of the production).
S is the start symbol, which is used to derive the string. We can derive a string by repeatedly replacing a non-terminal by the right-hand side of a production until all non-terminals have been replaced by terminal symbols.
Example 1:
Construct the CFG for the language having any number of a's over the set ∑= {a}.
Solution:
1. r.e. = a*
Theory of computation
Production rule for the Regular expression is as follows:
1. S → aS rule 1
2. S → ε rule 2
Now if we want to derive a string "aaaaaa", we can start with start symbols.
1. S
2. aS rule 1
3. aaS rule 1
4. aaaS rule 1
5. aaaaS rule 1
6. aaaaaS rule 1
7. aaaaaaS rule 1
8. aaaaaaε rule 2
9. aaaaaa
The r.e. = a* can generate a set of string {ε, a, aa, aaa,.....}. We can have a null string because S is a
start symbol and rule 2 gives S → ε.
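The derivation above is mechanical enough to reproduce in code. A minimal Python sketch (the helper name derive_a_n is ours) applies rule 1 n times and then rule 2:

def derive_a_n(n):
    sentential = "S"
    steps = [sentential]
    for _ in range(n):
        sentential = sentential.replace("S", "aS", 1)   # rule 1: S -> aS
        steps.append(sentential)
    sentential = sentential.replace("S", "", 1)         # rule 2: S -> epsilon
    steps.append(sentential)
    return steps

for form in derive_a_n(6):
    print(form)      # S, aS, aaS, ..., aaaaaaS, aaaaaa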
Example 2:
Construct a CFG for the language of the regular expression (0 + 1)*, i.e., for all strings over {0, 1}.
Solution:
The rules combine 0's and 1's with the start symbol:
1. S → 0S | 1S | ε
Since (0+1)* denotes {ε, 0, 1, 01, 10, 00, 11, ....} and ε belongs to this set, the rule S → ε is included.
Example 3:
Construct a CFG for the language L = { w c w^R | w ∈ {a, b}* }.
Solution:
The strings that can be generated for this language are {aacaa, bcb, abcba, bacab, abbcbba, .......}.
1. S → aSa rule 1
2. S → bSb rule 2
3. S → c rule 3
Now if we want to derive a string "abbcbba", we can start with start symbols.
1. S → aSa
2. S → abSba from rule 2
3. S → abbSbba from rule 2
4. S → abbcbba from rule 3
Thus any of this kind of string can be derived from the given production rules.
Example 4:
Construct a CFG for the language L = { a^n b^2n | n ≥ 1 }.
Solution:
The strings that can be generated for this language are {abb, aabbbb, aaabbbbbb, ......}.
1. S → aSbb | abb
Now if we want to derive a string "aabbbb", we can start with start symbols.
1. S → aSbb
2. S → aabbbb
Derivation Trees
Derivation is a sequence of applications of production rules; it is used to obtain the input string through these production rules. During parsing, we have to take two decisions: which non-terminal of the sentential form to replace, and which production rule to use for it. To decide which non-terminal to replace, we have two options:
1. Leftmost Derivation:
In the leftmost derivation, the leftmost non-terminal of the sentential form is replaced at each step; that is, the input string is derived by expanding non-terminals from left to right.
Example:
Production rules:
1. E = E + E
2. E = E - E
3. E = a | b
Input
1. a - b + a
1. E=E+E
2. E=E-E+E
3. E=a-E+E
4. E=a-b+E
5. E=a-b+a
2. Rightmost Derivation:
In the rightmost derivation, the rightmost non-terminal of the sentential form is replaced at each step; that is, the input string is derived by expanding non-terminals from right to left.
Example
Production rules:
1. E = E + E
2. E = E - E
3. E = a | b
Input
1. a - b + a
1. E=E-E
2. E=E-E+E
3. E=E-E+a
4. E=E-b+a
5. E=a-b+a
Whether we use the leftmost derivation or the rightmost derivation, we obtain the same string; the choice of derivation order does not affect which string is derived.
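The two derivation orders can also be played out in code. The sketch below applies a chosen production to the leftmost or rightmost non-terminal of a sentential form and reproduces the two derivations of a - b + a given above (the list-of-symbols representation and the helper apply are our own devices):

def apply(form, rule, leftmost=True):
    lhs, rhs = rule
    positions = [i for i, s in enumerate(form) if s == lhs]
    i = positions[0] if leftmost else positions[-1]
    return form[:i] + list(rhs) + form[i + 1:]

PLUS  = ("E", ["E", "+", "E"])
MINUS = ("E", ["E", "-", "E"])
A, B  = ("E", ["a"]), ("E", ["b"])

form = ["E"]                                # leftmost derivation of a - b + a
for rule in (PLUS, MINUS, A, B, A):
    form = apply(form, rule, leftmost=True)
    print("".join(form))                    # E+E, E-E+E, a-E+E, a-b+E, a-b+a

form = ["E"]                                # rightmost derivation of the same string
for rule in (MINUS, PLUS, A, B, A):
    form = apply(form, rule, leftmost=False)
    print("".join(form))                    # E-E, E-E+E, E-E+a, E-b+a, a-b+a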
Sentential Forms
A string α of terminals and variables is called a sentential form if S ⇒* α, i.e., if α can be derived from the start symbol S in zero or more steps.
Examples of Derivation:
Example 1:
Derive the string "abb" for leftmost derivation and rightmost derivation using a CFG given by,
1. S → AB | ε
2. A → aB
3. B → Sb
Solution:
Leftmost derivation:
1. S
2. AB          S → AB
3. aBB         A → aB
4. aSbB        B → Sb
5. abB         S → ε
6. abSb        B → Sb
7. abb         S → ε
Rightmost derivation:
1. S
2. AB          S → AB
3. ASb         B → Sb
4. Ab          S → ε
5. aBb         A → aB
6. aSbb        B → Sb
7. abb         S → ε
Example 2:
Derive the string "aabbabba" for leftmost derivation and rightmost derivation using a CFG given by,
1. S → aB | bA
2. A → a | aS | bAA
3. B → b | bS | aBB
Solution:
Leftmost derivation:
1. S
2. aB S → aB
3. aaBB B → aBB
4. aabB B→b
5. aabbS B → bS
6. aabbaB S → aB
7. aabbabS B → bS
8. aabbabbA S → bA
9. aabbabba A→a
Rightmost derivation:
1. S
2. aB S → aB
3. aaBB B → aBB
4. aaBbS B → bS
5. aaBbbA S → bA
6. aaBbba A→a
7. aabSbba B → bS
8. aabbAbba S → bA
9. aabbabba A→a
Example 3:
Derive the string "00101" for leftmost derivation and rightmost derivation using a CFG given by,
1. S → A1B
2. A → 0A | ε
3. B → 0B | 1B | ε
Solution:
Leftmost derivation:
1. S
2. A1B
3. 0A1B
4. 00A1B
5. 001B
6. 0010B
7. 00101B
8. 00101
Rightmost derivation:
1. S
2. A1B
3. A10B
4. A101B
5. A101
6. 0A101
7. 00A101
8. 00101
Derivation tree is a graphical representation for the derivation of the given production rules for a
given CFG. It is the simple way to show how the derivation can be done to obtain some string from
a given set of production rules. The derivation tree is also called a parse tree.
A parse tree follows the precedence of operators: the deepest sub-tree is traversed first, so the operator in a parent node has lower precedence than the operator in its sub-tree.
Properties of a derivation tree:
1. The root node is always labelled with the start symbol.
2. The derivation is read from left to right.
3. The leaf nodes are always terminal nodes.
4. The interior nodes are always non-terminal nodes.
Example 1:
Production rules:
1. E = E + E
2. E = E * E
3. E = a | b | c
Input
1. a * b + c
Step 1:
Step 2:
Step 3:
Step 4:
Step 5:
Note: We can draw a derivation tree step by step or directly in one step.
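A derivation tree can also be written down as a nested structure and its yield (the string at the leaves) read off programmatically. A minimal sketch for the tree of a * b + c from Example 1, with * grouped below + as described above (the tuple encoding is ours):

tree = ("E",
        ("E", ("E", "a"), "*", ("E", "b")),
        "+",
        ("E", "c"))

def leaves(node):
    if isinstance(node, str):              # a terminal leaf
        return [node]
    result = []
    for child in node[1:]:                 # node[0] is the non-terminal label
        result += leaves(child)
    return result

print("".join(leaves(tree)))   # a*b+c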
Example 2:
Draw a derivation tree for the string "bab" from the CFG given by
1. S → bSb | a | b
Solution:
The above tree is a derivation tree drawn for deriving a string bbabb. By simply reading the leaf
nodes, we can obtain the desired string. The same tree can also be denoted by,
Theory of computation
Example 3:
Construct a derivation tree for the string aabbabba for the CFG given by,
1. S → aB | bA
2. A → a | aS | bAA
3. B → b | bS | aBB
Solution:
To draw a tree, we will first try to obtain derivation for the string aabbabba
Example 4:
Show the derivation tree for string "aabbbb" with the following grammar.
1. S → AB | ε
2. A → aB
3. B → Sb
Solution:
To draw a tree we will first try to obtain derivation for the string aabbbb
Now, the derivation tree for the string "aabbbb" is as follows:
Ambiguity in Grammar
A grammar is said to be ambiguous if there exists more than one leftmost derivation or more than
one rightmost derivation or more than one parse tree for the given input string. If the grammar is not
ambiguous, then it is called unambiguous.
If the grammar has ambiguity, then it is not good for compiler construction. No general method can automatically detect and remove the ambiguity, but we can often remove ambiguity by re-writing the whole grammar without ambiguity.
Example 1:
1. E→I
2. E→E+E
3. E→E*E
4. E → (E)
5. I → ε | 0 | 1 | 2 | ... | 9
Solution:
For the string "3 * 2 + 5", the above grammar can generate two parse trees by leftmost derivation:
Since there are two parse trees for a single string "3 * 2 + 5", the grammar G is ambiguous.
Example 2:
1. E → E + E
2. E → E - E
3. E → id
Solution:
From the above grammar, the string "id + id - id" can be derived in two ways.
First leftmost derivation:
1. E → E + E
2. → id + E
3. → id + E - E
4. → id + id - E
5. → id + id - id
Second leftmost derivation:
1. E → E - E
2. →E+E-E
3. → id + E - E
4. → id + id - E
5. → id + id - id
Since there are two leftmost derivations for the single string "id + id - id", the grammar G is ambiguous.
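The two leftmost derivations correspond to two distinct parse trees. One way to confirm this mechanically is to count the parse trees of the tokenised string id + id - id by brute force. The sketch below is our own helper (not an algorithm from the text); it returns 2 for the ambiguous grammar above:

from functools import lru_cache

PRODUCTIONS = [("E", ("E", "+", "E")),
               ("E", ("E", "-", "E")),
               ("E", ("id",))]
TERMINALS = {"+", "-", "id"}

def count_parses(tokens, start="E"):
    tokens = tuple(tokens)

    @lru_cache(maxsize=None)
    def count(sym, i, j):
        # number of distinct parse trees deriving tokens[i:j] from sym
        if sym in TERMINALS:
            return 1 if j == i + 1 and tokens[i] == sym else 0
        return sum(ways(rhs, 0, i, j) for lhs, rhs in PRODUCTIONS if lhs == sym)

    @lru_cache(maxsize=None)
    def ways(rhs, k, i, j):
        # number of ways rhs[k:] derives tokens[i:j]
        if k == len(rhs):
            return 1 if i == j else 0
        remaining = len(rhs) - k - 1        # symbols after rhs[k]; each needs >= 1 token
        total = 0
        for mid in range(i, j - remaining + 1):
            left = count(rhs[k], i, mid)
            if left:
                total += left * ways(rhs, k + 1, mid, j)
        return total

    return count(start, 0, len(tokens))

print(count_parses(["id", "+", "id", "-", "id"]))   # 2 parse trees -> ambiguous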
Example 3:
1. S → aSb | SS
2. S → ε
Solution:
For the string "aabb" the above grammar can generate two parse trees
Since there are two parse trees for a single string "aabb", the grammar G is ambiguous.
Example 4:
1. A → AA
2. A → (A)
3. A → a
Solution:
For the string "a(a)aa" the above grammar can generate two parse trees:
Since there are two parse trees for a single string "a(a)aa", the grammar G is ambiguous.
Unambiguous Grammar
A grammar is unambiguous if it does not contain ambiguity, that is, if no input string has more than one leftmost derivation, more than one rightmost derivation, or more than one parse tree.
To convert ambiguous grammar to unambiguous grammar, we will apply the following rules:
1. If the left associative operators (+, -, *, /) are used in the production rule, then apply left
recursion in the production rule. Left recursion means that the leftmost symbol on the right side is
the same as the non-terminal on the left side. For example,
1. X → Xa
2. If the right-associative operator (^) is used in the production rule, then apply right recursion in the production rule. Right recursion means that the rightmost symbol on the right side is the same as the non-terminal on the left side. For example,
1. X → aX
Example 1:
Show that the following grammar is ambiguous. Also, find an equivalent unambiguous grammar.
1. S → AB | aaB
2. A → a | Aa
3. B → b
Solution:
As there are two different parse trees for deriving the same string, the given grammar is ambiguous. An equivalent unambiguous grammar is:
1. S → AB
2. A → Aa | a
3. B → b
Example 2:
Show that the given grammar is ambiguous. Also, find an equivalent unambiguous grammar.
1. S → ABA
2. A → aA | ε
3. B → bB | ε
Solution:
The given grammar is ambiguous because we can derive two different parse trees for the string aa. An equivalent unambiguous grammar is:
1. S → aXY | bYZ | ε
2. Z → aZ | a
3. X → aXY | a | ε
4. Y → bYZ | b | ε
Example 3:
Show that the given grammar is ambiguous. Also, find an equivalent unambiguous grammar.
1. E → E + E
2. E → E * E
3. E → id
Solution:
As there are two different parse trees for deriving the same string, the given grammar is ambiguous. An equivalent unambiguous grammar is:
1. E → E + T
2. E → T
3. T → T * F
4. T → F
5. F → id
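One way to see how the unambiguous grammar enforces the usual precedence is to turn it into a parser. The sketch below is a recursive-descent parser written by us under the assumption that the left recursion of E and T is first rewritten as iteration (E → T (+ T)*, T → F (* F)*), which derives the same language; it produces exactly one tree for id + id * id, with * grouped below +:

def parse_expression(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected):
        nonlocal pos
        assert peek() == expected, f"expected {expected!r}, got {peek()!r}"
        pos += 1
        return expected

    def parse_E():                          # E -> T (+ T)*
        node = parse_T()
        while peek() == "+":
            eat("+")
            node = ("+", node, parse_T())   # + groups to the left
        return node

    def parse_T():                          # T -> F (* F)*
        node = parse_F()
        while peek() == "*":
            eat("*")
            node = ("*", node, parse_F())   # * binds tighter than +
        return node

    def parse_F():                          # F -> id
        return eat("id")

    tree = parse_E()
    assert pos == len(tokens), "trailing input"
    return tree

print(parse_expression(["id", "+", "id", "*", "id"]))
# ('+', 'id', ('*', 'id', 'id')) : exactly one parse, with * below +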
Example 4:
Check whether the given grammar is ambiguous or not. Also, find an equivalent unambiguous grammar.
1. S → S + S
2. S → S * S
3. S → S ^ S
4. S → a
Solution:
The given grammar is ambiguous because a string such as a + a * a can be derived by two different parse trees:
Unambiguous grammar will be:
1. S → S + A | A
2. A → A * B | B
3. B → C ^ B | C
4. C → a
Lemma
If L is a context-free language, there is a pumping length p such that any string w ∈ L of length ≥ p can be written as w = uvxyz, where vy ≠ ε, |vxy| ≤ p, and for all i ≥ 0, u v^i x y^i z ∈ L.
The pumping lemma is used to show that a language is not context-free. Let us take an example and show how it is used.
Problem
Find out whether the language L = {0^n 1^n 2^n | n ≥ 1} is context-free or not.
Solution
Assume that L is context-free. Then L must satisfy the pumping lemma. Let n be the pumping length, and take the string z = 0^n 1^n 2^n. Break z into uvwxy, where
|vwx| ≤ n and vx ≠ ε.
Hence vwx cannot involve both 0s and 2s, since the last 0 and the first 2 are at least (n+1) positions apart. There are two cases −
Case 1 − vwx has no 2s. Then vx has only 0s and 1s. Then uwy, which would have to be in L, has n 2s, but fewer than n 0s or 1s.
Case 2 − vwx has no 0s. Then, similarly, uwy has n 0s, but fewer than n 1s or 2s.
In either case uwy is not in L, which is a contradiction. Hence L is not context-free.
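The case analysis above can be checked by brute force for one concrete value of n. The sketch below (our own helper functions, with n = 4) tries every decomposition z = uvwxy with |vwx| ≤ n and vx ≠ ε and verifies that none of them keeps u v^i w x^i y inside L for i = 0, 1, 2, which is exactly the contradiction used in the proof:

from itertools import product

def in_L(s):
    k = len(s) // 3
    return k >= 1 and s == "0" * k + "1" * k + "2" * k

def has_pumpable_split(z, n):
    # True if some split z = u v w x y with |vwx| <= n and vx != ''
    # keeps u v^i w x^i y in L for i = 0, 1, 2 (a necessary pumping condition)
    for start in range(len(z)):                           # where vwx begins
        for vwx_len in range(1, min(n, len(z) - start) + 1):
            for v_len, w_len in product(range(vwx_len + 1), repeat=2):
                x_len = vwx_len - v_len - w_len
                if x_len < 0 or v_len + x_len == 0:
                    continue
                u = z[:start]
                v = z[start:start + v_len]
                w = z[start + v_len:start + v_len + w_len]
                x = z[start + v_len + w_len:start + vwx_len]
                y = z[start + vwx_len:]
                if all(in_L(u + v * i + w + x * i + y) for i in (0, 1, 2)):
                    return True
    return False

n = 4
z = "0" * n + "1" * n + "2" * n
print(has_pumpable_split(z, n))   # False: no decomposition pumps, as argued above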
Closure Properties of Context-Free Languages
Context-free languages are closed under the following operations:
Union
Concatenation
Kleene Star operation
Union
Let L1 and L2 be two context free languages. Then L1 ∪ L2 is also context free.
Example
Let L1 = { a^n b^n , n ≥ 0 }, generated by a grammar G1 with productions P1: S1 → aS1b | ε.
Let L2 = { c^m d^m , m ≥ 0 }, generated by a grammar G2 with productions P2: S2 → cS2d | ε.
Their union L = L1 ∪ L2 is generated by the grammar obtained from G1 and G2 by adding a new start symbol S and the production S → S1 | S2.
Concatenation
If L1 and L2 are context free languages, then L1L2 is also context free.
Example: With L1 and L2 as above, the concatenation L1L2 is generated by adding the production S → S1 S2.
Kleene Star
If L1 is a context-free language, then L1* is also context free.
Example: With L1 as above, L1* is generated by adding a new start symbol S and the productions S → S1 S | ε.
Pushdown Automata(PDA)
Pushdown automata is a way to implement a CFG in the same way we design DFA for a regular
grammar. A DFA can remember a finite amount of information, but a PDA can remember an infinite
amount of information.
Pushdown automata is simply an NFA augmented with an "external stack memory". The addition of
stack is used to provide a last-in-first-out memory management capability to Pushdown automata.
Pushdown automata can store an unbounded amount of information on the stack. It can access a
limited amount of information on the stack. A PDA can push an element onto the top of the stack
and pop off an element from the top of the stack. To read an element deeper in the stack, the elements above it must be popped off and are lost.
A PDA is more powerful than an FA. Any language which can be accepted by an FA can also be accepted by a PDA, and a PDA also accepts a class of languages which cannot be accepted by any FA. Thus the PDA is strictly more powerful than the FA.
PDA Components:
Input tape: The input tape is divided in many cells or symbols. The input head is read-only and may
only move from left to right, one symbol at a time.
Finite control: The finite control has a pointer which points to the current symbol that is to be read.
Stack: The stack is a structure in which we can push and remove the items from one end only. It has
an infinite size. In PDA, the stack is used to store the items temporarily.
Formally, a PDA is a 7-tuple (Q, ∑, Γ, δ, q0, Z, F), where:
Q: the finite set of states
∑: the input alphabet
Γ: the stack alphabet; a stack symbol can be pushed onto and popped from the stack
δ: the mapping (transition) function, used for moving from the current state to the next state
q0: the initial state
Z: the initial stack symbol
F: the set of final states
An instantaneous description (ID) is an informal notation describing how a PDA computes on an input string and decides whether the string is accepted or rejected.
Turnstile Notation:
The "turnstile" symbol ⊢ is used to connect pairs of IDs that represent one move of a PDA (and ⊢* represents a sequence of moves). For example,
(p, b, T) ⊢ (q, w, α)
In the above example, while taking a transition from state p to q, the input symbol 'b' is consumed,
and the top of the stack 'T' is replaced by the new string α.
Acceptance of CFL
Example 1:
Design a PDA for accepting the language L = { a^n b^2n | n ≥ 1 }.
Solution: In this language, n number of a's should be followed by 2n number of b's. Hence, we will apply a very simple logic: for every single 'a' read, we push two a's onto the stack. As soon as we read 'b', then for every single 'b' only one 'a' is popped from the stack.
Now when we read b, we will change the state from q0 to q1 and start popping corresponding 'a'.
Hence,
1. δ(q0, b, a) = (q1, ε)
Thus this process of popping one 'a' for each 'b' will be repeated until all the symbols are read. Note that the popping action occurs in state q1 only.
1. δ(q1, b, a) = (q1, ε)
After reading all the b's, all the corresponding a's should have been popped; hence, when we read ε as the input symbol, only the initial stack symbol Z should remain on the stack. The move will then be:
1. δ(q1, ε, Z) = (q2, ε)
where
PDA = ({q0, q1, q2}, {a, b}, {a, Z}, δ, q0, Z, {q2})
and δ is:
1. δ(q0, a, Z) = (q0, aaZ)
2. δ(q0, a, a) = (q0, aaa)
3. δ(q0, b, a) = (q1, ε)
4. δ(q1, b, a) = (q1, ε)
5. δ(q1, ε, Z) = (q2, ε)
Now we will simulate this PDA for the input string "aaabbbbbb".
1. δ(q0, aaabbbbbb, Z)
2. ⊢ δ(q0, aabbbbbb, aaZ)
3. ⊢ δ(q0, abbbbbb, aaaaZ)
4. ⊢ δ(q0, bbbbbb, aaaaaaZ)
5. ⊢ δ(q1, bbbbb, aaaaaZ)
6. ⊢ δ(q1, bbbb, aaaaZ)
7. ⊢ δ(q1, bbb, aaaZ)
8. ⊢ δ(q1, bb, aaZ)
9. ⊢ δ(q1, b, aZ)
10. ⊢ δ(q1, ε, Z)
11. ⊢ δ(q2, ε)
ACCEPT
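The PDA above is deterministic, so it can be simulated with a simple loop. A minimal Python sketch following the five transitions listed above (the function name accepts and the list-as-stack encoding are ours):

def accepts(w):
    state, stack = "q0", ["Z"]          # Z is the initial stack symbol
    for ch in w:
        top = stack[-1] if stack else None
        if state == "q0" and ch == "a" and top in ("Z", "a"):
            stack += ["a", "a"]         # rules 1 and 2: push two a's per a read
        elif state == "q0" and ch == "b" and top == "a":
            state = "q1"                # rule 3: first b starts the popping phase
            stack.pop()
        elif state == "q1" and ch == "b" and top == "a":
            stack.pop()                 # rule 4: one a popped per b
        else:
            return False                # no applicable transition: reject
    # rule 5: epsilon move delta(q1, eps, Z) = (q2, eps), accept by final state
    return state == "q1" and stack == ["Z"]

print(accepts("aaabbbbbb"))   # True  (n = 3)
print(accepts("aabbb"))       # False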
Example 2:
Design a PDA for accepting the language L = { 0^n 1^m 0^n | m, n ≥ 1 }.
Solution: In this PDA, n number of 0's are followed by any number of 1's, followed by n number of 0's. Hence the logic for the design of such a PDA will be as follows:
Push all the leading 0's onto the stack. Then, while reading 1's, do nothing. Then, on each subsequent 0 read, pop one 0 from the stack.
For instance, the transition function δ can be:
1. δ(q0, 0, Z) = (q0, 0Z)
2. δ(q0, 0, 0) = (q0, 00)
3. δ(q0, 1, 0) = (q1, 0)
4. δ(q1, 1, 0) = (q1, 0)
5. δ(q1, 0, 0) = (q1, ε)
6. δ(q1, ε, Z) = (q2, Z)
This scenario can be written in the ID form as shown in the following simulation.
Now we will simulate this PDA for the input string "0011100".
1. δ(q0, 0011100, Z)
2. ⊢ δ(q0, 011100, 0Z)
3. ⊢ δ(q0, 11100, 00Z)
4. ⊢ δ(q1, 1100, 00Z)
5. ⊢ δ(q1, 100, 00Z)
6. ⊢ δ(q1, 00, 00Z)
7. ⊢ δ(q1, 0, 0Z)
8. ⊢ δ(q1, ε, Z)
9. ⊢ δ(q2, ε, Z)
ACCEPT
PDA Acceptance
1. Acceptance by Final State: The PDA is said to accept its input by the final state if it enters
any final state in zero or more moves after reading the entire input.
Let P = (Q, ∑, Γ, δ, q0, Z, F) be a PDA. The language acceptable by final state can be defined as:
L(P) = { w | (q0, w, Z) ⊢* (q, ε, α), q ∈ F }
2. Acceptance by Empty Stack: On reading the input string from the initial configuration for
some PDA, the stack of PDA gets empty.
Let P = (Q, ∑, Γ, δ, q0, Z, F) be a PDA. The language acceptable by empty stack can be defined as:
N(P) = { w | (q0, w, Z) ⊢* (q, ε, ε), q ∈ Q }
If L = N(P1) for some PDA P1, then there is a PDA P2 such that L = L(P2). That means the
language accepted by empty stack PDA will also be accepted by final state PDA.
If there is a language L = L (P1) for some PDA P1 then there is a PDA P2 such that L = N(P2).
That means language accepted by final state PDA is also acceptable by empty stack PDA.
Example:
Construct a PDA that accepts, by empty stack, the language L over {0, 1} of all strings of 0's and 1's in which the number of 0's is twice the number of 1's.
Solution:
We first design the part where a 1 comes before the 0's. The logic is: read a single 1 and push two 1's onto the stack; thereafter, on reading two 0's, pop those two 1's from the stack. The δ can be
Now consider the second part, i.e. when a 0 comes before the 1's. The logic is: read the first 0, push it onto the stack and change state from q0 to q1. [Note that state q1 indicates that the first 0 has been read and a matching second 0 is yet to be read.]
Being in q1, if 1 is encountered then POP 0. Being in q1, if 0 is read then simply read that second 0
and move ahead. The δ will be:
The non-deterministic pushdown automaton (NPDA) is very much similar to an NFA, with the addition of a stack. We will discuss some CFLs which can be accepted only by an NPDA.
Every language accepted by a deterministic PDA is also accepted by a non-deterministic PDA. However, there are some CFLs which can be accepted only by an NPDA and not by any DPDA. Thus the NPDA is more powerful than the DPDA.
Example:
Design a PDA for accepting the language of all palindromes over {a, b}.
Solution:
The language consists of strings such as L = {aba, aa, bb, bab, bbabb, aabaa, ......}. A string can be an odd-length palindrome or an even-length palindrome. The logic for constructing the PDA is that we push symbols onto the stack up to the middle of the string; after the middle we read each remaining symbol, perform a pop operation, and compare the popped symbol with the symbol just read. When we reach the end of the input, we expect the stack to be empty.
This PDA is a non-deterministic PDA because finding the middle of the given string, and matching the symbols read from the left against the symbols of the second half in reverse, leads to non-deterministic moves. Here is the ID.
Simulation of abaaba:
...
⊢ δ(q2, a, aZ)     Apply rule 8
⊢ δ(q2, ε, Z)      Apply rule 7
⊢ δ(q2, ε)         Apply rule 11
ACCEPT
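The non-determinism (guessing the middle) can be simulated by simply trying every possible middle position. The sketch below is our own illustration of the idea, not the formal transition table:

def npda_accepts_palindrome(w):
    n = len(w)
    for mid in range(n + 1):             # guess where the second half starts
        for skip in (0, 1):              # optionally skip one middle symbol (odd length)
            if mid + skip > n:
                continue
            stack = list(w[:mid])        # push the first half
            rest = w[mid + skip:]        # the remaining input
            ok = True
            for ch in rest:              # pop and compare with the second half
                if not stack or stack.pop() != ch:
                    ok = False
                    break
            if ok and not stack:
                return True              # some sequence of guesses accepts
    return False

for s in ["abaaba", "aba", "ab"]:
    print(s, npda_accepts_palindrome(s))   # True, True, False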
To convert a CFG into an equivalent PDA, the grammar should be in GNF, so that the first symbol on the R.H.S. of every production is a terminal symbol. The following steps are used to obtain a PDA from a CFG:
Step 1: Convert the productions of the CFG into GNF.
Step 2: The PDA will have only one state, {q}.
Step 3: The start symbol of the CFG will be the start symbol in the PDA.
Step 4: For every non-terminal A with production A → α, add the transition
1. δ(q, ε, A) = (q, α)
Step 5: For every terminal symbol a, add the transition
2. δ(q, a, a) = (q, ε)
InterConversion
Example 1:
Convert the following grammar to a PDA that accepts the same language.
1. S → 0S1 | A
2. A → 1A0 | S | ε
Solution:
First, simplify the grammar by eliminating the unit productions:
1. S → 0S1 | 1S0 | ε
Now convert this CFG into GNF:
1. S → 0SX | 1SY | ε
2. X → 1
3. Y → 0
The PDA for this grammar then has the transitions:
R1: δ(q, ε, S) = {(q, 0SX), (q, 1SY), (q, ε)}
R2: δ(q, ε, X) = {(q, 1)}
R3: δ(q, ε, Y) = {(q, 0)}
R4: δ(q, 0, 0) = {(q, ε)}
R5: δ(q, 1, 1) = {(q, ε)}
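The construction above is mechanical, so it can be written as a small routine that turns a grammar in GNF into the single-state PDA's transition relation. The sketch below is ours; it is applied to the grammar of Example 2 below (S → 0BB, B → 0S | 1S | 0), which is already in GNF:

def cfg_to_pda(productions, terminals):
    # productions: list of (lhs, rhs) pairs, rhs written as a string of grammar symbols
    delta = {}
    for lhs, rhs in productions:            # delta(q, eps, A) = (q, alpha) for A -> alpha
        delta.setdefault(("q", "", lhs), []).append(("q", rhs))
    for a in terminals:                     # delta(q, a, a) = (q, eps)
        delta[("q", a, a)] = [("q", "")]
    return delta

delta = cfg_to_pda([("S", "0BB"), ("B", "0S"), ("B", "1S"), ("B", "0")], "01")
for key, moves in delta.items():
    print(key, "->", moves)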
Example 2:
Construct a PDA for the given CFG, and test whether 010^4 (= 010000) is acceptable by this PDA.
1. S → 0BB
2. B → 0S | 1S | 0
Solution:
The grammar is already in GNF, so the PDA can be written directly:
1. A = ({q}, {0, 1}, {S, B, 0, 1}, δ, q, S, ∅)
where δ is:
R1: δ(q, ε, S) = {(q, 0BB)}
R2: δ(q, ε, B) = {(q, 0S), (q, 1S), (q, 0)}
R3: δ(q, 0, 0) = {(q, ε)}
R4: δ(q, 1, 1) = {(q, ε)}
Simulation for the input string 010000 (acceptance by empty stack):
1. δ(q, 010000, S)
2. ⊢ δ(q, 010000, 0BB)     R1
3. ⊢ δ(q, 10000, BB)       R3
4. ⊢ δ(q, 10000, 1SB)      R2
5. ⊢ δ(q, 0000, SB)        R4
6. ⊢ δ(q, 0000, 0BBB)      R1
7. ⊢ δ(q, 000, BBB)        R3
8. ⊢ δ(q, 000, 0BB)        R2
9. ⊢ δ(q, 00, BB)          R3
10. ⊢ δ(q, 00, 0B)         R2
11. ⊢ δ(q, 0, B)           R3
12. ⊢ δ(q, 0, 0)           R2
13. ⊢ δ(q, ε, ε)           R3
ACCEPT
Example 3:
1. S → aSb
2. S → a | b | ε
Solution:
Simulation: Consider the string aaabb.
...
⊢ δ(q, b, b)        R4
⊢ δ(q, ε, z0)       R4
⊢ δ(q, ε)           R5
ACCEPT
The language L(A) of an automaton A is the set of all strings belonging to L(A), i.e., accepted by A. If L(A) can be accepted by a PDA it is a context-free language, and if it can be accepted by a DPDA it is a deterministic context-free language (DCFL).
Not all context-free languages are deterministic. This makes the DPDA a strictly weaker
device than the PDA.
UNIT-I AUTOMATA
PART-A(2-MARKS)
22. Is it true that the language accepted by any NFA is different from the regular
language? Justify your Answer.
23. Define ε-NFA.
24. Define ε closure.
25. Find the ε-closure for each state of the following automaton.
26. Define Regular expression. Give an example.
27. What are the operators of RE.
28. Write short notes on precedence of RE operators.
29. Write Regular Expression for the language that has the set of strings over {a,b,c} containing at least one a and at least one b.
30. Write Regular Expression for the language that has the set of all strings of 0's and 1's whose 10th symbol from the right end is 1.
31. Write Regular Expression for the language that has the set of all strings of 0's and 1's with at most one pair of consecutive 1's.
32. Write Regular Expression for the language that has the set of all strings of 0's and 1's such that every pair of adjacent 0's appears before any pair of adjacent 1's.
35. Write Regular Expression for the language that has the set of all strings of 0's and 1's whose number of 0's is divisible by 5.
36. Write Regular Expression for the language that has the set of all strings of 0's and 1's not containing 101 as a substring.
37. Write Regular Expression for the language that has the set of all strings of 0's and 1's such that no prefix has two more 0's than 1's, nor two more 1's than 0's.
38. Write Regular Expression for the language that has the set of all strings of 0's and 1's whose number of 0's is divisible by 5 and number of 1's is even.
39. Give English descriptions of the languages of the regular expression (1+ ε)(00*1)*0*.
40. Give English descriptions of the languages of the regular expression
(0*1*)*000(0+1)*.
41. Give English descriptions of the languages of the regular expression (0+10)*1*.
42. Convert the following RE to an ε-NFA: 01*.
Part B
δ 0 1
p {q,s} {q}
q {r} {q,r}
r {s} {p}
s - {p}
2. a) Show that the set L = {a^n b^n | n ≥ 1} is not regular. (6)
b) Construct a DFA equivalent to the NFA given below:
        0        1
p     {p,q}      p
q       r        r
r       s        -
s       s        s
3. a) Check whether the language L = {0^n 1^n | n ≥ 1} is regular or not? Justify your answer.
b) Let L be a set accepted by a NFA then show that there exists aDFA that accepts L.
4.Define NFA with ε-transition. Prove that if L is accepted by an NFA with ε-transition
then L is also accepted by a NFA without ε-transition.
5. a) Construct an NDFA accepting all strings in {a, b}^+ with either two consecutive a's or two consecutive b's.
b) Give the DFA accepting the following language: the set of all strings beginning with a 1 that, when interpreted as a binary integer, is a multiple of 5.
6. Give DFAs accepting the following languages:
(i) the set of strings over the alphabet {0,1,.......,9} such that the final digit has appeared before. (8)
(ii) the set of strings of 0's and 1's such that there are two 0's separated by a number of positions that is a multiple of 4.
7. a) Let L be a set accepted by an NFA. Then prove that there exists a deterministic finite automaton that accepts L. Is the converse true? Justify your answer. (10)
8. a) Prove that a language L is accepted by some ε-NFA if and only if L is accepted by some DFA. (8)
b) Consider the following ε-NFA. Compute the ε-closure of each state and find its equivalent DFA. (8)
        ε       a       b       c
p      {q}     {p}      ∅       ∅
q      {r}      ∅      {q}      ∅
*r      ∅       ∅       ∅      {r}
9.a) Prove that a language L is accepted by some DFA if L is accepted by some NFA.
        0        1
p     {p,q}     {p}
q      {r}      {r}
r      {s}       ∅
*s     {s}      {s}
10. a) Explain the construction of an NFA with ε-transitions from any given regular expression.
b) Let A = (Q, ∑, δ, q0, {qf}) be a DFA, and suppose that for all a in ∑ we have δ(q0, a) = δ(qf, a). Show that if x is a non-empty string in L(A), then for all k > 0, x^k is also in L(A).
PART-B
b) Show that the set E = {0^i 1^i | i ≥ 1} is not regular. (6)
b) Obtain the regular expression that denotes the language accepted by the following DFA.
15. a) Obtain the regular expression denoting the language accepted by the following DFA. (8)
b) Obtain the regular expression denoting the language accepted by the following DFA by using the formula R_ij^(k).
16. a) Show that every set accepted by a DFA is denoted by a regular expression.
b) Construct a finite automaton for the regular expression 01* + 1.
17. a) Define a regular set. Using the pumping lemma, show that the language L = {0^(i^2) | i is an integer, i ≥ 1} is not regular.
deterministic finite automata given below:
b)Verify whether the finite automata M1 and M2 given below are equivalent over {a,b}.
22. a) Find the regular expression corresponding to the finite automaton given below.
b) Find the regular expression for the set of all strings denoted by R_23^(2) from the deterministic finite automaton given below.
23. a) Find whether the languages {ww | w is in (1+0)*} and {1^k | k = n^2, n ≥ 1} are regular or not.