Compiler Theory | Set 2

Last Updated : 13 Dec, 2022

The following questions have been asked in previous GATE CS exams.

1. Given the following expression grammar:
E -> E * F | F + E | F
F -> F - F | id
Which of the following is true? (GATE CS 2000)
(a) * has higher precedence than +
(b) - has higher precedence than *
(c) + and - have the same precedence
(d) + has higher precedence than *

Answer (b)

Precedence is enforced in a grammar by making sure that a production for a higher-precedence operator never derives an expression containing a lower-precedence operator; equivalently, the nonterminal that introduces the higher-precedence operator appears as an operand in the productions for the lower-precedence ones. In the given grammar, F (which introduces '-') appears as an operand of both '*' and '+' in the productions of E, so '-' has higher precedence than '*' and '+'.
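For example, one leftmost derivation of id - id * id in this grammar is

E => E * F => F * F => F - F * F => id - F * F => id - id * F => id - id * id

so the subexpression id - id ends up as an operand of '*', i.e. '-' is applied first.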

2. Consider a program P that consists of two source modules M1 and M2 contained in two different files. If M1 contains a reference to a function defined in M2, the reference will be resolved at (GATE CS 2004)
a) Edit time 
b) Compile time 
c) Link time 
d) Load time 

Answer (c) 
The compiler translates each source module into the target language, which is generally a binary form known as object code. Typically, an object file contains three kinds of symbols:

* defined symbols, which allow the module to be called by other modules,
* undefined symbols, which refer to symbols defined in other modules, and
* local symbols used internally within the object file to facilitate relocation. 

When a program comprises multiple object files, the linker combines these files into a unified executable program, resolving the symbols as it goes along. 
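As a minimal sketch of the situation in the question (the file names, the function add, and the gcc commands below are illustrative, not part of the question), M1 and M2 could look like this:

/* m2.c : defines the function */
int add(int a, int b)
{
    return a + b;
}

/* m1.c : references add(), whose definition lives in m2.c */
int add(int a, int b);   /* declaration only; 'add' stays an undefined symbol in m1.o */

int main(void)
{
    return add(2, 3);
}

gcc -c m1.c            # compiles fine; m1.o records 'add' as an undefined symbol
gcc -c m2.c            # m2.o records 'add' as a defined symbol
gcc m1.o m2.o -o prog  # the linker matches the two symbols here, i.e. at link time

Each file compiles on its own even though m1.c never sees the body of add; only the final command, which invokes the linker, has to resolve the reference.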
http://en.wikipedia.org/wiki/Compiler 
http://en.wikipedia.org/wiki/Linker_%28computing%29 

3. Which of the following suffices to convert an arbitrary CFG to an LL(1) grammar? (GATE CS 2003) 
(a) Removing left recursion alone 
(b) Factoring the grammar alone 
(c) Removing left recursion and factoring the grammar 
(d) None of the above 

Answer (d)
Removing left recursion and left factoring the grammar do not suffice to convert an arbitrary CFG into an LL(1) grammar: neither transformation can remove ambiguity, and no ambiguous grammar is LL(1).
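As a standard illustration (the dangling-else fragment found in most compiler textbooks, not part of the original question), consider

S -> i E t S | i E t S e S | a
E -> b

It has no left recursion, and after left factoring it becomes

S  -> i E t S S' | a
S' -> e S | ε
E  -> b

Since FIRST(e S) = {e} and e also belongs to FOLLOW(S'), the LL(1) table entry M[S', e] contains both S' -> e S and S' -> ε; the grammar is still ambiguous and therefore not LL(1).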
http://pages.cpsc.ucalgary.ca/~robin/class/411/LL1.3.html 

4. Assume that the SLR parser for a grammar G has n1 states and the LALR parser for G has n2 states. The relationship between n1 and n2 is (GATE CS 2003) 
(a) n1 is necessarily less than n2 
(b) n1 is necessarily equal to n2 
(c) n1 is necessarily greater than n2 
(d) none of the above 

Answer (b)
Both parsers are built on the same collection of states: the SLR parser uses the canonical LR(0) item sets, and the LALR parser merges the LR(1) states that share the same core, which leaves exactly one state per LR(0) set. The two parsers therefore always have the same number of states and differ only in the lookaheads used for reduce actions, so n1 = n2.

http://parasol.tamu.edu/people/rwerger/Courses/434/lec10.pdf 
http://dragonbook.stanford.edu/lecture-notes/Stanford-CS143/11-LALR-Parsing.pdf 

Please see GATE Corner for all previous year papers/solutions/explanations, syllabus, important dates, notes, etc. 

Please write comments if you find any of the answers/explanations incorrect, or you want to share more information about the topics discussed above. 

 


