Compiler Design
A compiler translates the code written in one language to some other language without
changing the meaning of the program. It is also expected that a compiler should make the
target code efficient and optimized in terms of time and space.
Compiler design principles provide an in-depth view of translation and optimization process.
Compiler design covers basic translation mechanism and error detection & recovery. It
includes lexical, syntax, and semantic analysis as front end, and code generation and
optimization as back-end.
Audience
This tutorial is designed for students interested in learning the basic principles of
compilers. Enthusiastic readers who would like to know more about compilers, and
those who wish to design a compiler themselves, may start from here.
Prerequisites
This tutorial requires no prior knowledge of compiler design but requires a basic
understanding of at least one programming language such as C or Java. Prior
exposure to assembly programming is an additional advantage.
Compiler Design - Overview
Computers are a balanced mix of software and hardware. Hardware is just a piece
of mechanical equipment whose functions are controlled by compatible software.
Hardware understands instructions in the form of electronic charge, which is the
counterpart of binary language in software programming. Binary language has
only two symbols, 0 and 1. To instruct the hardware, code must be written in
binary format, which is simply a series of 1s and 0s. Writing such code directly
would be a difficult and cumbersome task for programmers, which is why we have
compilers to generate it.
Preprocessor
A preprocessor, generally considered a part of the compiler, is a tool that produces
input for the compiler. It deals with macro-processing, augmentation, file inclusion,
language extension, etc.
Interpreter
An interpreter, like a compiler, translates high-level language into low-level machine
language. The difference lies in the way they read the source code. A compiler
reads the whole source code at once, creates tokens, checks semantics, generates
intermediate code and target code, and may involve many passes; the program runs
only after the whole translation is complete. In contrast, an interpreter reads a
statement from the input, converts it to an intermediate code, executes it, then takes
the next statement in sequence. If an error occurs, an interpreter stops execution and
reports it, whereas a compiler reads the whole program even if it encounters several
errors.
Assembler
An assembler translates assembly language programs into machine code. The
output of an assembler is called an object file, which contains a combination of
machine instructions as well as the data required to place these instructions in
memory.
Linker
A linker is a computer program that links and merges various object files in order
to make an executable file. All these files might have been compiled by separate
assemblers. The major task of a linker is to search for and locate referenced
modules/routines in a program and to determine the memory locations where this
code will be loaded, so that the program instructions have absolute references.
Loader
The loader is a part of the operating system and is responsible for loading executable
files into memory and executing them. It calculates the size of a program (instructions
and data) and creates memory space for it. It initializes various registers to initiate
execution.
Cross-compiler
A compiler that runs on platform (A) and is capable of generating executable code
for platform (B) is called a cross-compiler.
Source-to-source Compiler
A compiler that takes the source code of one programming language and translates
it into the source code of another programming language is called a source-to-
source compiler.
Analysis Phase
Known as the front-end of the compiler, the analysis phase of the compiler reads
the source program, divides it into core parts, and then checks for lexical, grammar,
and syntax errors. The analysis phase generates an intermediate representation of
the source program and the symbol table, which are fed to the synthesis phase
as input.
Synthesis Phase
Known as the back-end of the compiler, the synthesis phase generates the target
program with the help of intermediate source code representation and symbol table.
A compiler can have many phases and passes.
Pass : A pass refers to the traversal of a compiler through the entire
program.
Phase : A phase of a compiler is a distinguishable stage, which takes input
from the previous stage, processes and yields output that can be used as
input for the next stage. A pass can have more than one phase.
Syntax Analysis
The next phase is called syntax analysis or parsing. It takes the tokens produced
by lexical analysis as input and generates a parse tree (or syntax tree). In this
phase, token arrangements are checked against the source code grammar, i.e. the
parser checks if the expression made by the tokens is syntactically correct.
Semantic Analysis
Semantic analysis checks whether the parse tree constructed follows the rules of the
language. For example, assignment of values must be between compatible data
types; adding a string to an integer would be reported as an error. The semantic
analyzer also keeps track of identifiers, their types and expressions, and whether
identifiers are declared before use. The semantic analyzer produces an annotated
syntax tree as output.
Code Optimization
The next phase does code optimization of the intermediate code. Optimization can
be assumed as something that removes unnecessary code lines, and arranges the
sequence of statements in order to speed up the program execution without wasting
resources (CPU, memory).
Code Generation
In this phase, the code generator takes the optimized representation of the
intermediate code and maps it to the target machine language. The code generator
translates the intermediate code into a sequence of (generally) relocatable machine
code. The sequence of machine-code instructions performs the same task as the
intermediate code would.
Symbol Table
It is a data structure maintained throughout all the phases of a compiler. All the
identifiers' names along with their types are stored here. The symbol table makes it
easier for the compiler to quickly search the identifier record and retrieve it. The
symbol table is also used for scope management.
Tokens
A lexeme is a sequence of (alphanumeric) characters in the source program that is
matched against the pattern of a token. There are some predefined rules for every
lexeme to be identified as a valid token. These rules are defined by grammar rules,
by means of a pattern. A pattern explains what can be a token, and these patterns
are defined by means of regular expressions.
In a programming language, keywords, constants, identifiers, strings, numbers,
operators and punctuation symbols can be considered tokens.
For example, in the C language, the variable declaration line
int value = 100;
contains five valid tokens: the keyword int, the identifier value, the operator =, the
constant 100, and the symbol ;.
Specifications of Tokens
Let us understand how language theory treats the following terms:
Alphabets
Any finite set of symbols is called an alphabet: {0,1} is the set of binary alphabets,
{0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F} is the set of hexadecimal alphabets, and {a-z, A-Z}
is the set of English language alphabets.
Strings
Any finite sequence of alphabets (symbols) is called a string. The length of a string
is the total number of occurrences of alphabets, e.g., the length of the string
tutorialspoint is 14 and is denoted by |tutorialspoint| = 14. A string having no
alphabets, i.e., a string of zero length, is known as an empty string and is denoted
by ε (epsilon).
Special Symbols
A typical high-level language contains the following symbols:-
Arithmetic symbols: addition (+), subtraction (-), modulo (%), multiplication (*), division (/)
Assignment: =
Preprocessor: #
Language
A language is considered as a set of strings over some finite alphabet. Computer
languages are treated as sets, and mathematical set operations can be performed
on them. Regular languages can be described by means of regular expressions.
While scanning the start of the declaration above, after reading the characters of
'int' the lexical analyzer cannot yet determine whether it has seen the keyword int
or merely the first letters of a longer identifier such as intvalue.
The Longest Match Rule states that the lexeme scanned should be determined
based on the longest match among all the token patterns available.
The lexical analyzer also follows rule priority: a reserved word, e.g., a keyword of
the language, is given priority over a user-defined identifier. That is, if a scanned
lexeme exactly matches an existing reserved word, it is emitted as that keyword
rather than as an identifier.
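As an illustrative sketch (the token kinds, the small keyword table, and the helper name scan_word are assumptions for this example, not part of any particular compiler), longest match plus rule priority can be implemented in C like this:

#include <ctype.h>
#include <string.h>

/* Hypothetical token kinds for this sketch. */
enum token_kind { TK_KEYWORD, TK_IDENTIFIER };

static const char *keywords[] = { "int", "float", "char", "return", NULL };

/* Scan the longest run of identifier characters starting at src (longest match),
 * then decide between keyword and identifier (rule priority). */
enum token_kind scan_word(const char *src, char *lexeme, size_t cap)
{
    size_t len = 0;
    while (src[len] && (isalnum((unsigned char)src[len]) || src[len] == '_'))
        len++;                              /* keep consuming: longest match */
    if (len >= cap) len = cap - 1;
    memcpy(lexeme, src, len);
    lexeme[len] = '\0';

    for (int i = 0; keywords[i]; i++)       /* reserved words win over identifiers */
        if (strcmp(lexeme, keywords[i]) == 0)
            return TK_KEYWORD;
    return TK_IDENTIFIER;
}

Scanning "int value = 100;" with this sketch yields the keyword int first (the space ends the run), whereas scanning "intvalue" yields the single identifier intvalue, exactly as the Longest Match Rule requires.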
Operations
The various operations on languages are:
Union of two languages L and M is written as
L U M = {s | s is in L or s is in M}
Concatenation of two languages L and M is written as
LM = {st | s is in L and t is in M}
The Kleene Closure of a language L is written as
L* = Zero or more occurrence of language L.
Notations
If r and s are regular expressions denoting the languages L(r) and L(s), then
Union : (r)|(s) is a regular expression denoting L(r) U L(s)
Concatenation : (r)(s) is a regular expression denoting L(r)L(s)
Kleene closure : (r)* is a regular expression denoting (L(r))*
(r) is a regular expression denoting L(r)
Example: We assume a finite automaton that accepts any binary value ending in the
digit 1. It can be written as FA = (Q, Σ, δ, q0, F), where Q = {q0, qf} is the set of
states, Σ = {0, 1} is the input alphabet, δ is the transition function, q0 is the start
state, and F = {qf} is the set of final states.
Context-Free Grammar
In this section, we will first see the definition of context-free grammar and introduce
terminologies used in parsing technology.
A context-free grammar has four components:
A set of non-terminals (V). Non-terminals are syntactic variables that denote
sets of strings. The non-terminals define sets of strings that help define the
language generated by the grammar.
A set of tokens, known as terminal symbols (Σ). Terminals are the basic
symbols from which strings are formed.
A set of productions (P). The productions of a grammar specify the manner
in which the terminals and non-terminals can be combined to form strings.
Each production consists of a non-terminal called the left side of the
production, an arrow, and a sequence of tokens and/or non-terminals, called
the right side of the production.
One of the non-terminals is designated as the start symbol (S); from where
the production begins.
The strings are derived from the start symbol by repeatedly replacing a non-terminal
(initially the start symbol) by the right side of a production, for that non-terminal.
Example
We take the problem of palindrome language, which cannot be described by means
of Regular Expression. That is, L = { w | w = wR } is not a regular language. But it
can be described by means of CFG, as illustrated below:
G = ( V, Σ, P, S )
Where:
V = { Q, Z, N }
Σ = { 0, 1 }
P = { Q → Z | N | 0 | 1 | ℇ, Z → 0Q0, N → 1Q1 }
S = { Q }
This grammar describes the palindrome language, generating strings such as 1001,
11100111, 00100, 1010101, 11111, etc.
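For instance, the string 00100 from the list above can be derived from this grammar as follows:
Q ⇒ Z ⇒ 0Q0 ⇒ 0Z0 ⇒ 00Q00 ⇒ 00100   (using Q → 1 in the last step)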
Syntax Analyzers
A syntax analyzer or parser takes the input from a lexical analyzer in the form of
token streams. The parser analyzes the source code (token stream) against the
production rules to detect any errors in the code. The output of this phase is a parse
tree.
This way, the parser accomplishes two tasks, i.e., parsing the code, looking for
errors and generating a parse tree as the output of the phase.
Parsers are expected to parse the whole code even if some errors exist in the
program. Parsers use error recovering strategies, which we will learn later in this
chapter.
Derivation
A derivation is basically a sequence of production rules used to obtain the input
string. During parsing, we take two decisions for some sentential form of the input:
Deciding which non-terminal is to be replaced.
Deciding which production rule will be used to replace that non-terminal.
Depending on how the non-terminal to be replaced is chosen, we get left-most or
right-most derivation.
Left-most Derivation
If the sentential form of an input is scanned and replaced from left to right, it is called
left-most derivation. The sentential form derived by the left-most derivation is called
the left-sentential form.
Right-most Derivation
If we scan and replace the input with production rules, from right to left, it is known
as right-most derivation. The sentential form derived from the right-most derivation is
called the right-sentential form.
Example
Production rules:
E → E + E
E → E * E
E → id
Input string: id + id * id
The left-most derivation is:
E → E * E
E → E + E * E
E → id + E * E
E → id + id * E
E → id + id * id
Parse Tree
A parse tree is a graphical depiction of a derivation. It is convenient to see how
strings are derived from the start symbol. The start symbol of the derivation
becomes the root of the parse tree. Let us see this by an example from the last
topic.
We take the left-most derivation of a + b * c
The left-most derivation is:
E → E * E
E → E + E * E
E → id + E * E
E → id + id * E
E → id + id * id
Step 1:
E→E*E
Step 2:
E→E+E*E
Step 3:
E → id + E * E
Step 4:
E → id + id * E
Step 5:
E → id + id * id
In a parse tree:
All leaf nodes are terminals.
All interior nodes are non-terminals.
In-order traversal gives the original input string.
Ambiguity
A grammar G is said to be ambiguous if it has more than one parse tree (left or right
derivation) for at least one string.
Example
E → E + E
E → E – E
E → id
For the string id + id – id, the above grammar generates two parse trees:
A language for which every grammar is ambiguous is said to be inherently
ambiguous; an ambiguous grammar by itself does not make its language inherently
ambiguous. Ambiguity in a grammar is not good for compiler construction. No
method can detect and remove ambiguity automatically, but it can be removed by
either re-writing the whole grammar without ambiguity, or by setting and following
associativity and precedence constraints.
Associativity
If an operand has operators on both sides, the side on which the operator takes this
operand is decided by the associativity of those operators. If the operation is left-
associative, the operand is taken by the operator on its left; if the operation is
right-associative, it is taken by the operator on its right.
Example
Operations such as addition, multiplication, subtraction, and division are left-
associative. If an expression contains:
id op id op id
it will be evaluated as (id op id) op id, i.e., the left occurrence of op takes the shared
operand first.
Precedence
If two different operators share a common operand, the precedence of operators
decides which will take the operand. That is, 2+3*4 can have two different parse
trees, one corresponding to (2+3)*4 and another corresponding to 2+(3*4). By
setting precedence among operators, this problem can be easily removed. As in the
previous example, mathematically * (multiplication) has precedence over +
(addition), so the expression 2+3*4 will always be interpreted as:
2 + (3 * 4)
Left Recursion
A grammar becomes left-recursive if it has any non-terminal ‘A’ whose derivation
contains ‘A’ itself as the left-most symbol. Left-recursive grammar is considered to
be a problematic situation for top-down parsers. Top-down parsers start parsing
from the Start symbol, which in itself is non-terminal. So, when the parser
encounters the same non-terminal in its derivation, it becomes hard for it to judge
when to stop parsing the left non-terminal and it goes into an infinite loop.
Example:
(1) A => Aα | β
(2) S => Aα | β
    A => Sd
(1) is an example of immediate left recursion; (2) is an example of indirect left
recursion, since A derives a sentential form beginning with A through S.
A top-down parser will first try to expand A, which in turn yields a string beginning
with A itself, and the parser may go into a loop forever.
One way to remove left recursion is to rewrite the left-recursive productions. The
production
A => Aα | β
is converted into the following productions:
A => βA'
A' => αA' | ε
This does not impact the strings derived from the grammar, but it removes
immediate left recursion.
Second method is to use the following algorithm, which should eliminate all direct
and indirect left recursions.
START
Arrange the non-terminals in some order: A1, A2, ..., An
for i = 1 to n
   for j = 1 to i-1
      replace each production of the form Ai => Aj 𝛾
      by Ai => δ1 𝛾 | δ2 𝛾 | ... | δk 𝛾,
      where Aj => δ1 | δ2 | ... | δk are the current Aj-productions
   end for
   eliminate immediate left recursion among the Ai-productions
end for
END
Example
The production set
S => Aα | β
A => Sd
after applying the above algorithm becomes
S => Aα | β
A => Aαd | βd
and then, removing the immediate left recursion using the first technique, we get
A => βdA'
A' => αdA' | ε
Now none of the productions has either direct or indirect left recursion.
Left Factoring
If more than one grammar production rule has a common prefix string, then the
top-down parser cannot make a choice as to which of the productions it should take
to parse the string in hand.
Example
If a top-down parser encounters a production like
A ⟹ αβ | α
Then it cannot determine which production to follow to parse the string, as both
productions start with the same prefix α. To remove this confusion, we use a
technique called left factoring.
Left factoring transforms the grammar to make it useful for top-down parsers. In this
technique, we make one production for each common prefix, and the rest of the
derivation is added by new productions.
Example
The above productions can be written as
A => αA'
A' => β | ε
Now the parser has only one production per prefix which makes it easier to take
decisions.
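A standard illustration is the conditional-statement grammar (i stands for if, t for then, e for else):
S → iEtS | iEtSeS | a
E → b
Both S-productions for the if-statement share the prefix iEtS; after left factoring, the grammar becomes:
S → iEtSS' | a
S' → eS | ℇ
E → b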
First Set
The FIRST set of a grammar symbol is the set of terminals that can appear at the
beginning of the strings derived from it:
if α is a terminal, then FIRST(α) = { α }
if α is a non-terminal and α → ℇ is a production, then ℇ is in FIRST(α)
if α is a non-terminal and α → 𝛾1 𝛾2 ... 𝛾n is a production, then FIRST(𝛾1) is in
FIRST(α); if FIRST(𝛾1) contains ℇ, then FIRST(𝛾2) is considered as well, and so on
The First set can be seen as: FIRST(α) = { t | α ⇒* t β }
Follow Set
Likewise, we calculate what terminal symbol immediately follows a non-terminal α in
production rules. We do not consider what the non-terminal can generate but
instead, we see what would be the next terminal symbol that follows the productions
of a non-terminal.
if α is the start symbol, then $ is in FOLLOW(α)
if α is a non-terminal and has a production α → AB, then everything in FIRST(B)
except ℇ is in FOLLOW(A)
if α is a non-terminal and has a production α → AB, where B can derive ℇ, then
everything in FOLLOW(α) is in FOLLOW(A)
The Follow set can be seen as: FOLLOW(α) = { t | S ⇒* β α t 𝛿 }
A worked example covering both sets follows.
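As a worked example, consider the classic expression grammar (here $ marks the end of input):
E  → T E'
E' → + T E' | ℇ
T  → F T'
T' → * F T' | ℇ
F  → ( E ) | id
FIRST sets:
FIRST(E) = FIRST(T) = FIRST(F) = { (, id }
FIRST(E') = { +, ℇ }
FIRST(T') = { *, ℇ }
FOLLOW sets:
FOLLOW(E) = FOLLOW(E') = { ), $ }
FOLLOW(T) = FOLLOW(T') = { +, ), $ }
FOLLOW(F) = { *, +, ), $ }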
Top-down Parsing
When the parser starts constructing the parse tree from the start symbol and then
tries to transform the start symbol to the input, it is called top-down parsing.
Recursive descent parsing : It is a common form of top-down parsing. It is
called recursive as it uses recursive procedures to process the input. Plain
recursive descent parsing may suffer from backtracking; a small C sketch of a
backtracking-free variant appears after this list.
Backtracking : It means, if one derivation of a production fails, the syntax
analyzer restarts the process using different rules of same production. This
technique may process the input string more than once to determine the right
production.
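A minimal sketch in C, assuming single-character tokens, a hard-coded input string, and the toy grammar E → T + E | T, T → id (written so that one token of lookahead suffices and no backtracking is needed); the names here are illustrative only:

#include <stdio.h>
#include <stdlib.h>

/* Grammar (illustrative): E -> T '+' E | T,  T -> 'i'   ('i' stands for id) */
static const char *input;                  /* current position in the token stream */

static void error(const char *msg) { fprintf(stderr, "parse error: %s\n", msg); exit(1); }

static void match(char expected)
{
    if (*input == expected) input++;       /* consume the expected token */
    else error("unexpected token");
}

static void E(void);                       /* forward declaration */

static void T(void) { match('i'); }        /* T -> id */

static void E(void)                        /* E -> T ('+' E)?  */
{
    T();
    if (*input == '+') {                   /* one token of lookahead decides */
        match('+');
        E();
    }
}

int main(void)
{
    input = "i+i+i";                       /* hypothetical token stream */
    E();
    if (*input == '\0') puts("accepted");
    else error("trailing input");
    return 0;
}

Each non-terminal becomes one procedure, and match() either consumes a terminal or reports an error; this is the essence of recursive descent.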
Bottom-up Parsing
As the name suggests, bottom-up parsing starts with the input symbols and tries to
construct the parse tree up to the start symbol.
Example:
Input string : a + b * c
Production rules:
S → E
E → E + T
E → E * T
E → T
T → id
Read the input and check if any production matches with the input:
a + b * c
T + b * c
E + b * c
E + T * c
E * c
E * T
E
S
Back-tracking
Top- down parsers start from the root node (start symbol) and match the input string
against the production rules to replace them (if matched). To understand this, take
the following example of CFG:
S → rXd | rZd
X → oa | ea
Z → ai
For the input string: read, a top-down parser will behave like this:
It will start with S from the production rules and will match its yield to the left-most
letter of the input, i.e. 'r'. The very first production of S (S → rXd) matches it. So the
top-down parser advances to the next input letter (i.e. 'e'). The parser tries to
expand non-terminal 'X' and checks its first production from the left (X → oa). It does
not match the next input symbol, so the top-down parser backtracks to obtain the
next production rule of X, (X → ea).
Now the parser matches all the input letters in an ordered manner. The string is
accepted.
Predictive Parser
Predictive parser is a recursive descent parser, which has the capability to predict
which production is to be used to replace the input string. The predictive parser does
not suffer from backtracking.
To accomplish its tasks, the predictive parser uses a look-ahead pointer, which
points to the next input symbols. To make the parser back-tracking free, the
predictive parser puts some constraints on the grammar and accepts only a class of
grammar known as LL(k) grammar.
Predictive parsing uses a stack and a parsing table to parse the input and generate
a parse tree. Both the stack and the input contain an end symbol $ to denote that
the stack is empty and the input is consumed. The parser refers to the parsing table
to take any decision on the input and stack element combination.
In recursive descent parsing, the parser may have more than one production to
choose from for a single instance of input, whereas in predictive parser, each step
has at most one production to choose. There might be instances where there is no
production matching the input string, causing the parsing procedure to fail.
LL Parser
An LL Parser accepts LL grammar. LL grammar is a subset of context-free grammar
but with some restrictions to get the simplified version, in order to achieve easy
implementation. LL grammar can be implemented by means of both algorithms
namely, recursive-descent or table-driven.
An LL parser is denoted as LL(k). The first L in LL(k) means the input is parsed from
left to right, the second L stands for left-most derivation, and k itself represents the
number of lookaheads. Generally k = 1, so LL(k) may also be written as LL(1).
LL Parsing Algorithm
We may stick to deterministic LL(1) for parser explanation, as the size of table grows
exponentially with the value of k. Secondly, if a given grammar is not LL(1), then
usually, it is not LL(k), for any given k.
Given below is an algorithm for LL(1) Parsing:
Input:
string ω
parsing table M for grammar G
Output:
If ω is in L(G) then left-most derivation of ω,
error otherwise.
Initial State : $S on stack (with S being start symbol)
ω$ in the input buffer
repeat
   let X be the top stack symbol and a the symbol pointed to by ip
   if X ∈ Vt or X = $
      if X = a
         POP X and advance ip
      else
         error()
      endif
   else   /* X is non-terminal */
      if M[X, a] = X → Y1 Y2 ... Yk
         POP X
         PUSH Yk, Yk-1, ..., Y1   /* Y1 on top */
         Output the production X → Y1 Y2 ... Yk
      else
         error()
      endif
   endif
until X = $   /* empty stack */
A grammar G is LL(1) if and only if, for every pair of productions A → α | β:
no terminal begins strings derived from both α and β, i.e., FIRST(α) and FIRST(β)
are disjoint;
at most one of α and β can derive the empty string;
if β ⇒* ℇ, then α does not derive any string beginning with a terminal in
FOLLOW(A).
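A short trace of the algorithm above on an illustrative LL(1) grammar (E → T E', E' → + T E' | ℇ, T → id) and the input id + id:
Stack        Input       Action / Output
$E           id+id$      E → T E'
$E'T         id+id$      T → id
$E'id        id+id$      match id
$E'          +id$        E' → + T E'
$E'T+        +id$        match +
$E'T         id$         T → id
$E'id        id$         match id
$E'          $           E' → ℇ
$            $           accept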
Shift-Reduce Parsing
Shift-reduce parsing uses two unique steps for bottom-up parsing. These steps are
known as shift-step and reduce-step.
Shift step: The shift step refers to the advancement of the input pointer to
the next input symbol, which is called the shifted symbol. This symbol is
pushed onto the stack. The shifted symbol is treated as a single node of the
parse tree.
Reduce step : When the parser finds a complete grammar rule (RHS) on the stack
and replaces it with its LHS, this is known as the reduce step. It occurs when the top
of the stack contains a handle. To reduce, a POP function is performed on the
stack, which pops off the handle and replaces it with the LHS non-terminal
symbol. A sample trace is shown after this list.
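Using the grammar and input from the Bottom-up Parsing section above (a, b, c read as id), a shift-reduce parser would proceed roughly as follows:
Stack        Input            Action
$            id + id * id $   shift
$ id         + id * id $      reduce T → id
$ T          + id * id $      reduce E → T
$ E          + id * id $      shift
$ E +        id * id $        shift
$ E + id     * id $           reduce T → id
$ E + T      * id $           reduce E → E + T
$ E          * id $           shift
$ E *        id $             shift
$ E * id     $                reduce T → id
$ E * T      $                reduce E → E * T
$ E          $                reduce S → E
$ S          $                accept
Note that in this grammar + and * sit at the same precedence level and are left-associative, which is why E + T is reduced before * is shifted.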
LR Parser
The LR parser is a non-recursive, shift-reduce, bottom-up parser. It handles a wide
class of context-free grammars, which makes it a very efficient and widely used
syntax analysis technique. LR parsers are also known as LR(k) parsers, where L stands for left-to-
right scanning of the input stream; R stands for the construction of right-most
derivation in reverse, and k denotes the number of lookahead symbols to make
decisions.
There are three widely used algorithms available for constructing an LR parser:
SLR(1) – Simple LR parser: works on the smallest class of grammars; few states; simple and fast construction.
LR(1) – Canonical LR parser: works on the complete set of LR(1) grammars; generates a large table and a large number of states; slow construction.
LALR(1) – Look-Ahead LR parser: works on an intermediate class of grammars; the number of states is the same as in SLR(1).
LR Parsing Algorithm
Here we describe a skeleton algorithm of an LR parser:
token = next_token()
repeat forever
   s = top of stack
   if action[s, token] = "shift si" then
      PUSH token
      PUSH si
      token = next_token()
   else if action[s, token] = "reduce A ::= β" then
      POP 2 * |β| symbols
      s = top of stack
      PUSH A
      PUSH goto[s, A]
   else if action[s, token] = "accept" then
      return
   else
      error()
LL vs. LR
LL: Starts with the root non-terminal on the stack.
LR: Ends with the root non-terminal on the stack.
LL: Uses the stack for designating what is still to be expected.
LR: Uses the stack for designating what is already seen.
LL: Builds the parse tree top-down.
LR: Builds the parse tree bottom-up.
LL: Continuously pops a non-terminal off the stack and pushes the corresponding right-hand side.
LR: Tries to recognize a right-hand side on the stack, pops it, and pushes the corresponding non-terminal.
LL: Reads the terminals when it pops one off the stack.
LR: Reads the terminals while it pushes them on the stack.
LL: Pre-order traversal of the parse tree.
LR: Post-order traversal of the parse tree.
Error Recovery
A parser should be able to detect and report any error found in the program. Errors
may occur at various levels:
Lexical : name of some identifier typed incorrectly
Syntactical : missing semicolon or unbalanced parentheses
Semantical : incompatible value assignment
Logical : unreachable code, infinite loop
There are four common error-recovery strategies that can be implemented in the
parser to deal with errors in the code.
Panic mode
When a parser encounters an error anywhere in a statement, it ignores the rest of
the statement by not processing input from the erroneous point up to a delimiter,
such as a semicolon. This is the easiest way of error recovery, and it also prevents
the parser from developing infinite loops.
Statement mode
When a parser encounters an error, it tries to take corrective measures so that the
rest of inputs of statement allow the parser to parse ahead. For example, inserting a
missing semicolon, replacing comma with a semicolon etc. Parser designers have to
be careful here because one wrong correction may lead to an infinite loop.
Error productions
Some common errors are known to the compiler designers that may occur in the
code. In addition, the designers can create augmented grammar to be used, as
productions that generate erroneous constructs when these errors are encountered.
Global correction
The parser considers the program in hand as a whole and tries to figure out what
the program is intended to do and tries to find out a closest match for it, which is
error-free. When an erroneous input (statement) X is fed, it creates a parse tree for
some closest error-free statement Y. This may allow the parser to make minimal
changes in the source code, but due to the complexity (time and space) of this
strategy, it has not been implemented in practice yet.
If watched closely, we find that most of the leaf nodes of a parse tree are single
children of their parent nodes. This information can be eliminated before feeding the
tree to the next phase. By hiding the extra information, we obtain an abstract syntax
tree (AST).
ASTs are important data structures in a compiler, carrying the least amount of
unnecessary information. ASTs are more compact than a parse tree and can be easily
used by a compiler.
A CFG production on its own, for example E → E + T, has no semantic rule
associated with it, and it cannot help in making any sense of the production.
Semantics
Semantics of a language provide meaning to its constructs, like tokens and syntax
structure. Semantics help interpret symbols, their types, and their relations with each
other. Semantic analysis judges whether the syntax structure constructed in the
source program derives any meaning or not.
CFG + semantic rules = Syntax Directed Definitions
For example:
int a = "value";
should not issue an error in the lexical and syntax analysis phases, as it is lexically
and structurally correct, but it should generate a semantic error as the type of the
assignment differs. These rules are set by the grammar of the language and
evaluated in semantic analysis. The following tasks should be performed in
semantic analysis:
Scope resolution
Type checking
Array-bound checking
Semantic Errors
We have mentioned some of the semantics errors that the semantic analyzer is
expected to recognize:
Type mismatch
Undeclared variable
For example, in the production
E → E + T { E.value = E.value + T.value }
the right part contains the semantic rule that specifies how the grammar should be
interpreted. Here, the values of non-terminals E and T are added together and the
result is copied to the non-terminal E.
Semantic attributes may be assigned values from their domain at the time of
parsing and evaluated at the time of assignment or in conditions. Based on the way
the attributes get their values, they can be broadly divided into two categories:
synthesized attributes and inherited attributes.
Synthesized attributes
These attributes get values from the attribute values of their child nodes. To
illustrate, assume the following production:
S → ABC
If S is taking values from its child nodes (A,B,C), then it is said to be a synthesized
attribute, as the values of ABC are synthesized to S.
As in our previous example (E → E + T), the parent node E gets its value from its
child node. Synthesized attributes never take values from their parent nodes or any
sibling nodes.
Inherited attributes
In contrast to synthesized attributes, inherited attributes can take values from parent
and/or siblings. As in the following production,
S → ABC
A can get values from S, B and C. B can take values from S, A, and C. Likewise, C
can take values from S, A, and B.
Expansion : When a non-terminal is expanded to terminals as per a grammatical
rule
Reduction : When a terminal is reduced to its corresponding non-terminal according
to grammar rules. Syntax trees are parsed top-down and left to right. Whenever
reduction occurs, we apply its corresponding semantic rules (actions).
Semantic analysis uses Syntax Directed Translations to perform the above tasks.
Semantic analyzer receives AST (Abstract Syntax Tree) from its previous stage
(syntax analysis).
Semantic analyzer attaches attribute information with AST, which are called
Attributed AST.
Attributes are two-tuple values: <attribute name, attribute value>.
For example:
int value = 5;
<type, "integer">
<presentvalue, "5">
S-attributed SDT
If an SDT uses only synthesized attributes, it is called an S-attributed SDT. In
S-attributed SDTs, the semantic actions are written at the right end of the production
(right-hand side), and the attributes are evaluated in bottom-up fashion.
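For example, an illustrative S-attributed definition that computes the value of a sum, with each semantic action written at the right end of its production:
E → E1 + T   { E.val = E1.val + T.val }
E → T        { E.val = T.val }
T → digit    { T.val = digit.lexval }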
L-attributed SDT
This form of SDT uses both synthesized and inherited attributes with restriction of
not taking values from right siblings.
In L-attributed SDTs, a non-terminal can get values from its parent, child, and sibling
nodes. As in the following production
S → ABC
S can take values from A, B, and C (synthesized). A can take values from S only. B
can take values from S and A. C can get values from S, A, and B. No non-terminal
can get values from the sibling to its right.
Attributes in L-attributed SDTs are evaluated in a depth-first, left-to-right manner.
Activation Trees
A program is a sequence of instructions combined into a number of procedures.
Instructions in a procedure are executed sequentially. A procedure has a start and
an end delimiter, and everything inside it is called the body of the procedure. The
procedure identifier and the finite sequence of instructions inside it make up the
procedure.
The execution of a procedure is called its activation. An activation record contains all
the necessary information required to call a procedure. An activation record may
contain the following units (depending upon the source language used).
Machine Status Stores machine status such as Registers, Program Counter etc.,
before the procedure is called.
Control Link Stores the address of activation record of the caller procedure.
Access Link Stores the information of data which is outside the local scope.
Actual Parameters Stores actual parameters, i.e., parameters which are used to send
input to the called procedure.
Whenever a procedure is executed, its activation record is stored on the stack, also
known as control stack. When a procedure calls another procedure, the execution of
the caller is suspended until the called procedure finishes execution. At this time, the
activation record of the called procedure is stored on the stack.
We assume that the program control flows in a sequential manner and when a
procedure is called, its control is transferred to the called procedure. When a called
procedure is executed, it returns the control back to the caller. This type of control
flow makes it easier to represent a series of activations in the form of a tree, known
as the activation tree.
To understand this concept, we take a piece of code as an example:
. . .
printf("Enter Your Name: ");
scanf("%s", username);
show_data(username);
printf("Press any key to continue...");
. . .
int show_data(char *user)
{
   printf("Your name is %s", user);
   return 0;
}
. . .
Storage Allocation
Runtime environment manages runtime memory requirements for the following
entities:
Code : It is known as the text part of a program that does not change at
runtime. Its memory requirements are known at the compile time.
Procedures : Their text part is static but they are called in a random manner.
That is why, stack storage is used to manage procedure calls and
activations.
Variables : Variables are known at the runtime only, unless they are global or
constant. Heap memory allocation scheme is used for managing allocation
and de-allocation of memory for variables in runtime.
Static Allocation
In this allocation scheme, the compilation data is bound to a fixed location in the
memory and it does not change when the program executes. As the memory
requirement and storage locations are known in advance, runtime support package
for memory allocation and de-allocation is not required.
Stack Allocation
Procedure calls and their activations are managed by means of stack memory
allocation. It works in last-in-first-out (LIFO) method and this allocation strategy is
very useful for recursive procedure calls.
Heap Allocation
Variables local to a procedure are allocated and de-allocated only at runtime. Heap
allocation is used to dynamically allocate memory to the variables and claim it back
when the variables are no more required.
Apart from the statically allocated memory area, both stack and heap memory can
grow and shrink dynamically and unexpectedly. Therefore, they cannot be provided
with a fixed amount of memory in the system.
As shown in the image above, the text part of the code is allocated a fixed amount of
memory. Stack and heap memory are arranged at the extremes of total memory
allocated to the program. Both shrink and grow against each other.
Parameter Passing
The communication medium among procedures is known as parameter passing.
The values of the variables from a calling procedure are transferred to the called
procedure by some mechanism. Before moving ahead, first go through some basic
terminologies pertaining to the values in a program.
r-value
The value of an expression is called its r-value. The value contained in a single
variable also becomes an r-value if it appears on the right-hand side of the
assignment operator. r-values can always be assigned to some other variable.
l-value
The location of memory (address) where an expression is stored is known as the l-
value of that expression. It always appears at the left hand side of an assignment
operator.
For example:
day = 1;
week = day * 7;
month = 1;
year = month * 12;
From this example, we understand that constant values like 1, 7, 12, and variables
like day, week, month and year, all have r-values. Only variables have l-values as
they also represent the memory location assigned to them.
For example:
7 = x + y;
is an l-value error, as the constant 7 does not represent any memory location.
Formal Parameters
Variables that take the information passed by the caller procedure are called formal
parameters. These variables are declared in the definition of the called function.
Actual Parameters
Variables whose values or addresses are being passed to the called procedure are
called actual parameters. These variables are specified in the function call as
arguments.
Example:
fun_one()
{
   int actual_parameter = 10;
   call fun_two(int actual_parameter);
}
fun_two(int formal_parameter)
{
   print formal_parameter;
}
Formal parameters hold the information of the actual parameter, depending upon
the parameter passing technique used. It may be a value or an address.
Pass by Value
In pass by value mechanism, the calling procedure passes the r-value of actual
parameters and the compiler puts that into the called procedure’s activation record.
Formal parameters then hold the values passed by the calling procedure. If the
values held by the formal parameters are changed, it should have no impact on the
actual parameters.
Pass by Reference
In pass by reference mechanism, the l-value of the actual parameter is copied to the
activation record of the called procedure. This way, the called procedure now has
the address (memory location) of the actual parameter and the formal parameter
refers to the same memory location. Therefore, if the value pointed by the formal
parameter is changed, the impact should be seen on the actual parameter as they
should also point to the same value.
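The difference can be sketched directly in C (function names here are illustrative): C itself passes arguments by value, and passing a pointer to an object simulates pass by reference.

#include <stdio.h>

void set_by_value(int formal)      { formal = 99; }   /* changes a local copy only      */
void set_by_reference(int *formal) { *formal = 99; }  /* writes through the caller's l-value */

int main(void)
{
    int actual = 10;
    set_by_value(actual);
    printf("%d\n", actual);        /* prints 10: the actual parameter is untouched */
    set_by_reference(&actual);
    printf("%d\n", actual);        /* prints 99: the callee modified the actual parameter */
    return 0;
}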
Pass by Copy-restore
This parameter passing mechanism works similar to ‘pass-by-reference’ except that
the changes to actual parameters are made when the called procedure ends. Upon
function call, the values of actual parameters are copied in the activation record of
the called procedure. Formal parameters if manipulated have no real-time effect on
actual parameters (as l-values are passed), but when the called procedure ends, the
l-values of formal parameters are copied to the l-values of actual parameters.
Example:
int y;
calling_procedure()
{
   y = 10;
   copy_restore(y);   // l-value of y is passed
   printf y;          // prints 99
}
copy_restore(int x)
{
   x = 99;            // y still has value 10 (unaffected)
   y = 0;             // y is now 0
}
When this function ends, the l-value of formal parameter x is copied to the actual
parameter y. Even if the value of y is changed before the procedure ends, the l-
value of x is copied to the l-value of y making it behave like call by reference.
Pass by Name
Languages like ALGOL provide a parameter passing mechanism that works like the
preprocessor in the C language. In the pass-by-name mechanism, a call is handled
much like a macro expansion: the argument expressions in the procedure call are
textually substituted for the corresponding parameters in the body of the procedure,
so that the body now works on the actual parameters, much like pass-by-reference.
Symbol Table
A symbol table is a data structure created and maintained by the compiler. It is used
for the following purposes:
To store the names of all entities in a structured form at one place.
To verify if a variable has been declared.
To implement type checking, by verifying assignments and expressions in
the source code are semantically correct.
To determine the scope of a name (scope resolution).
A symbol table is simply a table which can be either linear or a hash table. It
maintains an entry for each name in the following format:
<symbol name, type, attribute>
For example, if a symbol table has to store information about the following variable
declaration:
static int interest;
then it should store the entry in the format:
<interest, int, static>
Implementation
If a compiler is to handle a small amount of data, then the symbol table can be
implemented as an unordered list, which is easy to code but suitable only for small
tables. A symbol table can be implemented in one of the following ways:
Linear (sorted or unsorted) list
Binary Search Tree
Hash table
Among all, symbol tables are mostly implemented as hash tables, where the source
code symbol itself is treated as a key for the hash function and the return value is
the information about the symbol.
Operations
A symbol table, either linear or hash, should provide the following operations.
insert()
This operation is more frequently used by analysis phase, i.e., the first half of the
compiler where tokens are identified and names are stored in the table. This
operation is used to add information in the symbol table about unique names
occurring in the source code. The format or structure in which the names are stored
depends upon the compiler in hand.
An attribute for a symbol in the source code is the information associated with that
symbol. This information contains the value, state, scope, and type about the
symbol. The insert() function takes the symbol and its attributes as arguments and
stores the information in the symbol table.
For example:
int a;
should be processed by the compiler as:
insert(a, int);
lookup()
The lookup() operation is used to search a name in the symbol table to determine:
whether the symbol exists in the table
whether it is declared before it is being used
whether the name is used in the scope
whether the symbol is initialized
whether the symbol has been declared multiple times
This method returns 0 (zero) if the symbol does not exist in the symbol table. If the
symbol exists in the symbol table, it returns its attributes stored in the table.
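A minimal sketch of a chained hash-table symbol table with insert() and lookup() (the bucket count, entry fields, and hash function are illustrative assumptions, not a prescribed design):

#include <stdlib.h>
#include <string.h>

#define BUCKETS 211

struct symbol {
    char *name;
    char *type;              /* e.g. "int", "float" */
    struct symbol *next;     /* chaining resolves hash collisions */
};

static struct symbol *table[BUCKETS];

static unsigned hash(const char *s)
{
    unsigned h = 0;
    while (*s) h = h * 31 + (unsigned char)*s++;
    return h % BUCKETS;
}

/* insert(): add a name and its attributes to the table. */
void insert(const char *name, const char *type)
{
    unsigned h = hash(name);
    struct symbol *sym = malloc(sizeof *sym);
    sym->name = strdup(name);            /* strdup: POSIX / C23 */
    sym->type = strdup(type);
    sym->next = table[h];                /* push onto the bucket's chain */
    table[h] = sym;
}

/* lookup(): return the entry for name, or NULL (0) if it is absent. */
struct symbol *lookup(const char *name)
{
    for (struct symbol *sym = table[hash(name)]; sym; sym = sym->next)
        if (strcmp(sym->name, name) == 0)
            return sym;
    return NULL;
}

The source-code symbol itself acts as the hash key, and lookup() returns 0 (NULL) when the symbol is not present, matching the behaviour described above.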
Scope Management
A compiler maintains two types of symbol tables: a global symbol table which can
be accessed by all the procedures and scope symbol tables that are created for
each scope in the program.
To determine the scope of a name, symbol tables are arranged in hierarchical
structure as shown in the example below:
. . .
int value=10;
void pro_one()
{
int one_1;
int one_2;
{ \
int one_3; |_ inner scope 1
int one_4; |
} /
int one_5;
{ \
int one_6; |_ inner scope 2
int one_7; |
} /
}
void pro_two()
{
int two_1;
int two_2;
{ \
int two_3; |_ inner scope 3
int two_4; |
} /
int two_5;
}. . .
The global symbol table contains names for one global variable (int value) and two
procedure names, which should be available to all the child nodes shown above.
The names mentioned in the pro_one symbol table (and all its child tables) are not
available for pro_two symbols and its child tables.
This symbol table data structure hierarchy is stored in the semantic analyzer and
whenever a name needs to be searched in a symbol table, it is searched using the
following algorithm:
first, a symbol is searched in the current scope, i.e. the current symbol table;
if the name is found, the search is completed; otherwise it is searched in the
parent symbol table, until
either the name is found or the global symbol table has been searched for the
name. (A small sketch of this search follows.)
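A small sketch of that search, assuming each scope's table keeps a pointer to its parent (the struct shapes echo the previous sketch and are illustrative):

#include <string.h>
#include <stddef.h>

struct symbol {                   /* same shape as in the previous sketch */
    char *name;
    char *type;
    struct symbol *next;
};

struct scope {
    struct scope  *parent;        /* enclosing scope; NULL for the global table */
    struct symbol *symbols;       /* entries declared in this scope */
};

/* Search the current scope first, then each enclosing scope in turn,
 * stopping when the name is found or the global table has been searched. */
struct symbol *resolve(const struct scope *current, const char *name)
{
    for (const struct scope *s = current; s != NULL; s = s->parent)
        for (struct symbol *sym = s->symbols; sym; sym = sym->next)
            if (strcmp(sym->name, name) == 0)
                return sym;
    return NULL;
}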
Intermediate Code Generation
Generating an intermediate representation between the source and target languages
has the following advantages:
If a compiler translates the source language to its target machine language
without having the option for generating intermediate code, then for each
new machine, a full native compiler is required.
Intermediate code eliminates the need of a new full compiler for every unique
machine by keeping the analysis portion same for all the compilers.
The second part of compiler, synthesis, is changed according to the target
machine.
It becomes easier to apply the source code modifications to improve code
performance by applying code optimization techniques on the intermediate
code.
Intermediate Representation
Intermediate code can be represented in a variety of ways, and each representation
has its own benefits.
High Level IR - High-level intermediate code representation is very close to
the source language itself. They can be easily generated from the source
code and we can easily apply code modifications to enhance performance.
But for target machine optimization, it is less preferred.
Low Level IR - This one is close to the target machine, which makes it
suitable for register and memory allocation, instruction set selection, etc. It is
good for machine-dependent optimizations.
Intermediate code can be either language specific (e.g., Byte Code for Java) or
language independent (three-address code).
Three-Address Code
Intermediate code generator receives input from its predecessor phase, semantic
analyzer, in the form of an annotated syntax tree. That syntax tree then can be
converted into a linear representation, e.g., postfix notation. Intermediate code tends
to be machine-independent. Therefore, the code generator assumes an unlimited
number of memory locations (registers) to be available when generating code.
For example:
a = b + c * d;
The intermediate code generator will try to divide this expression into sub-
expressions and then generate the corresponding code.
r1 = c * d;
r2 = b + r1;
a = r2
Quadruples
Each instruction in quadruples presentation is divided into four fields: op, arg1,
arg2, and result. The example above is represented in quadruples format as:
Op   arg1   arg2   result
*    c      d      r1
+    b      r1     r2
=    r2            a
Triples
Each instruction in triples presentation has three fields: op, arg1, and arg2. The
results of the respective sub-expressions are denoted by the position of the
expression. Triples are similar to DAGs and syntax trees; they are equivalent to a
DAG while representing expressions.
Op   arg1   arg2
*    c      d
+    b      (0)
=    a      (1)
Triples face the problem of code immovability during optimization: the results are
positional, so changing the order or position of an expression may cause problems.
Indirect Triples
This representation is an enhancement over triples representation. It uses pointers
instead of position to store results. This enables the optimizers to freely re-position
the sub-expression to produce an optimized code.
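For instance, the triples above could be kept unchanged while a separate statement list points to them in execution order (the labels T1–T3 are illustrative):
Statement list        Triples (unchanged)
(0) → T1              T1:  *   c   d
(1) → T2              T2:  +   b   T1
(2) → T3              T3:  =   a   T2
Reordering or inserting entries in the statement list does not disturb the triples themselves.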
Declarations
A variable or procedure has to be declared before it can be used. Declaration
involves allocation of space in memory and entry of type and name in the symbol
table. A program may be coded and designed keeping the target machine structure
in mind, but it may not always be possible to accurately convert a source code to its
target language.
Taking the whole program as a collection of procedures and sub-procedures, it
becomes possible to declare all the names local to the procedure. Memory
allocation is done in a consecutive manner and names are allocated to memory in
the sequence they are declared in the program. We use an offset variable, initialized
to zero {offset = 0}, to denote the base address.
The source programming language and the target machine architecture may vary in
the way names are stored, so relative addressing is used. While the first name is
allocated memory starting from the memory location 0 {offset=0}, the next name
declared later, should be allocated memory next to the first one.
Example:
We take the example of C programming language where an integer variable is
assigned 2 bytes of memory and a float variable is assigned 4 bytes of memory.
int a;
float b;
Allocation process:{offset = 0}
int a;
id.type = int
id.width = 2
float b;
id.type = float
id.width = 4
To enter this detail in a symbol table, a procedure enter can be used. This method
may have the following structure:
enter(name, type, offset)
This procedure should create an entry in the symbol table, for variable name, having
its type set to type and relative address offset in its data area.
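Putting the pieces together for the two declarations above (with the assumed widths of 2 and 4 bytes), the allocation would proceed as:
int a;    →  enter(a, int, 0);      offset = 0 + 2 = 2
float b;  →  enter(b, float, 2);    offset = 2 + 4 = 6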
Directed Acyclic Graph
A directed acyclic graph (DAG) depicts the structure of a basic block and shows how
the value computed by each statement flows into subsequent statements. In a DAG:
Leaf nodes represent identifiers, names or constants.
Interior nodes represent operators.
Interior nodes also represent the results of expressions or the
identifiers/names where the values are to be stored or assigned.
Example:
t0 = a + b
t1 = t0 + c
d = t0 + t1
Peephole Optimization
This optimization technique works locally, transforming a small window of code at a
time into optimized code. By locally, we mean a small portion of the code block at
hand. These methods can be applied on intermediate code as well as on target
code. A small group of statements is analyzed and checked for possible
optimizations such as the following:
For example, consider the instruction sequence:
MOV x, R0
MOV R0, R1
We can delete the first instruction and rewrite the pair as (assuming the value in R0
is not needed later):
MOV x, R1
Unreachable code
Unreachable code is a part of the program that is never executed because of the
surrounding programming constructs. Programmers may accidentally write a piece
of code that can never be reached.
Example:
int add_ten(int x)
{
   return x + 10;
   printf("value of x is %d", x);
}
In this code segment, the printf statement will never be executed, as the program
control returns before it can execute; hence the printf can be removed.
Code Generator
A code generator is expected to have an understanding of the target machine’s
runtime environment and its instruction set. The code generator should take the
following things into consideration to generate the code:
Target language : The code generator has to be aware of the nature of the
target language for which the code is to be transformed. That language may
facilitate some machine-specific instructions to help the compiler generate
the code in a more convenient way. The target machine can have either
CISC or RISC processor architecture.
IR Type : Intermediate representation has various forms. It can be in
Abstract Syntax Tree (AST) structure, Reverse Polish Notation, or 3-address
code.
Selection of instruction : The code generator takes Intermediate
Representation as input and converts (maps) it into target machine’s
instruction set. One representation can have many ways (instructions) to
convert it, so it becomes the responsibility of the code generator to choose
the appropriate instructions wisely.
Register allocation : A program has a number of values to be maintained
during the execution. The target machine’s architecture may not allow all of
the values to be kept in the CPU memory or registers. Code generator
decides what values to keep in the registers. Also, it decides the registers to
be used to keep these values.
Ordering of instructions : At last, the code generator decides the order in
which the instruction will be executed. It creates schedules for instructions to
execute them.
Descriptors
The code generator has to track both the registers (for availability) and addresses
(location of values) while generating the code. For both of them, the following two
descriptors are used:
Register descriptor : Register descriptor is used to inform the code
generator about the availability of registers. Register descriptor keeps track
of values stored in each register. Whenever a new register is required during
code generation, this descriptor is consulted for register availability.
Address descriptor : Values of the names (identifiers) used in the program
might be stored at different locations while in execution. Address descriptors
are used to keep track of memory locations where the values of identifiers
are stored. These locations may include CPU registers, heaps, stacks,
memory or a combination of the mentioned locations.
The code generator keeps both descriptors updated in real time. For a load
statement, LD R1, x, the code generator:
updates the Register Descriptor of R1 to show that it holds the value of x, and
updates the Address Descriptor of x to show that one instance of x is in R1.
Code Generation
Basic blocks comprise a sequence of three-address instructions. The code generator
takes these sequences of instructions as input.
Note : If the value of a name is found at more than one place (register, cache, or
memory), the register’s value will be preferred over the cache and main memory.
Likewise cache’s value will be preferred over the main memory. Main memory is
barely given any preference.
getReg : Code generator uses getReg function to determine the status of available
registers and the location of name values. getReg works as follows:
If variable Y is already in register R, it uses that register.
Else if some register R is available, it uses that register.
Else if both the above options are not possible, it chooses a register that
requires minimal number of load and store instructions.
For an instruction x = y OP z, the code generator may perform the following actions.
Let us assume that L is the location (preferably register) where the output of y OP z
is to be saved:
Call function getReg, to decide the location of L.
Determine the present location (register or memory) of y by consulting the
Address Descriptor of y. If y is not presently in register L, then generate the
following instruction to copy the value of y to L:
MOV y’, L
where y’ represents the copied value of y.
Determine the present location of z using the same method used in step 2
for y and generate the following instruction:
OP z’, L
where z’ represents the copied value of z.
Now L contains the value of y OP z, that is intended to be assigned to x. So,
if L is a register, update its descriptor to indicate that it contains the value
of x. Update the descriptor of x to indicate that it is stored at location L.
If y and z have no further use, they can be given back to the system. (A short
illustrative trace of these steps follows.)
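As an illustrative trace (the register name R0 and the MOV/ADD mnemonics are assumptions, in the style of the earlier examples), generating code for x = y + z when y and z both reside in memory might proceed as follows:
getReg returns R0 as the location L.
MOV y, R0      ; step 2: copy y into L, since y is not already in a register
ADD z, R0      ; step 3: R0 now holds y + z
Finally (step 4), the register descriptor records that R0 holds the value of x, and the address descriptor of x records that x is in R0; a store MOV R0, x is emitted only if x is needed in memory later, for example at the end of the basic block.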
Other code constructs like loops and conditional statements are transformed into
assembly language in general assembly way.
The output code must not, in any way, change the meaning of the program.
Optimization should increase the speed of the program and, if possible, the
program should demand fewer resources.
Optimization should itself be fast and should not delay the overall compiling
process.
Efforts for an optimized code can be made at various levels of the compilation
process.
At the beginning, users can change/rearrange the code or use better
algorithms to write the code.
After generating intermediate code, the compiler can modify the intermediate
code by address calculations and improving loops.
While producing the target machine code, the compiler can make use of
memory hierarchy and CPU registers.
Optimization can be broadly categorized into two types: machine independent and
machine dependent.
Machine-independent Optimization
In this optimization, the compiler takes in the intermediate code and transforms a
part of the code that does not involve any CPU registers and/or absolute memory
locations. For example:
do
{
   item = 10;
   value = value + item;
} while(value < 100);
This code involves repeated assignment of the identifier item, which if we put this
way:
item = 10;
do
{
   value = value + item;
} while(value < 100);
should not only save CPU cycles, but can be used on any processor.
Machine-dependent Optimization
Machine-dependent optimization is done after the target code has been generated
and when the code is transformed according to the target machine architecture. It
involves CPU registers and may have absolute memory references rather than
relative references. Machine-dependent optimizers put efforts to take maximum
advantage of memory hierarchy.
Basic Blocks
Source codes generally have a number of instructions, which are always executed
in sequence and are considered as the basic blocks of the code. These basic blocks
do not have any jump statements among them, i.e., when the first instruction is
executed, all the instructions in the same basic block will be executed in their
sequence of appearance without losing the flow control of the program.
A program can have various constructs as basic blocks, like IF-THEN-ELSE,
SWITCH-CASE conditional statements and loops such as DO-WHILE, FOR, and
REPEAT-UNTIL, etc.
Search for the header statements of all the basic blocks, i.e., the statements
from where a basic block starts: the first statement of a program, any
statement that is the target of a branch (conditional or unconditional), and
any statement that follows a branch statement. (A small example follows
this list.)
Header statements and the statements following them form a basic block.
A basic block does not include any header statement of any other basic
block.
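A small illustrative example of this partitioning (the three-address statements are made up for the illustration): statements 1, 4, 6 and 7 below are headers (first statement, statement following a branch, and branch targets).
1)  t1 = a + b
2)  t2 = t1 * c
3)  if t2 < 10 goto (6)
4)  t3 = t2 + 1
5)  goto (7)
6)  t3 = t2 - 1
7)  d = t3
The basic blocks are therefore B1 = {1, 2, 3}, B2 = {4, 5}, B3 = {6}, and B4 = {7}.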
Basic blocks are important concepts from both code generation and optimization
point of view.
Basic blocks play an important role in identifying variables that are used more than
once within a single basic block. If a variable is used more than once, the register
allocated to that variable need not be emptied until the block finishes execution.
Control Flow Graph
Basic blocks in a program can be represented by means of control flow graphs. A
control flow graph depicts how the program control is being passed among the
blocks. It is a useful tool that helps in optimization by helping locate any unwanted
loops in the program.
Loop Optimization
Most programs run as a loop in the system. It becomes necessary to optimize the
loops in order to save CPU cycles and memory. Loops can be optimized by the
following techniques:
Invariant code : A fragment of code that resides in the loop and computes the same
value at each iteration is called a loop-invariant code. This code can be moved out of
the loop by saving it to be computed only once, rather than with each iteration.
Induction analysis : A variable is called an induction variable if its value is
altered within the loop by a loop-invariant value.
Strength reduction : There are expressions that consume more CPU
cycles, time, and memory. These expressions should be replaced with
cheaper expressions without compromising the output of the expression. For
example, multiplication (x * 2) is more expensive in terms of CPU cycles than
(x << 1), which yields the same result. (A combined sketch follows this list.)
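A short C sketch showing invariant code motion and strength reduction together (the function and variable names are illustrative):

/* Before: limit * 2 is loop-invariant, and i * 8 grows by a fixed step each iteration. */
void fill_before(int *a, int n, int limit)
{
    for (int i = 0; i < n; i++)
        a[i] = limit * 2 + i * 8;
}

/* After: the invariant is hoisted out of the loop (computed once), and the
 * multiplication by 8 is strength-reduced to a running addition. */
void fill_after(int *a, int n, int limit)
{
    int t = limit * 2;                        /* invariant code motion */
    for (int i = 0, step = 0; i < n; i++, step += 8)
        a[i] = t + step;                      /* step == i * 8 */
}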
Dead-code Elimination
Dead code is one or more code statements that are either never executed
(unreachable), or whose output, if executed, is never used by any other computation.
Such code plays no role in the program and can therefore be eliminated.
The above control flow graph depicts a chunk of program where variable 'a' is used
to hold the output of the expression 'x * y'. Let us assume that the value assigned to
'a' is never used inside the loop. Immediately after the control leaves the loop, 'a' is
assigned the value of variable 'z', which is used later in the program. We conclude
that the assignment to 'a' is never used anywhere, and therefore it is eligible to be
eliminated.
Likewise, the picture above depicts that the conditional statement is always false,
implying that the code, written in true case, will never be executed, hence it can be
removed.
Partial Redundancy
Redundant expressions are computed more than once along parallel paths, without
any change in operands; partially redundant expressions are computed more than
once along some path, without any change in operands. For example,
[redundant expression] [partially redundant expression]
We assume that the values of operands (y and z) are not changed from assignment
of variable a to variable c. Here, if the condition statement is true, then y OP z is
computed twice, otherwise once. Code motion can be used to eliminate this
redundancy, as shown below:
if (condition)
{
   ...
   tmp = y OP z;
   a = tmp;
   ...
}
else
{
   ...
   tmp = y OP z;
}
c = tmp;
Here, whether the condition is true or false, y OP z is computed only once.