Compiler Notes
An assembler then translates the assembly program into machine code (object).
A linker tool is used to link all the parts of the program together for execution (executable
machine code).
A loader loads all of them into memory and then the program is executed.
Preprocessor
A preprocessor, generally considered a part of the compiler, is a tool that
produces input for compilers. It deals with macro-processing,
augmentation, file inclusion, language extension, etc.
Interpreter
An interpreter, like a compiler, translates high-level language into low-level
machine language. The difference lies in the way they read the source code
or input. A compiler reads the whole source code at once, creates tokens,
checks semantics, generates intermediate code, and translates the whole
program; this may involve many passes. In contrast, an interpreter reads a
statement from the input, converts it to an intermediate code, executes it,
then takes the next statement in sequence. If an error occurs, an
interpreter stops execution and reports it, whereas a compiler reads the
whole program even if it encounters several errors.
Assembler
An assembler translates assembly language programs into machine
code. The output of an assembler is called an object file, which contains a
combination of machine instructions as well as the data required to place
these instructions in memory.
Linker
A linker is a computer program that links and merges various object files
together in order to make an executable file. All these files might have been
compiled by separate assemblers. The major task of a linker is to search
and locate referenced modules/routines in a program and to determine the
memory location where these codes will be loaded, so that the program
instructions have absolute references.
Loader
A loader is a part of the operating system and is responsible for loading
executable files into memory and executing them. It calculates the size of a
program (instructions and data) and creates memory space for it. It
initializes various registers to initiate execution.
Cross-compiler
A compiler that runs on platform (A) and is capable of generating
executable code for platform (B) is called a cross-compiler.
Source-to-source Compiler
A compiler that takes the source code of one programming language and
translates it into the source code of another programming language is called
a source-to-source compiler.
A compiler can broadly be divided into two phases, based on the way it
compiles.
Analysis Phase
Known as the front-end of the compiler, the analysis phase of the compiler
reads the source program, divides it into core parts, and then checks for
lexical, grammar, and syntax errors. The analysis phase generates an
intermediate representation of the source program and a symbol table, which
are fed to the synthesis phase as input.
Synthesis Phase
Known as the back-end of the compiler, the synthesis phase generates the
target program with the help of intermediate source code representation
and symbol table.
A compiler can have many phases and passes.
Pass : A pass refers to the traversal of a compiler through the entire program.
Phase : A phase of a compiler is a distinguishable stage, which takes input from the previous
stage, processes and yields output that can be used as input for the next stage. A pass can
have more than one phase.
Lexical Analysis
The first phase of the compiler, the scanner, works as a text scanner. This
phase scans the source code as a stream of characters and converts it into
meaningful lexemes. The lexical analyzer represents these lexemes in the
form of tokens as:
<token-name, attribute-value>
Syntax Analysis
The next phase is called the syntax analysis or parsing. It takes the
token produced by lexical analysis as input and generates a parse tree
(or syntax tree). In this phase, token arrangements are checked
against the source code grammar, i.e. the parser checks if the
expression made by the tokens is syntactically correct.
Semantic Analysis
Semantic analysis checks whether the parse tree constructed follows
the rules of the language: for example, that values are assigned between
compatible data types, and that a string is not added to an integer. The
semantic analyzer also keeps track of identifiers, their types, and
expressions: whether identifiers are declared before use, and so on.
The semantic analyzer produces an annotated syntax tree as an
output.
Code Optimization
The next phase does code optimization of the intermediate code.
Optimization can be assumed as something that removes unnecessary
code lines, and arranges the sequence of statements in order to speed
up the program execution without wasting resources (CPU, memory).
Code Generation
In this phase, the code generator takes the optimized representation
of the intermediate code and maps it to the target machine language.
The code generator translates the intermediate code into a sequence
of (generally) relocatable machine code. This sequence of machine
instructions performs the task just as the intermediate code would.
Symbol Table
It is a data structure maintained throughout all the phases of a
compiler. All the identifiers' names along with their types are stored
here. The symbol table makes it easier for the compiler to quickly
search the identifier record and retrieve it. The symbol table is also
used for scope management.
Tokens
Lexemes are said to be a sequence of characters (alphanumeric) in a token.
There are some predefined rules for every lexeme to be identified as a valid
token. These rules are defined by grammar rules, by means of a pattern. A
pattern explains what can be a token, and these patterns are defined by
means of regular expressions.
In programming language, keywords, constants, identifiers, strings,
numbers, operators and punctuations symbols can be considered as tokens.
For example, in the C language, the variable declaration line
int value = 100;
contains the tokens: int (keyword), value (identifier), = (operator),
100 (constant), and ; (symbol).
Specifications of Tokens
Let us understand how the language theory undertakes the following terms:
Alphabets
Any finite set of symbols is an alphabet: {0,1} is the binary alphabet,
{0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F} is the hexadecimal alphabet, and {a-z,
A-Z} is the English-language alphabet.
Strings
Any finite sequence of symbols from an alphabet is called a string. The
length of a string is the total number of occurrences of symbols in it; e.g.,
the length of the string tutorialspoint is 14 and is denoted by
|tutorialspoint| = 14. A string having no symbols, i.e., a string of zero
length, is known as the empty string and is denoted by ε (epsilon).
Special Symbols
A typical high-level language contains special symbols such as the following:
Assignment : =
Preprocessor : #
Language
A language is a set of strings over some finite alphabet. Computer
languages can be treated as sets, and mathematical set operations can be
performed on them. Finite languages can be described by means of regular
expressions.
Longest Match Rule
When the lexical analyzer reads the source code, it scans the code letter by
letter; when it encounters a whitespace, operator symbol, or special
symbol, it decides that a word is complete.
For example:
int intvalue;
While scanning up to 'int', the lexical analyzer cannot determine whether it
is the keyword int or the prefix of the identifier intvalue.
The Longest Match Rule states that the lexeme scanned should be
determined based on the longest match among all the tokens available.
The lexical analyzer also follows rule priority, where a reserved word, e.g.,
a keyword, of a language is given priority over user input. That is, if the
lexical analyzer finds a lexeme that matches an existing reserved word, the
lexeme is treated as that reserved word, so using it as an identifier
generates an error.
Regular Expressions
The lexical analyzer needs to scan and identify only a finite set of valid
strings/tokens/lexemes that belong to the language in hand. It searches for
the pattern defined by the language rules.
Regular expressions can express a language by defining a pattern for its
strings of symbols. The grammar defined by regular expressions is known
as regular grammar. The language defined by a regular grammar is known
as a regular language.
Regular expression is an important notation for specifying patterns. Each
pattern matches a set of strings, so regular expressions serve as names for
a set of strings. Programming language tokens can be described by regular
languages. The specification of regular expressions is an example of a
recursive definition. Regular languages are easy to understand and have
efficient implementation.
• Regular expressions are a notation to represent lexeme patterns for a token.
• They are used to represent the language for lexical analyzer.
• They assist in finding the type of token that accounts for a particular lexeme.
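As an illustration (not from the original text), typical token patterns can be written in regular-expression notation as:

letter     = [a-zA-Z]
digit      = [0-9]
identifier = letter ( letter | digit )*
number     = digit+ ( . digit+ )?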
Strings and Languages
An alphabet is a finite, non-empty set of input symbols.
Σ = {0, 1} - the binary alphabet
A string is a finite sequence of symbols drawn from the alphabet.
w = {0,1, 00, 01, 10, 11, 001, 010, ... }
w indicates the set of possible strings for the given binary alphabet Σ
Language (L) is the collection of strings which are accepted by finite automata.
L = { 0^n 1 | n >= 0 }
Operations
The various operations on languages are:
Union of two languages L and M is written as
L U M = { s | s is in L or s is in M }
Concatenation of two languages L and M is written as
LM = { st | s is in L and t is in M }
Notations
If r and s are regular expressions denoting the languages L(r) and L(s),
then
Union : (r)|(s) is a regular expression denoting L(r) U L(s)
Concatenation : (r)(s) is a regular expression denoting L(r)L(s)
Kleene closure : (r)* is a regular expression denoting (L(r))*
Start state : The state from where the automata starts, is known as the start state. Start
state has an arrow pointed towards it.
Intermediate states : All intermediate states have at least two arrows; one pointing to and
another pointing out from them.
Final state : If the input string is successfully parsed, the automaton is expected to be in this
state. A final state is represented by double circles.
Transition : The transition from one state to another happens when a desired symbol
in the input is found. Upon transition, the automaton can either move to the next state or stay
in the same state. Movement from one state to another is shown as a directed arrow, where
the arrow points to the destination state. If the automaton stays in the same state, an arrow
pointing from the state to itself is drawn.
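A minimal, illustrative sketch (not from the original text) of simulating such an automaton in C: a table-driven DFA that accepts identifiers of the form letter (letter | digit)*, with states 0 = start, 1 = inside identifier (final), and 2 = dead.

#include <stdio.h>
#include <ctype.h>

static int next_state(int s, char c)
{
   if (s == 0) return isalpha((unsigned char)c) ? 1 : 2;
   if (s == 1) return isalnum((unsigned char)c) ? 1 : 2;
   return 2; /* the dead state loops on itself */
}

static int accepts(const char *w)
{
   int s = 0; /* start state */
   for (; *w; ++w)
      s = next_state(s, *w);
   return s == 1; /* accepted only if we end in the final state */
}

int main(void)
{
   printf("%d\n", accepts("count1")); /* prints 1: accepted */
   printf("%d\n", accepts("1count")); /* prints 0: rejected */
   return 0;
}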
Every regular grammar is also context-free, but there exist some problems
that are beyond the scope of regular grammars. CFG is a helpful tool in
describing the syntax of programming languages.
Context-Free Grammar
In this section, we will first see the definition of context-free grammar and
introduce terminologies used in parsing technology.
A context-free grammar has four components:
A set of non-terminals (V). Non-terminals are syntactic variables that denote sets of strings.
The non-terminals define sets of strings that help define the language generated by the
grammar.
A set of tokens, known as terminal symbols (Σ). Terminals are the basic symbols from
which strings are formed.
A set of productions (P). The productions of a grammar specify the manner in which the
terminals and non-terminals can be combined to form strings. Each production consists of
a non-terminal called the left side of the production, an arrow, and a sequence of tokens
and/or non-terminals, called the right side of the production.
One of the non-terminals is designated as the start symbol (S); from where the production
begins.
The strings are derived from the start symbol by repeatedly replacing a
non-terminal (initially the start symbol) by the right side of a production,
for that non-terminal.
Example
We take the palindrome language, which cannot be described by
means of regular expressions. That is, L = { w | w = w^R } is not a regular
language. But it can be described by means of a CFG, as illustrated below:
G = ( V, Σ, P, S )
Where:
V = { Q, Z, N }
Σ = { 0, 1 }
P = { Q → Z | Q → N | Q → ε | Z → 0Q0 | N → 1Q1 }
S = { Q }
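As a quick check (not in the original text), the string 0110 is derived as: Q ⇒ Z ⇒ 0Q0 ⇒ 0N0 ⇒ 01Q10 ⇒ 0110, using Q → ε in the last step.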
Syntax Analyzers
A syntax analyzer or parser takes the input from a lexical analyzer in the
form of token streams. The parser analyzes the source code (token stream)
against the production rules to detect any errors in the code. The output of
this phase is a parse tree.
This way, the parser accomplishes two tasks, i.e., parsing the code, looking
for errors and generating a parse tree as the output of the phase.
Parsers are expected to parse the whole code even if some errors exist in
the program. Parsers use error recovering strategies, which we will learn
later in this chapter.
Derivation
A derivation is basically a sequence of production rules used to get the
input string. During parsing, we make two decisions for some sentential form
of input:
Deciding the non-terminal which is to be replaced.
Deciding the production rule by which the non-terminal will be replaced.
Left-most Derivation
If the sentential form of an input is scanned and replaced from left to right,
it is called left-most derivation. The sentential form derived by the left-most
derivation is called the left-sentential form.
Right-most Derivation
If we scan and replace the input with production rules, from right to left, it
is known as right-most derivation. The sentential form derived from the
right-most derivation is called the right-sentential form.
Example
Production rules:
E → E + E
E → E * E
E → id
Input string: id + id * id
The left-most derivation is:
E → E * E
E → E + E * E
E → id + E * E
E → id + id * E
E → id + id * id
The right-most derivation is:
E → E + E
E → E + E * E
E → E + E * id
E → E + id * id
E → id + id * id
Parse Tree
A parse tree is a graphical depiction of a derivation. It is convenient to see
how strings are derived from the start symbol. The start symbol of the
derivation becomes the root of the parse tree. Let us see this by an
example from the last topic.
We take the left-most derivation of a + b * c (with each identifier read as the token id).
The left-most derivation is:
E → E * E
E → E + E * E
E → id + E * E
E → id + id * E
E → id + id * id
Step 1:
E→E*E
Step 2:
E→E+E*E
Step 3:
E → id + E * E
Step 4:
E → id + id * E
Step 5:
E → id + id * id
In a parse tree:
All leaf nodes are terminals.
All interior nodes are non-terminals.
In-order traversal gives the original input string.
Ambiguity
A grammar G is said to be ambiguous if it has more than one parse tree
(left or right derivation) for at least one string.
Example
E → E + E
E → E – E
E → id
For the string id + id – id, the above grammar generates two parse trees:
Associativity
If an operand has operators on both sides, the side on which the operator
takes this operand is decided by the associativity of those operators. If the
operation is left-associative, then the operand will be taken by the left
operator or if the operation is right-associative, the right operator will take
the operand.
Example
Operations such as addition, multiplication, subtraction, and division are
left-associative. If the expression contains:
id op id op id
then, if op is left-associative, it is evaluated as (id op id) op id; if op is
right-associative, it is evaluated as id op (id op id).
Precedence
If two different operators share a common operand, the precedence of
operators decides which will take the operand. That is, 2+3*4 can have two
different parse trees, one corresponding to (2+3)*4 and another
corresponding to 2+(3*4). By setting precedence among operators, this
problem can be easily removed. As in the previous example, mathematically
* (multiplication) has precedence over + (addition), so the expression
2+3*4 will always be interpreted as:
2+(3*4)
Left Recursion
A grammar becomes left-recursive if it has any non-terminal ‘A’ whose
derivation contains ‘A’ itself as the left-most symbol. Left-recursive
grammar is considered to be a problematic situation for top-down parsers.
Top-down parsers start parsing from the Start symbol, which in itself is
non-terminal. So, when the parser encounters the same non-terminal in its
derivation, it becomes hard for it to judge when to stop parsing the left
non-terminal and it goes into an infinite loop.
Example:
(1) A => Aα | β (immediate left recursion)
(2) S => Aα | β
    A => Sd (indirect left recursion)
One technique to remove immediate left recursion rewrites (1) as:
A => βA'
A' => αA' | ε
This does not impact the strings derived from the grammar, but it removes
immediate left recursion.
Second method is to use the following algorithm, which should eliminate all
direct and indirect left recursions.
START
Arrange non-terminals in some order A1, A2, ..., An
for each i from 1 to n
   for each j from 1 to i - 1
      replace each production of the form Ai => Aj γ
      with Ai => δ1 γ | δ2 γ | ... | δk γ,
      where Aj => δ1 | δ2 | ... | δk are the current Aj productions
   end for
   eliminate immediate left recursion among the Ai productions
end for
END
Example
The production set
S => Aα | β
A => Sd
after applying the above algorithm becomes
S => Aα | β
A => Aαd | βd
and then, removing immediate left recursion using the first technique, we get:
A => βdA'
A' => αdA' | ε
Now none of the production has either direct or indirect left recursion.
Left Factoring
If more than one grammar production rule has a common prefix string,
then the top-down parser cannot decide which of the productions it should
take to parse the string in hand.
Example
If a top-down parser encounters a production like
A ⟹ αβ | α𝜸 | …
it cannot decide which alternative to pursue, as both begin with the same
prefix α. Left factoring rewrites the production as:
A ⟹ αA'
A' ⟹ β | 𝜸 | …
Now the parser has only one production per prefix, which makes it easier to
take decisions.
First Set
This set is created to know which terminal symbols can be derived in the first
position by a non-terminal. For example, if
α → tβ
where t is a terminal, then t is in FIRST(α).
Follow Set
Likewise, we calculate what terminal symbol immediately follows a non-
terminal α in production rules. We do not consider what the non-terminal
can generate but instead, we see what would be the next terminal symbol
that follows the productions of a non-terminal.
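As a small worked example (not in the original text), consider the grammar E → TX, X → +TX | ε, T → id. Then:
FIRST(T) = { id }, FIRST(X) = { +, ε }, FIRST(E) = FIRST(T) = { id }
FOLLOW(E) = { $ }, FOLLOW(X) = FOLLOW(E) = { $ }, FOLLOW(T) = (FIRST(X) - { ε }) U FOLLOW(X) = { +, $ }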
Top-down Parsing
When the parser starts constructing the parse tree from the start symbol
and then tries to transform the start symbol to the input, it is called top-
down parsing.
Recursive descent parsing : It is a common form of top-down parsing. It is called recursive
as it uses recursive procedures to process the input. Recursive descent parsing may suffer
from backtracking; a small sketch follows after these definitions.
Backtracking : It means, if one derivation of a production fails, the syntax analyzer restarts
the process using different rules of same production. This technique may process the input
string more than once to determine the right production.
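As a minimal, illustrative sketch (not from the original text), the following C program is a recursive-descent recognizer for the grammar E -> T { '+' T }, T -> 'i', where the letter 'i' stands in for an id token:

#include <stdio.h>

static const char *ip; /* look-ahead pointer into the input */

static int T(void)
{
   if (*ip == 'i') { ip++; return 1; } /* match the terminal 'i' */
   return 0;
}

static int E(void)
{
   if (!T()) return 0;
   while (*ip == '+') { /* iteration replaces left recursion */
      ip++;
      if (!T()) return 0;
   }
   return 1;
}

int main(void)
{
   ip = "i+i+i";
   printf(E() && *ip == '\0' ? "accepted\n" : "rejected\n");
   return 0;
}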
Bottom-up Parsing
As the name suggests, bottom-up parsing starts with the input symbols and
tries to construct the parse tree up to the start symbol.
Example:
Input string : a + b * c
Production rules:
S → E
E → E + T
E → E * T
E → T
T → id
Let us start bottom-up parsing
a + b * c
Read the input and check if any production matches with the input:
a + b * c
T + b * c
E + b * c
E + T * c
E * c
E * T
E
S
Back-tracking
Top- down parsers start from the root node (start symbol) and match the
input string against the production rules to replace them (if matched). To
understand this, take the following example of CFG:
S → rXd | rZd
X → oa | ea
Z → ai
For an input string 'read', a top-down parser will behave like this:
It will start with S from the production rules and will match its yield to the
left-most letter of the input, i.e. 'r'. The first production of S (S → rXd)
matches it. So the top-down parser advances to the next input letter
(i.e. 'e'). The parser tries to expand non-terminal X and checks its first
production from the left (X → oa). It does not match the next input
symbol. So the top-down parser backtracks to obtain the next production
rule of X, (X → ea).
Now the parser matches all the input letters in an ordered manner. The
string is accepted.
Predictive Parser
Predictive parser is a recursive descent parser, which has the capability to
predict which production is to be used to replace the input string. The
predictive parser does not suffer from backtracking.
To accomplish its tasks, the predictive parser uses a look-ahead pointer,
which points to the next input symbol. To make the parser back-tracking
free, the predictive parser puts some constraints on the grammar and
accepts only a class of grammar known as LL(k) grammar.
Predictive parsing uses a stack and a parsing table to parse the input and
generate a parse tree. Both the stack and the input contain an end
symbol $ to denote that the stack is empty and the input is consumed. The
parser refers to the parsing table to take any decision on the input and
stack element combination.
In recursive descent parsing, the parser may have more than one
production to choose from for a single instance of input, whereas in
predictive parsing, each step has at most one production to choose. There
might be instances where no production matches the input string, causing
the parsing procedure to fail.
LL Parser
An LL Parser accepts LL grammar. LL grammar is a subset of context-free
grammar but with some restrictions to get the simplified version, in order to
achieve easy implementation. LL grammar can be implemented by means
of both algorithms namely, recursive-descent or table-driven.
An LL parser is denoted as LL(k). The first L in LL(k) means the input is
parsed from left to right, the second L in LL(k) stands for left-most
derivation, and k itself represents the number of look-aheads. Generally
k = 1, so LL(k) may also be written as LL(1).
LL Parsing Algorithm
We may stick to deterministic LL(1) for the parser explanation, as the size of
the table grows exponentially with the value of k. Secondly, if a given
grammar is not LL(1), then it is usually not LL(k) for any given k either.
Given below is an algorithm for LL(1) Parsing:
Input:
   string ω
   parsing table M for grammar G
Output:
   If ω is in L(G), a left-most derivation of ω;
   error otherwise.
Initial state: $S on the stack (with S being the start symbol), and ω$ in
the input buffer. Set ip to point to the first symbol of ω$.
repeat
   let X be the top stack symbol and a the symbol pointed by ip.
   if X ∈ Vt or $
      if X = a
         POP X and advance ip
      else
         error()
      endif
   else /* X is non-terminal */
      if M[X, a] = X → Y1 Y2 ... Yk
         POP X
         PUSH Yk, Yk-1, ..., Y1 /* Y1 on top */
      else
         error()
      endif
   endif
until X = $ /* empty stack */
A grammar is LL(1) when, for every pair of productions A → α | β, the sets
FIRST(α) and FIRST(β) are disjoint, and if β ⇒* ε, then α does not derive any
string beginning with a terminal in FOLLOW(A).
Shift step : The shift step refers to the advancement of the input pointer to the next input
symbol (the shifted symbol); this symbol is pushed onto the stack.
Reduce step : When the parser finds a complete grammar rule (RHS) and replaces it with its
(LHS), it is known as a reduce step. This occurs when the top of the stack contains a handle.
To reduce, a POP function is performed on the stack which pops off the handle and replaces it
with the LHS non-terminal symbol.
LR Parser
The LR parser is a non-recursive, shift-reduce, bottom-up parser. It uses a
wide class of context-free grammar which makes it the most efficient
syntax analysis technique. LR parsers are also known as LR(k) parsers,
where L stands for left-to-right scanning of the input stream; R stands for
the construction of right-most derivation in reverse, and k denotes the
number of lookahead symbols to make decisions.
There are three widely used algorithms available for constructing an LR
parser:
SLR(1) : Simple LR parser. Works on the smallest class of grammars; few states, hence a
very small table; simple and fast construction.
LR(1) : Canonical LR parser. Works on the complete set of LR(1) grammars; generates a
large table and a large number of states; slow construction.
LALR(1) : Look-ahead LR parser. Works on an intermediate size of grammar; the number of
states is the same as in SLR(1).
LR Parsing Algorithm
Here we describe a skeleton algorithm of an LR parser:
token = next_token()
repeat forever
   s = top of stack
   if action[s, token] = "shift si" then
      PUSH token
      PUSH si
      token = next_token()
   else if action[s, token] = "reduce A ::= β" then
      POP 2 * |β| symbols
      s = top of stack
      PUSH A
      PUSH goto[s, A]
   else if action[s, token] = "accept" then
      return
   else
      error()
LL vs. LR
LL : Starts with the root nonterminal on the stack.
LR : Ends with the root nonterminal on the stack.

LL : Uses the stack for designating what is still to be expected.
LR : Uses the stack for designating what is already seen.

LL : Builds the parse tree top-down.
LR : Builds the parse tree bottom-up.

LL : Continuously pops a nonterminal off the stack, and pushes the corresponding right hand side.
LR : Tries to recognize a right hand side on the stack, pops it, and pushes the corresponding nonterminal.

LL : Reads the terminals when it pops one off the stack.
LR : Reads the terminals while it pushes them on the stack.

LL : Pre-order traversal of the parse tree.
LR : Post-order traversal of the parse tree.
Panic mode
When a parser encounters an error anywhere in the statement, it ignores
the rest of the statement by not processing input from the erroneous input
up to a delimiter, such as a semicolon. This is the easiest way of
error recovery, and it also prevents the parser from going into an infinite loop.
Statement mode
When a parser encounters an error, it tries to take corrective measures so
that the rest of the statement allows the parser to parse ahead; for
example, inserting a missing semicolon or replacing a comma with a
semicolon. Parser designers have to be careful here, because one wrong
correction may lead to an infinite loop.
Error productions
Some common errors are known to the compiler designers that may occur
in the code. In addition, the designers can create an augmented grammar
with extra productions that generate the erroneous constructs, so the parser
can recognize them when these errors are encountered.
Global correction
The parser considers the program in hand as a whole and tries to figure out
what the program is intended to do and tries to find out a closest match for
it, which is error-free. When an erroneous input (statement) X is fed, it
creates a parse tree for some closest error-free statement Y. This may allow
the parser to make minimal changes in the source code, but due to the
complexity (time and space) of this strategy, it has not been implemented
in practice yet.
If we look closely, we find that most of the leaf nodes are the single child
of their parent nodes. This information can be eliminated before feeding the
tree to the next phase. By hiding this extra information, we can obtain a
condensed version of the parse tree, known as the abstract syntax tree.
A plain CFG production such as E → E + T has no semantic rule associated
with it, and it cannot by itself help in making any sense of the production.
Semantics
Semantics of a language provide meaning to its constructs, like tokens and
syntax structure. Semantics help interpret symbols, their types, and their
relations with each other. Semantic analysis judges whether the syntax
structure constructed in the source program derives any meaning or not.
CFG + semantic rules = Syntax Directed Definitions
For example:
int a = "value";
should not issue an error at the lexical or syntax level, as it is lexically
and structurally correct, but it should generate a semantic error, as the
type of the assignment differs.
Semantic analysis also performs tasks such as:
Scope resolution
Type checking
Array-bound checking
Semantic Errors
We have mentioned some of the semantic errors that the semantic
analyzer is expected to recognize:
Type mismatch
Undeclared variable
Attribute Grammar
Attribute grammar is a special form of context-free grammar where some
additional information (attributes) is appended to one or more of its non-
terminals in order to provide context-sensitive information. Each attribute
has a well-defined domain of values, such as integer, float, character, string,
and expressions.
Attribute grammar is a medium to provide semantics to the context-free
grammar and it can help specify the syntax and semantics of a
programming language. Attribute grammar (when viewed as a parse-tree)
can pass values or information among the nodes of a tree.
Example:
E → E + T { E.value = E.value + T.value }
The right part of the CFG contains the semantic rules that specify how the
grammar should be interpreted. Here, the values of non-terminals E and T
are added together and the result is copied to the non-terminal E.
Semantic attributes may be assigned values from their domain at the time
of parsing and evaluated at the time of assignment or at condition checks.
Based on the way the attributes get their values, they can be broadly
divided into two categories : synthesized attributes and inherited attributes.
Synthesized attributes
These attributes get values from the attribute values of their child nodes. To
illustrate, assume the following production:
S → ABC
If S takes values from its child nodes (A, B, C), then it is said to have
synthesized attributes.
Inherited attributes
In contrast to synthesized attributes, inherited attributes can take values
from parent and/or siblings. As in the following production,
S → ABC
A can get values from S, B and C. B can take values from S, A, and C.
Likewise, C can take values from S, A, and B.
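A minimal C sketch (not from the original text) of how synthesized attributes are computed bottom-up, in the spirit of E → E + T { E.value = E.value + T.value }:

#include <stdio.h>

struct node {
   char op;                  /* '+', '*', or 0 for a leaf */
   int value;                /* the synthesized attribute */
   struct node *left, *right;
};

static int eval(struct node *n)
{
   if (n->op == 0)
      return n->value;       /* leaf: the attribute is given */
   int l = eval(n->left);
   int r = eval(n->right);   /* children first: bottom-up evaluation */
   n->value = (n->op == '+') ? l + r : l * r;
   return n->value;
}

int main(void)
{
   struct node a = {0, 2, 0, 0}, b = {0, 3, 0, 0}, c = {0, 4, 0, 0};
   struct node mul = {'*', 0, &b, &c};
   struct node add = {'+', 0, &a, &mul};
   printf("%d\n", eval(&add)); /* prints 14 for 2 + 3 * 4 */
   return 0;
}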
Expansion : When a non-terminal is expanded to terminals as per a
grammatical rule
Reduction : When a terminal is reduced to its corresponding non-terminal
according to grammar rules. Syntax trees are parsed top-down and left to
right. Whenever reduction occurs, we apply its corresponding semantic
rules (actions).
Semantic analysis uses Syntax Directed Translations to perform the above
tasks.
Semantic analyzer receives AST (Abstract Syntax Tree) from its previous
stage (syntax analysis).
Semantic analyzer attaches attribute information with AST, which are called
Attributed AST.
Attributes are a two-tuple value, <attribute name, attribute value>.
For example:
int value = 5;
<type, "integer">
<presentvalue, "5">
S-attributed SDT
If an SDT uses only synthesized attributes, it is called an S-attributed SDT.
These attributes are evaluated using S-attributed SDTs that have their
semantic actions written after the production (at the right-hand side).
Attributes in S-attributed SDTs are evaluated in bottom-up parsing, as the
values of the parent nodes depend upon the values of the child nodes.
L-attributed SDT
This form of SDT uses both synthesized and inherited attributes with
restriction of not taking values from right siblings.
In L-attributed SDTs, a non-terminal can get values from its parent and its
left siblings, in addition to its own children. As in the following production
S → ABC
S can take values from A, B, and C (synthesized). A can take values from S
only. B can take values from S and A. C can get values from S, A, and B. No
non-terminal can get values from the sibling to its right.
Attributes in L-attributed SDTs are evaluated by depth-first and left-to-right
parsing manner.
Activation Trees
A program is a sequence of instructions combined into a number of
procedures. Instructions in a procedure are executed sequentially. A
procedure has a start and an end delimiter and everything inside it is called
the body of the procedure. The procedure identifier and the sequence of
finite instructions inside it make up the body of the procedure.
The execution of a procedure is called its activation. An activation record
contains all the necessary information required to call a procedure. An
activation record may contain the following units (depending upon the
source language used).
Temporaries : Stores temporary and intermediate values of an expression.
Actual Parameters : Stores actual parameters, i.e., parameters which are used to send input
to the called procedure.
...
printf("Enter Your Name: ");
scanf("%s", username);
show_data(username);
...
int show_data(char *user)
{
   return 0;
}
...
Storage Allocation
Runtime environment manages runtime memory requirements for the
following entities:
Code : It is known as the text part of a program that does not change at runtime. Its
memory requirements are known at the compile time.
Procedures : Their text part is static but they are called in a random manner. That is why,
stack storage is used to manage procedure calls and activations.
Variables : Variables are known at the runtime only, unless they are global or constant.
Heap memory allocation scheme is used for managing allocation and de-allocation of memory
for variables in runtime.
Static Allocation
In this allocation scheme, the compile-time data is bound to a fixed location
in memory and does not change when the program executes. As the
memory requirements and storage locations are known in advance, a runtime
support package for memory allocation and de-allocation is not required.
Stack Allocation
Procedure calls and their activations are managed by means of stack
memory allocation. It works in last-in-first-out (LIFO) method and this
allocation strategy is very useful for recursive procedure calls.
Heap Allocation
Variables local to a procedure are allocated and de-allocated only at
runtime. Heap allocation is used to dynamically allocate memory to the
variables and claim it back when the variables are no more required.
Unlike the statically allocated memory area, both stack and heap memory can
grow and shrink dynamically and unpredictably. Therefore, they cannot be
provided with a fixed amount of memory in the system.
Typically, the text part of the code is allocated a fixed amount of memory,
while stack and heap memory are arranged at the extremes of the total
memory allocated to the program, shrinking and growing against each other.
Parameter Passing
The communication medium among procedures is known as parameter
passing. The values of the variables from a calling procedure are
transferred to the called procedure by some mechanism. Before moving
ahead, first go through some basic terminologies pertaining to the values in
a program.
r-value
The value of an expression is called its r-value. The value contained in a
single variable also becomes an r-value if it appears on the right-hand side
of the assignment operator. r-values can always be assigned to some other
variable.
l-value
The location of memory (address) where an expression is stored is known
as the l-value of that expression. It always appears at the left hand side of
an assignment operator.
For example:
day = 1;
week = day * 7;
month = 1;
year = month * 12;
From this example, we understand that constant values like 1, 7, and 12,
and variables like day, week, month, and year, all have r-values. Only
variables have l-values, as they also represent the memory location assigned
to them.
For example:
7 = x + y;
is an l-value error, as the constant 7 does not represent any memory location.
Formal Parameters
Variables that take the information passed by the caller procedure are
called formal parameters. These variables are declared in the definition of
the called function.
Actual Parameters
Variables whose values or addresses are being passed to the called
procedure are called actual parameters. These variables are specified in the
function call as arguments.
Example:
fun_one()
{
   int actual_parameter = 10;
   call fun_two(actual_parameter);
}
fun_two(int formal_parameter)
{
   print formal_parameter;
}
Pass by Value
In pass by value mechanism, the calling procedure passes the r-value of
actual parameters and the compiler puts that into the called procedure’s
activation record. Formal parameters then hold the values passed by the
calling procedure. If the values held by the formal parameters are changed,
it should have no impact on the actual parameters.
Pass by Reference
In pass by reference mechanism, the l-value of the actual parameter is
copied to the activation record of the called procedure. This way, the called
procedure now has the address (memory location) of the actual parameter
and the formal parameter refers to the same memory location. Therefore, if
the value pointed by the formal parameter is changed, the impact should be
seen on the actual parameter as they should also point to the same value.
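A short illustrative C sketch (not from the original text): C itself passes arguments by value, and pass-by-reference is simulated by passing the variable's address, i.e., its l-value:

#include <stdio.h>

static void by_value(int x)      { x = 99; }  /* changes a local copy only */
static void by_reference(int *x) { *x = 99; } /* changes the caller's variable */

int main(void)
{
   int a = 10;
   by_value(a);
   printf("%d\n", a); /* still 10 */
   by_reference(&a);
   printf("%d\n", a); /* now 99 */
   return 0;
}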
Pass by Copy-restore
This parameter passing mechanism works similar to ‘pass-by-reference’
except that the changes to actual parameters are made when the called
procedure ends. Upon function call, the values of actual parameters are
copied in the activation record of the called procedure. Formal parameters if
manipulated have no real-time effect on actual parameters (as l-values are
passed), but when the called procedure ends, the l-values of formal
parameters are copied to the l-values of actual parameters.
Example:
int y;
calling_procedure()
{
   y = 10;
   copy_restore(y); // l-value of y is passed
   printf y;        // prints 99
}
copy_restore(int x)
{
   x = 99; // does not affect y until the procedure ends
   y = 0;  // y is now 0
}
When this function ends, the l-value of formal parameter x is copied to the
actual parameter y. Even if the value of y is changed before the procedure
ends, the l-value of x is copied to the l-value of y making it behave like call
by reference.
Pass by Name
Languages like Algol provide a different kind of parameter-passing mechanism
that works like the preprocessor in the C language. Pass-by-name textually
substitutes the argument expressions in a procedure call for the
corresponding parameters in the body of the procedure, so that the
procedure now works on actual parameters, much like pass-by-reference.
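As a loose analogy (not from the original text), a C macro reproduces the textual-substitution flavor of pass-by-name: the argument expression is re-evaluated at every use of the parameter.

#include <stdio.h>

#define DOUBLE(expr) ((expr) + (expr))

static int counter = 0;
static int next_value(void) { return ++counter; }

int main(void)
{
   /* DOUBLE(next_value()) expands to next_value() + next_value(),
      so the call happens twice: prints 3 (1 + 2), not 2. */
   printf("%d\n", DOUBLE(next_value()));
   return 0;
}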
A symbol table is used by the compiler for purposes such as:
To implement type checking, by verifying that assignments and expressions in the source code
are semantically correct.
To determine the scope of a name (scope resolution).
A symbol table is simply a table, which can be either linear or a hash table.
It maintains an entry for each name in the following format:
<symbol name, type, attribute>
For example, if a symbol table has to store information about the following
variable declaration:
static int interest;
then it stores an entry such as:
<interest, int, static>
Implementation
If a compiler is to handle a small amount of data, the symbol table can
be implemented as an unordered list, which is easy to code but suitable
only for small tables. A symbol table can be implemented in one of
the following ways:
Linear (sorted or unsorted) list
Binary search tree
Hash table
Operations
A symbol table, either linear or hash, should provide the following
operations.
insert()
This operation is used more frequently in the analysis phase, i.e., the first
half of the compiler, where tokens are identified and names are stored in the
table. This operation is used to add information in the symbol table about
unique names occurring in the source code. The format or structure in
which the names are stored depends upon the compiler in hand.
An attribute for a symbol in the source code is the information associated
with that symbol. This information contains the value, state, scope, and
type about the symbol. The insert() function takes the symbol and its
attributes as arguments and stores the information in the symbol table.
For example:
int a;
should be processed by the compiler as:
insert(a, int);
lookup()
The lookup() operation is used to search a name in the symbol table to
determine:
whether the symbol exists in the table;
whether it is declared before being used;
whether the name is used in the current scope;
whether the symbol is initialized;
whether the symbol is declared multiple times.
This method returns 0 (zero) if the symbol does not exist in the symbol
table. If the symbol exists in the symbol table, it returns its attributes
stored in the table.
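A minimal, illustrative hash-based symbol table in C (an assumption-laden sketch, not the text's design; real tables store richer attributes and handle scopes and memory management more carefully):

#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 211

struct symbol {
   char name[32];
   char type[16];           /* one attribute: the identifier's type */
   struct symbol *next;     /* chain for hash collisions */
};

static struct symbol *table[TABLE_SIZE];
static struct symbol pool[256];  /* fixed pool instead of malloc */
static int used;

static unsigned hash(const char *s)
{
   unsigned h = 0;
   while (*s)
      h = h * 31 + (unsigned char)*s++;
   return h % TABLE_SIZE;
}

static void insert(const char *name, const char *type)
{
   struct symbol *sym = &pool[used++];
   strcpy(sym->name, name);
   strcpy(sym->type, type);
   unsigned h = hash(name);
   sym->next = table[h];    /* prepend to the bucket's chain */
   table[h] = sym;
}

static struct symbol *lookup(const char *name)
{
   for (struct symbol *s = table[hash(name)]; s; s = s->next)
      if (strcmp(s->name, name) == 0)
         return s;          /* found: return the stored attributes */
   return 0;                /* 0 (null): the symbol does not exist */
}

int main(void)
{
   insert("a", "int");      /* e.g., while processing: int a; */
   struct symbol *s = lookup("a");
   if (s)
      printf("%s : %s\n", s->name, s->type);
   return 0;
}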
Scope Management
A compiler maintains two types of symbol tables: a global symbol
table, which can be accessed by all the procedures, and scope symbol
tables, which are created for each scope in the program.
To determine the scope of a name, symbol tables are arranged in
hierarchical structure as shown in the example below:
...
int value = 10;

void pro_one()
{
   int one_1;
   int one_2;
   {              /* inner scope 1 */
      int one_3;
      int one_4;
   }
   int one_5;
   {              /* inner scope 2 */
      int one_6;
      int one_7;
   }
}

void pro_two()
{
   int two_1;
   int two_2;
   {              /* inner scope 3 */
      int two_3;
      int two_4;
   }
   int two_5;
}
...
A name is first searched in the symbol table of the current scope:
if the name is found, then the search is completed; else it will be searched in the parent
symbol table, until
either the name is found or the global symbol table has been searched for the name.
Intermediate code offers benefits such as the following:
Intermediate code eliminates the need for a new full compiler for every unique machine, by
keeping the analysis portion the same for all compilers.
The second part of compiler, synthesis, is changed according to the target machine.
It becomes easier to apply the source code modifications to improve code performance by
applying code optimization techniques on the intermediate code.
Intermediate Representation
Intermediate codes can be represented in a variety of ways, each with its
own benefits.
High Level IR - High-level intermediate code representation is very close to the source
language itself. They can be easily generated from the source code and we can easily apply
code modifications to enhance performance. But for target machine optimization, it is less
preferred.
Low Level IR - This one is close to the target machine, which makes it suitable for register
and memory allocation, instruction set selection, etc. It is good for machine-dependent
optimizations.
Intermediate code can be either language specific (e.g., Byte Code for Java)
or language independent (three-address code).
Three-Address Code
Intermediate code generator receives input from its predecessor phase,
semantic analyzer, in the form of an annotated syntax tree. That syntax
tree then can be converted into a linear representation, e.g., postfix
notation. Intermediate code tends to be machine independent code.
Therefore, the code generator assumes an unlimited number of memory
locations (registers) to be available when generating code.
For example:
a = b + c * d;
The intermediate code generator will try to divide this expression into sub-
expressions and then generate the corresponding code.
r1 = c * d;
r2 = b + r1;
a = r2
Quadruples
Each instruction in quadruples presentation is divided into four fields:
operator, arg1, arg2, and result. The above example is represented below
in quadruples format:
Op   arg1   arg2   result
*    c      d      r1
+    b      r1     r2
=    r2            a
Triples
Each instruction in triples presentation has three fields: op, arg1, and
arg2. The result of a sub-expression is denoted by the position of the
expression that computes it. Triples resemble DAGs and syntax trees; they
are equivalent to a DAG while representing expressions.
      Op   arg1   arg2
(0)   *    c      d
(1)   +    b      (0)
(2)   =    a      (1)
Indirect Triples
This representation is an enhancement over triples representation. It uses
pointers instead of position to store results. This enables the optimizers to
freely re-position the sub-expression to produce an optimized code.
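For instance (an illustrative sketch, not from the original text), the triples above can be kept unchanged while a separate list of pointers fixes the execution order:
List : (0) (1) (2)
Reordering the pointer list re-orders execution without rewriting the triples themselves.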
Declarations
A variable or procedure has to be declared before it can be used.
Declaration involves allocation of space in memory and entry of type and
name in the symbol table. A program may be coded and designed keeping
the target machine structure in mind, but it may not always be possible to
accurately convert a source code to its target language.
Taking the whole program as a collection of procedures and sub-
procedures, it becomes possible to declare all the names local to the
procedure. Memory allocation is done in a consecutive manner and names
are allocated to memory in the sequence they are declared in the program.
We use an offset variable, initialized to zero {offset = 0}, to denote the
base address.
The source programming language and the target machine architecture
may vary in the way names are stored, so relative addressing is used. While
the first name is allocated memory starting from the memory location 0
{offset=0}, the next name declared later, should be allocated memory next
to the first one.
Example:
We take the example of C programming language where an integer variable
is assigned 2 bytes of memory and a float variable is assigned 4 bytes of
memory.
int a;
float b;
Allocation process:
{offset = 0}
int a;
id.type = int
id.width = 2
{offset = 2}
float b;
id.type = float
id.width = 4
{offset = 6}
To enter this detail in a symbol table, a procedure enter can be used. This
method may have the following structure:
enter(name, type, offset)
In a directed acyclic graph (DAG) built for a block of code, leaf nodes represent identifiers,
names, or constants, while interior nodes represent operators. Interior nodes also represent
the results of expressions or the identifiers/names where the values are to be stored or
assigned.
Example:
t0 = a + b
t1 = t0 + c
d = t0 + t1
Peephole Optimization
This optimization technique works locally on the source code to transform it
into an optimized code. By locally, we mean a small portion of the code
block at hand. These methods can be applied on intermediate codes as well
as on target codes. A bunch of statements is analyzed and checked for
the following possible optimizations:
Redundant instruction elimination
At source code level, the following can be done by the user. The code

{
   z = x + y;
   return z;
}

can be rewritten, avoiding the redundant temporary z, as:

{
   y = x + y;
   return y;
}
At compilation level, the compiler searches for instructions that are
redundant in nature. For example:
MOV x, R0
MOV R0, R1
We can delete the first instruction and rewrite the pair as:
MOV x, R1
Unreachable code
Unreachable code is a part of the program code that is never accessed
because of programming constructs. Programmers may have accidentally
written a piece of code that can never be reached.
Example:
int add_ten(int x)
{
   return x + 10;
   printf("x = %d", x);
}
In this code segment, the printf statement will never be executed, as the
program control returns before it can execute; hence printf can be removed.
Flow of control optimization
There are instances in a code where the program control jumps back and
forth without performing any significant task. These jumps can be removed.
Consider the following chunk of code:
...
MOV R1, R2
GOTO L1
...
L1 : GOTO L2
L2 : INC R1
In this code, label L1 only passes control to L2, so the first jump can target
L2 directly and L1 can be removed:

...
MOV R1, R2
GOTO L2
...
L2 : INC R1
Strength reduction
There are operations that consume more time and space. Their ‘strength’
can be reduced by replacing them with other operations that consume less
time and space, but produce the same result.
For example, x * 2 can be replaced by x << 1, which involves only one left
shift. Though a * a and a² produce the same output, a * a is much cheaper
to implement than a general exponentiation routine.
IR Type : Intermediate representation has various forms: an Abstract Syntax Tree (AST)
structure, Reverse Polish Notation, or 3-address code.
Ordering of instructions : Finally, the code generator decides the order in which the
instructions will be executed, and creates schedules for the instructions.
Descriptors
The code generator has to track both the registers (for availability) and
addresses (location of values) while generating the code. For both of them,
the following two descriptors are used:
Register descriptor : Register descriptor is used to inform the code generator about the
availability of registers. Register descriptor keeps track of values stored in each register.
Whenever a new register is required during code generation, this descriptor is consulted for
register availability.
Address descriptor : Values of the names (identifiers) used in the program might be stored
at different locations while in execution. Address descriptors are used to keep track of
memory locations where the values of identifiers are stored. These locations may include CPU
registers, heaps, stacks, memory or a combination of the mentioned locations.
The code generator keeps both descriptors updated in real time. For a load
statement, LD R1, x, the code generator:
updates the register descriptor of R1 to show that it holds a copy of x, and
updates the address descriptor of x to record that one of its locations is R1.
When a register is needed, the code generator first looks for an empty register; else it looks
for one whose value is also stored safely elsewhere; else, if both of the above options are not
possible, it chooses a register that requires the minimal number of load and store instructions.
For a statement such as x = y OP z, the code generator then:
Determines the current location of y and, if y is not already in the chosen location L,
generates (with y' denoting the current location of y):
MOV y', L
Determines the present location of z using the same method used for y and generates the
following instruction:
OP z', L
If y and z have no further use, their registers can be given back to the system.
Other code constructs, like loops and conditional statements, are transformed
into assembly language in the usual way for the target assembler.
Code Optimization
Optimization is a program transformation technique, which tries to improve
the code by making it consume less resources (i.e. CPU, Memory) and
deliver high speed.
In optimization, high-level general programming constructs are replaced by
very efficient low-level programming codes. A code optimizing process must
follow the three rules given below:
The output code must not, in any way, change the meaning of the program.
Optimization should increase the speed of the program and, if possible, the program should
demand fewer resources.
Optimization should itself be fast and should not delay the overall compiling process.
Efforts for an optimized code can be made at various levels of compiling the
process.
At the beginning, users can change/rearrange the code or use better algorithms to write the
code.
After generating intermediate code, the compiler can modify the intermediate code by address
calculations and improving loops.
While producing the target machine code, the compiler can make use of memory hierarchy
and CPU registers.
Machine-independent Optimization
In this optimization, the compiler takes in the intermediate code and
transforms a part of the code that does not involve any CPU registers
and/or absolute memory locations. For example:
do
{
   item = 10;
   value = value + item;
} while (value < 100);

Here item = 10 is re-assigned on every iteration although it never changes.
Moving the loop-invariant assignment out of the loop gives:

item = 10;
do
{
   value = value + item;
} while (value < 100);

This transformation
should not only save the CPU cycles, but can be used on any processor.
Machine-dependent Optimization
Machine-dependent optimization is done after the target code has been
generated and when the code is transformed according to the target
machine architecture. It involves CPU registers and may have absolute
memory references rather than relative references. Machine-dependent
optimizers put efforts to take maximum advantage of memory hierarchy.
Basic Blocks
Source codes generally have a number of instructions, which are always
executed in sequence and are considered as the basic blocks of the code.
These basic blocks do not have any jump statements among them, i.e.,
when the first instruction is executed, all the instructions in the same basic
block will be executed in their sequence of appearance without losing the
flow control of the program.
A program can have various constructs as basic blocks, like IF-THEN-ELSE,
SWITCH-CASE conditional statements and loops such as DO-WHILE, FOR,
and REPEAT-UNTIL, etc.
To find basic blocks, we first identify the header statements: the first statement of the
program, any statement that is the target of a branch, and any statement that immediately
follows a branch. Header statements and the statements following them form a basic block.
A basic block does not include any header statement of any other basic block. A small
example follows below.
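As an illustration (not from the original text), consider the three-address code:

1. t1 = a + b
2. if t1 > 10 goto 5
3. t2 = t1 * 2
4. goto 6
5. t2 = t1 - 2
6. c = t2

The headers are statements 1 (first statement), 3 (follows a branch), 5 (branch target), and 6 (target of the goto at 4), giving the basic blocks {1,2}, {3,4}, {5}, and {6}.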
Basic blocks are important concepts from both code generation and
optimization point of view.
Basic blocks play an important role in identifying variables, which are being
used more than once in a single basic block. If any variable is being used
more than once, the register memory allocated to that variable need not be
emptied unless the block finishes execution.
Induction analysis : A variable is called an induction variable if its value is altered within the
loop by a loop-invariant value.
Strength reduction : There are expressions that consume more CPU cycles, time, and
memory. These expressions should be replaced with cheaper expressions without
compromising their output. For example, multiplication (x * 2) is more expensive in terms of
CPU cycles than (x << 1), which yields the same result; a sketch follows below.
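A minimal C sketch (not from the original text) of induction-variable strength reduction, replacing a per-iteration multiplication by a running addition:

#include <stdio.h>

int main(void)
{
   int n = 5, a[5], i;
   /* before: each iteration would compute the offset as i * 4 */
   int t = 0;               /* induction variable, advanced by 4 per trip */
   for (i = 0; i < n; i++) {
      a[i] = t;             /* uses the running value instead of i * 4 */
      t = t + 4;            /* cheap addition replaces the multiplication */
   }
   for (i = 0; i < n; i++)
      printf("%d ", a[i]);  /* prints 0 4 8 12 16 */
   return 0;
}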
Dead-code Elimination
Dead code comprises one or more code statements that are:
either never executed or unreachable, or
whose output, if executed, is never used.
Consider a control-flow graph in which variable 'a' is assigned the output of
expression 'x * y', and suppose the value assigned to 'a' is never used
inside the loop. Immediately after the control leaves the loop, 'a' is
assigned the value of variable 'z', which is used later in the program. We
conclude that the assignment 'a = x * y' is never used anywhere; therefore,
it is eligible to be eliminated.
Likewise, if a conditional statement is always false, the code written for its
true case will never be executed and can hence be removed.
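A tiny illustrative example (not from the original text) of a dead store that a compiler can eliminate:

int f(int x, int y)
{
   int a = x * y; /* dead store: 'a' is overwritten before any use */
   a = 0;
   return a;
}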
Partial Redundancy
Redundant expressions are computed more than once in parallel paths
without any change in operands, whereas partially redundant expressions are
computed more than once in a path, without any change in operands. For
example:

if (condition)
{
   a = y OP z;
}
else
{
   ...
}
c = y OP z;
We assume that the values of operands (y and z) are not changed from
assignment of variable a to variable c. Here, if the condition statement is
true, then y OP z is computed twice, otherwise once. Code motion can be
used to eliminate this redundancy, as shown below:
if (condition)
{
   ...
   tmp = y OP z;
   a = tmp;
   ...
}
else
{
   ...
   tmp = y OP z;
}
c = tmp;