Lexical Analysis
Error reporting
Model using regular expressions
Recognize using Finite State Automata
Sentences consist of strings of tokens (a syntactic category), for example: number, identifier, keyword, string
A sequence of characters in a token is a lexeme, for example: 100.01, counter, const, "How are you?"
The rule of description is a pattern, for example: letter (letter | digit)*
Discard whatever does not contribute to parsing, like white space (blanks, tabs, newlines) and comments
Construct constants: convert numbers to token num and pass the number as its attribute, for example: integer 31 becomes <num, 31>
Recognize keywords and identifiers, for example: counter = counter + increment becomes id = id + id /* check if id is a keyword */
Lexical Analysis
[Diagram: the lexical analyzer supplies tokens to the syntax analyzer]
Push back is required due to lookahead; for example, to distinguish >= from >. It is implemented through a buffer.
Keep input in a buffer. Move pointers over the input.
Approaches to implementation
Use assembly language: most efficient, but most difficult to implement
Use high level languages like C: efficient, but difficult to implement
Use tools like lex, flex: easy to implement, but not as efficient as the first two cases
#include <stdio.h>
#include <ctype.h>

#define NONE -1   /* assumed token codes; defined elsewhere in the original */
#define num 256

int lineno = 1;
int tokenval = NONE;

int lex() {
  int t;
  while (1) {
    t = getchar();
    if (t == ' ' || t == '\t')
      ;                                   /* skip blanks and tabs */
    else if (t == '\n')
      lineno = lineno + 1;
    else if (isdigit(t)) {
      tokenval = t - '0';
      t = getchar();
      while (isdigit(t)) {
        tokenval = tokenval * 10 + t - '0';
        t = getchar();
      }
      ungetc(t, stdin);                   /* push back the lookahead */
      return num;
    }
    else {
      tokenval = NONE;
      return t;
    }
  }
}
Problems
Scans text character by character
A look ahead character determines what kind of token to read and when the current token ends
The first character alone cannot determine what kind of token we are going to read
Symbol Table
Stores information for subsequent phases
Interface to the symbol table:
insert(s, t): save lexeme s and token t, and return a pointer to the entry
lookup(s): return the index of the entry for lexeme s, or 0 if s is not found
One option is a fixed amount of space in each entry to store the lexeme; this is not advisable, as it wastes space. Better: store lexemes in a separate array, each terminated by eos, with the symbol table holding pointers into that array.
[Symbol table layout: each entry holds a pointer to its lexeme (usually 4 bytes) plus other attributes, instead of a fixed-size lexeme field (usually 32 bytes); the lexemes live in a separate array: lexeme1 eos lexeme2 eos lexeme3 ...]
Is it as simple as it sounds?
Lexemes in a fixed position: fixed format vs. free format languages. Handling of blanks.
In Pascal, blanks separate identifiers. In Fortran, blanks are important only in literal strings; for example, the variable counter is the same as count er. Another example:
DO 10 I = 1.25
DO 10 I = 1,25
which, since blanks are insignificant, are read as
DO10I=1.25
DO10I=1,25
The first line is a variable assignment, DO10I = 1.25; the second line is the beginning of a DO loop. Reading from left to right, one cannot distinguish between the two until the . or , is reached.
Fortran's white space and fixed format rules came into force due to punch cards and errors in punching.
PL/1 Problems
Keywords are not reserved in PL/1:
if then then then = else; else else = then
if if then then = then + 1
PL/1 declarations:
Declare(arg1, arg2, arg3, ..., argn)
One cannot tell whether Declare is a keyword or an array reference until after the ). This requires arbitrary lookahead and very large buffers. Worse, the buffers may have to be reloaded.
C++ stream syntax: cin >> var;
Nested templates: Foo<Bar<Bazz>>
Can these problems be resolved by lexical analyzers alone?
Regular languages have been discussed in great detail in the Theory of Computation course
Operations on languages
L ∪ M = {s | s is in L or s is in M}
LM = {st | s is in L and t is in M}
L* = union of Li for all i ≥ 0, where L0 = {ε} and Li = Li-1 L
Example
Let L = {a, b, ..., z} and D = {0, 1, ..., 9}. Then:
L ∪ D is the set of letters and digits
LD is the set of strings consisting of a letter followed by a digit
L* is the set of all strings of letters, including ε
L(L ∪ D)* is the set of all strings of letters and digits beginning with a letter
D+ is the set of strings of one or more digits
Notation
Let Σ be a set of characters. A language over Σ is a set of strings of characters belonging to Σ.
A regular expression r denotes a language L(r).
Rules that define the regular expressions over Σ:
ε is a regular expression that denotes {ε}, the set containing the empty string
If a is a symbol in Σ, then a is a regular expression that denotes {a}
If r and s are regular expressions denoting the languages L(r) and L(s), then:
(r)|(s) is a regular expression denoting L(r) ∪ L(s)
(r)(s) is a regular expression denoting L(r)L(s)
(r)* is a regular expression denoting (L(r))*
(r) is a regular expression denoting L(r)
Let Σ = {a, b}.
The regular expression a|b denotes the set {a, b}
The regular expression (a|b)(a|b) denotes {aa, ab, ba, bb}
The regular expression a* denotes the set of all strings {ε, a, aa, aaa, ...}
The regular expression (a|b)* denotes the set containing ε and all strings of as and bs
The regular expression a|a*b denotes the set containing the string a and all strings consisting of zero or more as followed by a b
Precedence and associativity:
*, concatenation, and | are left associative
* has the highest precedence
Concatenation has the second highest precedence
| has the lowest precedence
Thus a|bc* is interpreted as a|(b(c*)).
Examples
My fax number: 91-(512)-259-7586
Σ = digits ∪ {-, (, )}
country → digit+
area → ( digit+ )
exchange → digit+
phone → digit+
number → country - area - exchange - phone
Examples
My email address: ska@iitk.ac.in
Σ = letter ∪ {@, .}
letter → a | b | ... | z | A | B | ... | Z
name → letter+
address → name @ name . name . name
Examples
Identifier:
letter → a | b | ... | z | A | B | ... | Z
digit → 0 | 1 | ... | 9
identifier → letter (letter | digit)*
Unsigned number in Pascal:
digit → 0 | 1 | ... | 9
digits → digit+
fraction → . digits | ε
exponent → (E (+ | - | ε) digits) | ε
number → digits fraction exponent
If x1...xi ∈ L(R), then x1...xi ∈ L(Rj) for some j.
Write a regular expression for the lexemes of each token:
number → digit+
identifier → letter (letter | digit)*
If both x1...xi ∈ L(R) and x1...xj ∈ L(R), pick up the longest possible string in L(R): the principle of maximal munch.
Regular expressions provide a concise and useful notation for string patterns. Good algorithms based on them require only a single pass over the input.
Regular expressions alone are not enough: normally the longest match wins, and ties are resolved by prioritizing tokens. Lexical definitions therefore consist of regular definitions, priority rules, and the maximal munch principle.
Finite Automata
Regular expressions are declarative specifications; finite automata are the implementation. A finite automaton consists of:
An input alphabet Σ
A set of states S
A set of transitions statei → statej on an input symbol
A set of final states F
A start state n
Pictorial notation
[Notation: a circle denotes a state, a double circle a final state; an arrow labeled a from state i to state j denotes a transition on input a]
[Transition diagram for relational operators, with accepting states labeled: token is relop, lexeme is <>; token is relop, lexeme is <=; token is relop, lexeme is =; token is relop, lexeme is >=. On <, a following = gives <=, a following > gives <>, and any other character retracts and gives <; = alone gives =; on >, a following = gives >=, and any other character retracts and gives >]
[Transition diagram residue: diagrams on digit and delim, accepting on other with retraction]
[Transition diagrams for unsigned numbers in Pascal, tried in order:
digit+ . digit+ E (+ | - | ε) digit+, accepting on others
digit+ . digit+, accepting on others
digit+, accepting on others (states marked * retract the input pointer)]
The lexeme for a given token must be the longest possible. Assume the input to be 12.34E56. Starting in the third diagram, the accept state would be reached after 12. Therefore, the matching should always start with the first transition diagram. If failure occurs in one transition diagram, retract the forward pointer to the start state and activate the next diagram. If failure occurs in all diagrams, then a lexical error has occurred.
[A single combined transition diagram for unsigned numbers, with transitions on digit, ., E, and + -, and several accepting states reached on others]
A more complex transition diagram is difficult to implement and may give rise to errors during coding
[Diagram: a LEX specification is translated by LEX into a C program, which the C compiler turns into object code; this object code is the lexical analyzer, which reads the input program and produces tokens]