
System Programming

Assignments- No. 2 & 3


Submitted by :-
Name - Asif Jamil
Branch - B.Tech (CSE), 4th Sem
Roll no. - 1253060

Submitted to :-
Mrs. Sukmeet Kaur
(System Programming Lecturer)
Assignment 2
Q. Case study on LEX & YACC.
Ans.
1. LEX
(A Lexical Analyzer Generator)
Lex is a program generator designed for lexical processing of character input
streams. It accepts a high-level, problem oriented specification for character
string matching, and produces a program in a general purpose language which
recognizes regular expressions. The regular expressions are specified by the user in
the source specifications given to Lex. The Lex written code recognizes these
expressions in an input stream and partitions the input stream into strings matching
the expressions. At the boundaries between strings program sections provided by
the user are executed. The Lex source file associates the regular expressions and
the program fragments. As each expression appears in the input to the program
written by Lex, the corresponding fragment is executed.
The user supplies the additional code beyond expression matching needed to
complete his tasks, possibly including code written by other generators. The
program that recognizes the expressions is generated in the general purpose
programming language employed for the user's program fragments. Thus, a high
level expression language is provided to write the string expressions to be matched
while the user's freedom to write actions is unimpaired. This avoids forcing the
user who wishes to use a string manipulation language for input analysis to write
processing programs in the same and often inappropriate string handling language.
The general format of Lex source is:
{definitions}
%%
{rules}
%%
{user subroutines}
where the definitions and the user subroutines are often omitted. The
second %% is optional, but the first is required to mark the beginning of
the rules. The absolute minimum Lex program is thus
%%
(no definitions, no rules), which translates into a program which copies
the input to the output unchanged.
In the outline of Lex programs shown above, the rules represent the
user's control decisions; they are a table, in which the left column
contains regular expressions and the right column contains actions,
program fragments to be executed when the expressions are recognized.
Thus an individual rule might appear
integer printf("found keyword INT");
to look for the string integer in the input stream and print the message
"found keyword INT" whenever it appears. In this example the host
procedural language is C and the C library function printf is used to print
the string. The end of the expression is indicated by the first blank or tab
character. If the action is merely a single C expression, it can just be
given on the right side of the line; if it is compound, or takes more than a
line, it should be enclosed in braces. As a slightly more useful example,
suppose it is desired to change a number of words from British to
American spelling. Lex rules such as
colour printf("color");
mechanise printf("mechanize");
petrol printf("gas");
would be a start. These rules are not quite enough, since the word
petroleum would become gaseum.
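To make the three-section format concrete, a minimal complete Lex source
file for the spelling example might look like the sketch below; the main
and yywrap wrappers are conventional additions, not part of the case study
text. By Lex's default rule, input that matches no pattern is copied to
the output unchanged, which is why petroleum still comes out as gaseum.

%{
/* definitions section: C declarations copied into the generated lexer */
#include <stdio.h>
%}
%%
colour      printf("color");       /* rules: pattern, then C action */
mechanise   printf("mechanize");
petrol      printf("gas");
%%
/* user subroutines section */
int yywrap(void) { return 1; }     /* report end of input */
int main(void)
{
    yylex();                       /* run the generated analyzer on stdin */
    return 0;
}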
2. YACC
(Yet Another Compiler-Compiler)
Yacc provides a general tool for imposing structure on the input to a
computer program. The Yacc user prepares a specification of the input
process; this includes rules describing the input structure, code to be
invoked when these rules are recognized, and a low-level routine to do
the basic input. Yacc then generates a function to control the input
process. This function, called a parser, calls the user-supplied low-level
input routine (the lexical analyzer) to pick up the basic items (called
tokens) from the input stream. These tokens are organized according to
the input structure rules, called grammar rules; when one of these rules
has been recognized, then user code supplied for this rule, an action, is
invoked; actions have the ability to return values and make use of the
values of other actions.
Yacc is written in a portable dialect of C, and the actions and output
subroutine are in C as well. Moreover, many of the syntactic
conventions of Yacc follow C.
The heart of the input specification is a collection of grammar rules.
Each rule describes an allowable structure and gives it a name. For
example, one grammar rule might be
date : month_name day ',' year ;
Here, date, month_name, day, and year represent structures of interest in
the input process; presumably, month_name, day, and year are defined
elsewhere. The comma "," is enclosed in single quotes; this implies that
the comma is to appear literally in the input. The colon and semicolon
merely serve as punctuation in the rule, and have no significance in
controlling the input. Thus, with proper definitions, the input
July 4, 1776
might be matched by the above rule.
An important part of the input process is carried out by the lexical
analyzer. This user routine reads the input stream, recognizing the lower
level structures, and communicates these tokens to the parser. For
historical reasons, a structure recognized by the lexical analyzer is called
a terminal symbol, while the structure recognized by the parser is called
a nonterminal symbol. To avoid confusion, terminal symbols will
usually be referred to as tokens.
There is considerable leeway in deciding whether to recognize structures
using the lexical analyzer or grammar rules. For example, the rules
month_name : 'J' 'a' 'n' ;
month_name : 'F' 'e' 'b' ;
. . .
month_name : 'D' 'e' 'c' ;
might be used in the above example. The lexical analyzer would only
need to recognize individual letters, and month_name would be a
nonterminal symbol. Such low-level rules tend to waste time and space,
and may complicate the specification beyond Yacc's ability to deal with
it. Usually, the lexical analyzer would recognize the month names, and
return an indication that a month_name was seen; in this case,
month_name would be a token.
Literal characters such as "," must also be passed through the lexical
analyzer, and are also considered tokens.
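As a rough sketch of this division of labor (the token code MONTH_NAME
and the scanning details are illustrative assumptions, not part of the
case study), a hand-written lexical analyzer that reports a month name
as a single token might look like:

#include <stdio.h>
#include <string.h>

#define MONTH_NAME 258   /* illustrative token code; Yacc assigns its own */

static const char *months[] = { "Jan", "Feb", "Mar", "Apr", "May", "Jun",
                                "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };

/* Called by the parser; returns the next token from the input. */
int yylex(void)
{
    char word[16];
    if (scanf("%15s", word) != 1)
        return 0;                    /* end of input */
    for (int i = 0; i < 12; i++)
        if (strncmp(word, months[i], 3) == 0)
            return MONTH_NAME;       /* whole month name seen as one token */
    return word[0];                  /* literal characters pass through */
}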
Specification files are very flexible. It is relatively easy to add to the
above example the rule
date : month '/' day '/' year ;
allowing
7 / 4 / 1776
as a synonym for
July 4, 1776
In most cases, this new rule could be "slipped in" to a working system
with minimal effort, and little danger of disrupting existing input.
The input being read may not conform to the specifications. These input
errors are detected as early as is theoretically possible with a left-to-right
scan; thus, not only is the chance of reading and computing with bad
input data substantially reduced, but the bad data can usually be quickly
found. Error handling, provided as part of the input specifications,
permits the reentry of bad data, or the continuation of the input process
after skipping over the bad data.
A full specification file looks like
declarations
%%
rules
%%
programs
The declaration section may be empty. Moreover, if the programs
section is omitted, the second %% mark may be omitted also;
thus, the smallest legal Yacc specification is
%%
rules
Blanks, tabs, and newlines are ignored except that they may not appear
in names or multi-character reserved symbols. Comments may appear
wherever a name is legal; they are enclosed in /* . . . */, as in C and PL/I.
The rules section is made up of one or more grammar rules. A grammar
rule has the form:
A : BODY ;
A represents a nonterminal name, and BODY represents a sequence of
zero or more names and literals. The colon and the semicolon are Yacc
punctuation.
Names may be of arbitrary length, and may be made up of letters, dot
".", underscore "_", and non-initial digits. Upper and lower case letters
are distinct. The names used in the body of a grammar rule may
represent tokens or nonterminal symbols.
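Pulling these pieces together, a minimal Yacc specification for the date
example might be sketched as follows; the %token declarations, the DIGITS
token, and the action are illustrative assumptions, not part of the case
study text:

%{
#include <stdio.h>
%}
%token MONTH_NAME DIGITS   /* declarations: tokens supplied by the lexical analyzer */
%%
date : MONTH_NAME day ',' year   { printf("date recognized\n"); } ;
day  : DIGITS ;
year : DIGITS ;
%%
/* programs section: yylex(), main(), and yyerror() would be supplied here */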
- >>>>> -
Assignment 3

Q. What is Debugging? What is a debugger? List
various Debugging techniques.
Ans. Debugging is a methodical process of finding and reducing
the number of bugs, or defects, in a computer program or a
piece of electronic hardware, thus making it behave as expected.
Debugging tends to be harder when various subsystems are
tightly coupled, as changes in one may cause bugs to emerge in
another.
A debugger or debugging tool is a computer program that is
used to test and debug other programs (the "target" program).
Techniques:-
Print debugging (or tracing) is the act of watching (live or
recorded) trace statements, or print statements, that indicate the
flow of execution of a process. This is sometimes called printf
debugging, due to the use of the printf function in C. This kind of
debugging was turned on by the command TRON in the original
versions of the novice-oriented BASIC programming language.
TRON stood for "Trace On." TRON caused the line numbers of
each BASIC command line to print as the program ran.
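A minimal C sketch of print debugging (the function and variable names
are made up for illustration): a trace statement on the error stream
records each call, so the last line printed before a crash localizes the
fault.

#include <stdio.h>

int divide(int a, int b)
{
    /* trace statement: shows the flow of execution and the arguments */
    fprintf(stderr, "divide: a=%d b=%d\n", a, b);
    return a / b;                  /* faults when b == 0 */
}

int main(void)
{
    printf("%d\n", divide(10, 2));
    printf("%d\n", divide(10, 0)); /* the last trace line points at the bug */
    return 0;
}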
Remote debugging is the process of debugging a program running
on a system different from the debugger. To start remote
debugging, a debugger connects to a remote system over a
network. The debugger can then control the execution of the
program on the remote system and retrieve information about its
state.
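With the GNU toolchain, for example, this is commonly done with
gdbserver; the program name, host name, and port below are placeholders:

# on the remote (target) system: run the program under gdbserver
gdbserver :1234 ./myprog

# on the developer's system: attach gdb over the network
gdb ./myprog
(gdb) target remote remotehost:1234
(gdb) break main
(gdb) continue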
Post-mortem debugging is debugging of the program after it has
already crashed. Related techniques often include various tracing
techniques and/or analysis of the memory dump (or
core dump) of the crashed process. The dump of the process
could be obtained automatically by the system (for example, when
a process has terminated due to an unhandled exception), or by a
programmer-inserted instruction, or manually by the interactive
user.
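On a Unix-like system, a typical post-mortem session over a core dump
might look like this (program and file names are placeholders):

$ ulimit -c unlimited     # allow the system to write core dumps
$ ./myprog                # the program crashes
Segmentation fault (core dumped)
$ gdb ./myprog core       # load the executable together with its dump
(gdb) bt                  # backtrace: the call stack at the moment of the crash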
Delta Debugging - technique of automating test case
simplification.
Saff Squeeze - technique of isolating failure within the test using
progressive inlining of parts of the failing test.
Q. What is a bug, error & defect?
Ans. Error: A discrepancy between a computed, observed, or
measured value or condition and the true, specified, or
theoretically correct value or condition. This can be a
misunderstanding of the internal state of the software, an
oversight in terms of memory management, confusion about the
proper way to calculate a value, etc.
Bug: A fault in a program which causes the program to perform in
an unintended or unanticipated manner. See: anomaly, defect,
error, exception, and fault. "Bug" is the tester's terminology.
Defect: Commonly refers to several troubles with the software
product, with its external behavior or with its internal features.
Q. What is a Macro?
Ans. A macro in computer science is a rule or pattern that
specifies how a certain input sequence (often a sequence of
characters) should be mapped to a replacement output sequence
(also often a sequence of characters) according to a defined
procedure. The mapping process that instantiates (transforms)
a macro use into a specific sequence is known as macro
expansion.
Macros are used to make a sequence of computing instructions
available to the programmer as a single program statement,
making the programming task less tedious and less error-prone.
(Thus, they are called "macros" because a big block of code can
be expanded from a small sequence of characters.) Macros often
allow positional or keyword parameters that dictate what the
conditional assembler program generates, and have been used to
create entire programs or program suites according to such
variables as operating system, platform or other factors. The term
derives from "macro instruction", and such expansions were
originally used in generating assembly language code.
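A short C sketch of macro expansion (the SQUARE macro is an illustrative
assumption): the preprocessor rewrites each use of the macro into the
replacement sequence before compilation proper begins.

#include <stdio.h>

/* One program statement stands in for a longer instruction sequence. */
#define SQUARE(x) ((x) * (x))

int main(void)
{
    /* The preprocessor expands SQUARE(5) into ((5) * (5)). */
    printf("%d\n", SQUARE(5));    /* prints 25 */
    return 0;
}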
