Adina Institute of Science & Technology

Department of Computer Science & Engg.


M.Tech CSE-II Sem
Lab Manuals
MCSE 204
INDEX
List of Practicals: MCSE 204

Sr. No.  Object                                                                     Signature
1        To study language processors.
2        To study the phases of a compiler.
3        To study dynamic storage allocation techniques.
4        To study register allocation techniques.
5        To study parallelizing compilers.
6        To study parallelism detection.
7        To study data-flow analysis.
8        To study program representation for optimization.
9        To study distributed operating systems.
10       To study multiprocessor system architecture and process synchronization.
11       To study the access matrix model.
AIM: To Study of Language processors.
By a language processor, we mean a program that processes programs written in a programming
language (source language). All or part of a language processor is a language translator, which
translates the program from the source language into machine code, assembly language, or
some other language. The machine code can be for an actual computer or for a virtual
(hypothetical) computer. If it is for a virtual computer, then a simulator for the virtual computer
is needed in order to execute the translated program.

If a language processor is a translator that produces machine or assembly code as output (as
object code or executable code), then it is called a compiler. If the language processor itself
executes the translated program (the output of the translator), then it is called an interpreter.

In a typical programming language implementation, source program components (files or
modules) are first translated into machine language to produce components called object
modules or object files. Following the translation step, a linkage editor (or linker) combines
multiple object components for a program with components from libraries to produce an
executable program. This can occur either as an intermediate step, or in some cases it may occur
as the program executes, loading each component as it is needed. The execution of a program
may be done by an actual computer or by a simulator for a virtual computer.

Program components in languages such as C are normally compiled into object files, which are
combined into an executable file by a linkage editor or linking loader. The linkage editor adjusts
addresses as needed when it combines the object modules, and it also puts in the addresses
where a module references a location in another module (such as for a function call). If an
executable file is produced, then there will also be a loader program that loads an executable file
into memory so that it can execute. The loader may also do some final adjustments on addresses
to correspond to the actual locations in memory where the executing program will reside.
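
As a concrete illustration, the whole translate-link-load sequence can be traced with the GNU
C toolchain (the source file names here are hypothetical):

gcc -c main.c                # translation: produces the object module main.o
gcc -c util.c                # translation: produces the object module util.o
gcc main.o util.o -o prog    # linking: combines object modules (and library code) into an executable
./prog                       # loading: the OS loader maps prog into memory and runs it
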
AIM: To study the phases of a compiler.
• Compilation of a program proceeds through a fixed series of phases
-Each phase uses an (intermediate) form of the program produced by an earlier phase
-Subsequent phases operate on lower-level code representations
• Each phase may consist of a number of passes over the program representation
-Pascal, FORTRAN, and C were designed for one-pass compilation, which explains
the need for function prototypes
-Single-pass compilers need less memory to operate
-Java and Ada are multi-pass
• Lexical analysis breaks a program up into tokens
-Grouping characters into non-separable units (tokens)
-Changing a stream of characters into a stream of tokens (see the sketch below)
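
A minimal sketch of what lexical analysis does, assuming a hypothetical input string; a real
lexer would also handle keywords, multi-character operators, and error recovery:

/* Minimal lexical-analysis sketch: groups the characters of the input
   "count = count + 42;" into identifier, number, and operator tokens. */
#include <stdio.h>
#include <ctype.h>

int main(void) {
    const char *p = "count = count + 42;";
    char buf[32];
    while (*p) {
        if (isspace((unsigned char)*p)) { p++; continue; }
        int n = 0;
        if (isalpha((unsigned char)*p)) {            /* identifier token */
            while (isalnum((unsigned char)*p)) buf[n++] = *p++;
            buf[n] = '\0';
            printf("IDENT   %s\n", buf);
        } else if (isdigit((unsigned char)*p)) {     /* number token */
            while (isdigit((unsigned char)*p)) buf[n++] = *p++;
            buf[n] = '\0';
            printf("NUMBER  %s\n", buf);
        } else {                                     /* single-character operator/punctuation */
            printf("OP      %c\n", *p++);
        }
    }
    return 0;
}
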
Syntax Analysis
Checks whether the token stream meets the grammatical specification of the language
and generates the syntax tree.
• A syntax error is produced by the compiler when the program does not meet the
grammatical specification.
• For a grammatically correct program, this phase generates an internal representation
that is easy to manipulate in later phases.
• Typically this is a syntax tree (also called a parse tree).
The grammar of a programming language is typically described by a context-free grammar,
which also defines the structure of the parse tree.
Semantic Analysis
• Semantic analysis is applied by a compiler to discover the meaning of a program by
analyzing its parse tree or abstract syntax tree.
• Static semantic checks (done by the compiler) are performed at compile time.
• Dynamic semantic checks are performed at run time, and the compiler produces code that
performs these checks.
Code Generation and Intermediate Code Forms
A typical intermediate form of code produced by the semantic analyzer is an abstract syntax tree
(AST). The AST is annotated with useful information, such as pointers to the symbol-table
entries of identifiers.
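
A sketch of what such an annotated AST node might look like in C; the type names here
(NodeKind, SymEntry) are illustrative, not a prescribed layout:

/* Sketch of an annotated AST node for expressions. */
struct SymEntry;                     /* symbol-table entry, filled in by semantic analysis */

enum NodeKind { N_ADD, N_MUL, N_IDENT, N_NUM };

struct AstNode {
    enum NodeKind kind;
    struct AstNode *left, *right;    /* children (NULL for leaves) */
    int value;                       /* constant value, for N_NUM */
    struct SymEntry *sym;            /* annotation: symbol-table entry, for N_IDENT */
};
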
AIM: To study dynamic storage allocation techniques.
Memory management is the act of managing computer memory. The essential requirement of
memory management is to provide ways to dynamically allocate portions of memory to
programs at their request, and free it for reuse when no longer needed. This is critical to any
advanced computer system where more than a single process might be underway at any time.[1]

Several methods have been devised that increase the effectiveness of memory management.
Virtual memory systems separate the memory addresses used by a process from actual physical
addresses, allowing separation of processes and increasing the effectively available amount of
RAM using paging or swapping to secondary storage. The task of fulfilling an allocation
request consists of locating a block of unused memory of sufficient size. Memory requests are
satisfied by allocating portions from a large pool of memory called the heap or free store. At
any given time, some parts of the heap are in use, while some are "free" (unused) and thus
available for future allocations.

Several issues complicate the implementation, such as external fragmentation, which arises
when free memory is split into many small gaps between allocated memory blocks, none of
which is large enough on its own to satisfy an allocation request.

Fixed-size blocks allocation

Fixed-size blocks allocation, also called memory pool allocation, uses a free list of fixed-size
blocks of memory (often all of the same size). This works well for simple embedded systems
where no large objects need to be allocated, but suffers from fragmentation, especially with
long memory addresses. However, due to the significantly reduced overhead this method can
substantially improve performance for objects that need frequent allocation / de-allocation and
is often used in video games.
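
A minimal sketch of a fixed-size block allocator in C, with the free list threaded through the
unused blocks themselves; the block size and count are arbitrary:

/* Fixed-size block (memory pool) allocator sketch: O(1) alloc and free. */
typedef union Block {
    union Block *next;               /* free-list link, stored inside the block */
    unsigned char payload[64];       /* usable space when allocated */
} Block;

static Block pool[128];
static Block *free_list = 0;

void pool_init(void) {               /* link every block onto the free list */
    for (int i = 0; i < 128; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

void *pool_alloc(void) {             /* pop the first free block */
    Block *b = free_list;
    if (b) free_list = b->next;
    return b;                        /* NULL when the pool is exhausted */
}

void pool_free(void *p) {            /* push the block back onto the list */
    Block *b = p;
    b->next = free_list;
    free_list = b;
}
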

Buddy blocks
In this system, memory is allocated into several pools of memory instead of just one, where
each pool represents blocks of memory of a certain power of two in size. All blocks of a
particular size are kept in a sorted linked list or tree and all new blocks that are formed during
allocation are added to their respective memory pools for later use. If a smaller size is requested
than is available, the smallest available size is selected and halved. One of the resulting halves
is selected, and the process repeats until the request is complete. When a block is allocated, the
allocator will start with the smallest sufficiently large block to avoid needlessly breaking
blocks. When a block is freed, it is compared to its buddy. If they are both free, they are
combined and placed in the next-largest size buddy-block list.
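
Because all sizes are powers of two, a block's buddy can be found by flipping a single address
bit, which is what makes the coalescing check cheap. A small sketch in C (the surrounding
allocator bookkeeping is omitted):

/* Buddy-system sketch: the buddy of the block at byte offset `off`
   (of size `size`, a power of two) differs in exactly one bit. */
#include <stdio.h>

static unsigned long buddy_of(unsigned long off, unsigned long size) {
    return off ^ size;               /* toggle the bit that distinguishes the pair */
}

int main(void) {
    /* A 256-byte block at offset 512 splits into two 128-byte buddies
       at offsets 512 and 640; each is the other's coalescing partner. */
    printf("buddy of 512 (size 128) = %lu\n", buddy_of(512, 128));  /* 640 */
    printf("buddy of 640 (size 128) = %lu\n", buddy_of(640, 128));  /* 512 */
    return 0;
}
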
AIM: To study register allocation techniques.
In compiler optimization, register allocation is the process of assigning a large number of target
program variables to a small number of CPU registers. Register allocation can happen over a
basic block (local register allocation), over a whole function/procedure (global register
allocation), or across function boundaries traversed via the call graph (interprocedural register
allocation). When done per function/procedure, the calling convention may require the insertion
of save/restore code around each call site.

In many programming languages, the programmer has the illusion of allocating arbitrarily many
variables. However, during compilation, the compiler must decide how to allocate these
variables to a small, finite set of registers. Not all variables are in use (or "live") at the same
time, so some registers may be assigned to more than one variable. However, two variables in
use at the same time cannot be assigned to the same register without corrupting its value.
Variables which cannot be assigned to some register must be kept in RAM and loaded in/out for
every read/write, a process called spilling. Accessing RAM is significantly slower than
accessing registers and slows down the execution speed of the compiled program, so an
optimizing compiler aims to assign as many variables to registers as possible. Register pressure
is the term used when there are fewer hardware registers available than would have been
optimal; higher pressure usually means that more spills and reloads are needed.

In addition, programs can be further optimized by assigning the same register to a source and
destination of a move instruction whenever possible. This is especially important if the compiler
is using other optimizations such as SSA analysis, which artificially generates additional move
instructions in the intermediate code.

Register allocators come in several types, with Iterated Register Coalescing (IRC) being a
common one. IRC was invented by Lal George and Andrew Appel in 1996, building on earlier
work by Gregory Chaitin. IRC works based on a few principles. First, if there are any non-move-
related vertices in the graph with degree less than K, the graph can be simplified by removing
those vertices, since once those vertices are added back in it is guaranteed that a color can be
found for them (simplification). Second, two vertices sharing a preference edge whose combined
adjacency sets have degree less than K can be combined into a single vertex, by the same
reasoning (coalescing). If neither of the two steps can simplify the graph, simplification can be
run again on move-related vertices (freezing). Finally, if nothing else works, vertices can be
marked for potential spilling and removed from the graph (spill). Since all of these steps reduce
the degrees of vertices in the graph, vertices may go from being high-degree (degree ≥ K) to
low-degree during the algorithm, enabling them to be simplified or coalesced. Thus, the stages
of the algorithm are iterated to ensure aggressive simplification and coalescing.
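
The following C sketch shows only the simplify and select phases of this style of graph
colouring, on a hypothetical interference graph with K = 3; IRC's coalescing, freezing, and
spill handling are omitted for brevity:

/* Simplified Chaitin-style allocation: simplify (remove degree < K
   vertices onto a stack), then select (pop and colour). */
#include <stdio.h>

#define N 5        /* virtual registers v0..v4 */
#define K 3        /* available hardware registers */

static int adj[N][N] = {          /* 1 = the two live ranges interfere */
    {0,1,1,0,0},
    {1,0,1,1,0},
    {1,1,0,1,0},
    {0,1,1,0,1},
    {0,0,0,1,0},
};

int main(void) {
    int removed[N] = {0}, stack[N], top = 0, color[N];

    /* Simplify: repeatedly remove some vertex with degree < K. */
    for (int pass = 0; pass < N; pass++) {
        for (int v = 0; v < N; v++) {
            if (removed[v]) continue;
            int deg = 0;
            for (int u = 0; u < N; u++)
                if (!removed[u] && adj[v][u]) deg++;
            if (deg < K) { removed[v] = 1; stack[top++] = v; break; }
        }
    }
    if (top < N) { printf("potential spill needed\n"); return 1; }

    /* Select: pop vertices, giving each a colour unused by its neighbours. */
    while (top > 0) {
        int v = stack[--top], used[K] = {0};
        for (int u = 0; u < N; u++)
            if (adj[v][u] && !removed[u]) used[color[u]] = 1;
        for (int c = 0; c < K; c++)
            if (!used[c]) { color[v] = c; break; }
        removed[v] = 0;              /* v is back in the graph, coloured */
    }
    for (int v = 0; v < N; v++)
        printf("v%d -> r%d\n", v, color[v]);
    return 0;
}
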
AIM: To study parallelizing compilers.
Automatic parallelization, also auto parallelization, autoparallelization, or parallelization, the
last one of which implies automation when used in context, refers to converting sequential code
into multi-threaded or vectorized (or even both) code in order to utilize multiple processors
simultaneously in a shared-memory multiprocessor (SMP) machine. The goal of automatic
parallelization is to relieve programmers from the tedious and error-prone manual
parallelization process. Though the quality of automatic parallelization has improved in the past
several decades, fully automatic parallelization of sequential programs by compilers remains a
grand challenge due to its need for complex program analysis and the unknown factors (such as
input data range) during compilation.

The programming control structures on which autoparallelization places the most focus are
loops, because, in general, most of the execution time of a program takes place inside some
form of loop. There are two main approaches to parallelization of loops: pipelined multi-
threading and cyclic multi-threading.

For example, consider a loop that applies a hundred operations on each iteration and runs for a
thousand iterations. This can be thought of as a grid of 100 columns by 1000 rows, a total of
100,000 operations. Cyclic multi-threading assigns each row to a different thread. Pipelined
multi-threading assigns each column to a different thread.
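
In C with OpenMP, cyclic multi-threading of such a loop is a one-line annotation; the grid and
the work done per element here are placeholders:

/* Cyclic multi-threading sketch using OpenMP: each of the 1000 "rows"
   (iterations) is handed to some thread; the 100 "column" operations of
   a row stay together. Compile with -fopenmp (GCC/Clang). */
#include <stdio.h>

#define ROWS 1000
#define COLS 100

int main(void) {
    static double grid[ROWS][COLS];
    #pragma omp parallel for            /* distribute iterations over threads */
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)  /* the ~100 operations of one iteration */
            grid[i][j] = i * 0.5 + j;
    printf("grid[999][99] = %f\n", grid[999][99]);
    return 0;
}
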

Compiler parallelization analysis

The compiler usually conducts two passes of analysis before actual parallelization in order to
determine the following:

• Is it safe to parallelize the loop? Answering this question requires accurate dependence
analysis and alias analysis.
• Is it worthwhile to parallelize it? This answer requires a reliable estimation (modeling) of
the program workload and the capacity of the parallel system.

The first pass of the compiler performs a data dependence analysis of the loop to determine
whether each iteration of the loop can be executed independently of the others. Data
dependence can sometimes be dealt with, but it may incur additional overhead in the form of
message passing, synchronization of shared memory, or some other method of processor
communication.
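
Two hypothetical loops illustrating what the dependence analysis must distinguish:

/* Illustrative dependence-analysis cases. */
void example(double *a, double *b, const double *c, int n) {
    /* Loop-carried dependence: iteration i reads a[i-1], which was written
       by iteration i-1, so the iterations cannot simply run in parallel. */
    for (int i = 1; i < n; i++)
        a[i] = a[i - 1] + 1.0;

    /* No cross-iteration dependence: each iteration touches only its own
       element, so this loop is safe to parallelize. */
    for (int i = 0; i < n; i++)
        b[i] = c[i] * 2.0;
}
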

The second pass attempts to justify the parallelization effort by comparing the theoretical
execution time of the code after parallelization to the code's sequential execution time.
Somewhat counterintuitively, code does not always benefit from parallel execution. The extra
overhead that can be associated with using multiple processors can eat into the potential
speedup of parallelized code.

Historical parallelizing compilers

Most research compilers for automatic parallelization consider Fortran programs, because
Fortran makes stronger guarantees about aliasing than languages such as C. Typical examples
are:

• Paradigm compiler
• Polaris compiler
• Rice Fortran D compiler
• SUIF compiler
• Vienna Fortran compiler
AIM: To study parallelism detection.
The Polaris compiler includes the program analysis and transformation techniques that were
found to be most important in a prior manual parallelization project [2]. At the core of any
autoparallelizer is a data dependence detection mechanism. Data dependences prevent
parallelism, making dependence-removing techniques essential parts of an autoparallelizer's
arsenal. To this end, Polaris includes passes for data privatization, reduction recognition, and
induction variable substitution. The compiler focuses on detecting fully parallel loops, which
have independent iterations and can thus be executed simultaneously by multiple processors.

For the basics of the following techniques, see the section on automatic parallelization above.

Data-dependence test: Typical data dependence tests detect whether or not two accesses to a data
array in two different loop iterations could reference the same array element. The detection
works well where array subscripts are linear, that is, of the form a*i + b*j, where a and b are
integer constants and i and j are index variables of enclosing loops. The Polaris project
developed new dependence analysis techniques that are able to detect parallelism in the presence
of symbolic and non-linear array subscript expressions.

For example, in the above expression, if a is a variable rather than a constant, the subscript is
considered symbolic; if a term such as i*i (i squared) appears, the subscript is non-linear. If the
compiler cannot determine the value of a symbolic term, it cannot assume the subscript is linear.

Hence, symbolic and non-linear expressions are related. In real programs it is common for
expressions, including array subscripts, to contain symbolic terms other than the loop indices.
Through non-linear, symbolic data-dependence testing, Polaris was able to parallelize several
important programs that previous compilers could not.

Privatization: Data privatization [8] is a key enabler of improved parallelism detection. A
privatization pattern can be viewed as one where a variable, say t, is being used as temporary
storage during a loop iteration. The compiler recognizes this pattern in that t is first defined
(written) before being used (read) in the loop iteration. By giving each iteration a separate copy
of the storage space for t, accesses to t in multiple iterations do not conflict. Polaris extended
the basic technique so that it could detect entire arrays that can be privatized.
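
A sketch of the pattern in C, using OpenMP's private clause as one way to realize the
privatization; the function and variable names are illustrative:

/* Privatization sketch: t is written before it is read in every iteration,
   so each thread can safely get its own copy of t. */
void scale(double *a, const double *b, int n) {
    double t;
    #pragma omp parallel for private(t)
    for (int i = 0; i < n; i++) {
        t = b[i] * b[i];   /* t is defined (written) first ... */
        a[i] = t + 1.0;    /* ... then used, so iterations do not conflict */
    }
}
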
Reduction recognition: This transformation is another important enabler of parallelization.
Similar to the way Polaris extended the privatization technique from scalars to arrays, it
extended reduction recognition [9].
The following loop shows an example array reduction (sometimes referred to as an irregular
or histogram reduction).
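
A representative sketch in C (the array and index names are illustrative; any similar update
through a subscripted subscript fits the pattern):

/* Histogram (irregular array) reduction: the update target hist[idx[i]]
   is unknown at compile time, yet the loop can still be parallelized,
   e.g. by giving each thread a private copy of hist and merging at the end. */
void histogram(int *hist, const int *idx, int n) {
    for (int i = 0; i < n; i++)
        hist[idx[i]] += 1;    /* subscripted subscript: an irregular reduction */
}
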
AIM: To study data-flow analysis.
Data-flow analysis is a technique for gathering information about the possible set of values
calculated at various points in a computer program. A program's control flow graph (CFG) is
used to determine those parts of a program to which a particular value assigned to a variable
might propagate. The information gathered is often used by compilers when optimizing a
program. A canonical example of a data-flow analysis is reaching definitions.

A simple way to perform data-flow analysis of programs is to set up data-flow equations for
each node of the control flow graph and solve them by repeatedly calculating the output from
the input locally at each node until the whole system stabilizes, i.e., it reaches a fixpoint. This
general approach was developed by Gary Kildall while teaching at the Naval Postgraduate
School.
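
A minimal sketch of this iterative scheme in C, solving the liveness (live-variable) equations
on a hypothetical three-node straight-line CFG, with variables encoded as bits:

/* Iterative data-flow sketch: liveness on a 3-node CFG, variables as bits
   (x=1, y=2, z=4). Equations, iterated to a fixpoint:
     in[n]  = use[n] | (out[n] & ~def[n])
     out[n] = union of in[s] over successors s of n        */
#include <stdio.h>

#define NODES 3

int main(void) {
    /* node 0: x = 1;   node 1: y = x + 1;   node 2: return x + y; */
    unsigned use[NODES] = {0, 1, 1 | 2};   /* variables read before written */
    unsigned def[NODES] = {1, 2, 0};       /* variables written */
    int succ[NODES]     = {1, 2, -1};      /* straight-line successors */

    unsigned in[NODES] = {0}, out[NODES] = {0};
    int changed = 1;
    while (changed) {                      /* repeat until nothing changes */
        changed = 0;
        for (int n = NODES - 1; n >= 0; n--) {
            unsigned o = (succ[n] >= 0) ? in[succ[n]] : 0;
            unsigned i = use[n] | (o & ~def[n]);
            if (o != out[n] || i != in[n]) changed = 1;
            out[n] = o; in[n] = i;
        }
    }
    for (int n = 0; n < NODES; n++)
        printf("node %d: in=%x out=%x\n", n, in[n], out[n]);
    return 0;
}
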

Examples

The following are examples of properties of computer programs that can be calculated by data-
flow analysis. Note that the properties calculated by data-flow analysis are typically only
approximations of the real properties. This is because data-flow analysis operates on the
syntactical structure of the CFG without simulating the exact control flow of the program.
However, to still be useful in practice, a data-flow analysis algorithm is typically designed to
calculate an upper or lower approximation, respectively, of the real program properties.
Data-flow analysis is inherently flow-sensitive. Data-flow analysis is typically path-insensitive,
though it is possible to define data-flow equations that yield a path-sensitive analysis.

• A flow-sensitive analysis takes into account the order of statements in a program. For
example, a flow-insensitive pointer alias analysis may determine "variables x and y may
refer to the same location", while a flow-sensitive analysis may determine "after
statement 20, variables x and y may refer to the same location".
• A path-sensitive analysis computes different pieces of analysis information dependent on
the predicates at conditional branch instructions. For instance, if a branch contains a
condition x>0, then on the fall-through path, the analysis would assume that x<=0 and on
the target of the branch it would assume that indeed x>0 holds.
• A context-sensitive analysis is an interprocedural analysis that considers the calling
context when analyzing the target of a function call. In particular, using context
information one can jump back to the original call site, whereas without that information,
the analysis information has to be propagated back to all possible call sites, potentially
losing precision.

List of data-flow analyses

• Reaching definitions
• Liveness analysis
• Definite assignment analysis
• Available expressions
• Constant propagation
AIM: To study program representation for optimization.
The goal of program optimisation is to discover, at compilation time, information about the
run-time behaviour of the program, and to use that information to improve the generated code.
What improving means depends on the situation: often it implies reducing the execution time,
but it can also imply reducing the size of the generated code, or the memory consumed, etc. In
this course, we will concentrate on the optimisation of execution time.
Correctness of optimization
The most important feature of any optimisation is that it is correct, in the sense that it preserves
the behaviour of the original program. This implies in particular that if the original program
would have failed during execution, the optimised one must also fail, and for the same reason
– a property that is often forgotten.
Two kinds of optimisations can be distinguished:
• machine-independent optimisations, which decrease the amount of work that the program has
to perform – e.g. dead code elimination,
• machine-dependent optimisations, which take advantage of characteristics of the target
machine – e.g. instruction scheduling.
Optimisation examples
Machine-dependent optimisations include:
• instruction scheduling, which rearranges instructions to avoid processor stalls,
• register allocation, which tries to use registers instead of memory as much as possible,
• peephole optimisation, which replaces given instruction sequences by faster alternatives.
Program representation
The representation used for the program plays a crucial role in optimisation. It must be at the
right level of abstraction to ensure that:
• the analysis is as easy as possible,
• no opportunities are lost – e.g. some common sub-expressions only appear after high-level
constructs like array accesses have been translated to more basic instructions (see the sketch
below).
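
A small illustration in C, with the lowered form written out as comments; the three-address
names t1..t3 are hypothetical:

/* Why representation level matters: at source level a[i] and b[i] share
   nothing visible, but once array accesses are lowered to address
   arithmetic, the scaled index i*4 appears twice and becomes a common
   sub-expression that the optimiser can compute once. */
int sum_pair(const int *a, const int *b, int i) {
    /* Lowered (three-address) form:
         t1 = i * 4          ; scaled index
         t2 = load a + t1
         t3 = load b + t1    ; t1 reused: common sub-expression
         return t2 + t3                                          */
    return a[i] + b[i];
}
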
AIM: To study distributed operating systems.
The word distributed in terms such as "distributed system", "distributed programming", and
"distributed algorithm" originally referred to computer networks where individual computers
were physically distributed within some geographical area.[5] The terms are nowadays used in a
much wider sense, even referring to autonomous processes that run on the same physical
computer and interact with each other by message passing.[4] While there is no single
definition of a distributed system,[6] the following defining properties are commonly used:

There are several autonomous computational entities, each of which has its own local memory.
The entities communicate with each other by message passing. Here, the computational entities
are called computers or nodes. A distributed system may have a common goal, such as solving a
large computational problem. Alternatively, each computer may have its own user with
individual needs, and the purpose of the distributed system is to coordinate the use of shared
resources or provide communication services to the users. Other typical properties of distributed
systems include the following: the system has to tolerate failures in individual computers.

The structure of the system (network topology, network latency, number of computers) is not
known in advance, the system may consist of different kinds of computers and network links,
and the system may change during the execution of a distributed program. Each computer has
only a limited, incomplete view of the system. Each computer may know only one part of the
input.

[Figure: (a) a distributed system; (b) a parallel system.]

Architecture

Client/Server System: The client-server architecture is a way to dispense a service from a
central source. There is a single server that provides a service, and many clients that
communicate with the server to consume its products. In this architecture, clients and servers
have different jobs. The server's job is to respond to service requests from clients, while a
client's job is to use the data provided in response in order to perform some task.
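
A minimal server-side sketch in C using POSIX sockets; the port number and reply are
placeholders, and error handling is reduced to early exits:

/* Minimal client/server sketch (server side): a TCP service that answers
   every connection with a fixed reply, one client at a time. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    if (srv < 0) return 1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                /* hypothetical service port */

    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) return 1;
    if (listen(srv, 8) < 0) return 1;

    for (;;) {                                  /* serve requests forever */
        int cli = accept(srv, NULL, NULL);
        if (cli < 0) continue;
        const char *reply = "hello from server\n";
        write(cli, reply, strlen(reply));       /* respond to the client */
        close(cli);
    }
}
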
AIM: To study the access matrix model.
In computer science, an Access Control Matrix or Access Matrix is an abstract, formal security
model of protection state in computer systems, that characterizes the rights of each subject with
respect to every object in the system. It was first introduced by Butler W. Lampson in 1971.[1]

An access matrix can be envisioned as a rectangular array of cells, with one row per subject and
one column per object. The entry in a cell - that is, the entry for a particular subject-object pair -
indicates the access mode that the subject is permitted to exercise on the object. Each column is
equivalent to an access control list for the object; and each row is equivalent to an access profile
for the subject.

According to the model, the protection state of a computer system can be abstracted as a set of
objects O, that is, the set of entities that need to be protected (e.g. processes, files, memory
pages), and a set of subjects S, that consists of all active entities (e.g. users, processes). Further
there exists a set of rights R of the form r(s, o), where s ∈ S, o ∈ O and r(s, o) ⊆ R. A right
thereby specifies the kind of access a subject is allowed to exercise on an object.

Example

In this example matrix there are two processes, a file, and a device. The first process has the
ability to execute the second, read the file, and write some information to the device, while the
second process can only send information to the first.

         Asset 1                     Asset 2                     file    device
Role 1   read, write, execute, own   execute                     read    write
Role 2   read                        read, write, execute, own   -       -
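
The same example matrix can be written down directly as a two-dimensional array of right
bit-sets; this C sketch mirrors the table above (the enum names are illustrative):

/* Access matrix as a 2-D array of right bit-sets, indexed by
   (subject row, object column). */
#include <stdio.h>

enum { R_READ = 1, R_WRITE = 2, R_EXEC = 4, R_OWN = 8 };

enum { ROLE1, ROLE2, NSUBJ };                  /* subjects: rows */
enum { ASSET1, ASSET2, FILE1, DEVICE, NOBJ };  /* objects: columns */

static unsigned matrix[NSUBJ][NOBJ] = {
    /* Asset 1                      Asset 2                       file    device */
    { R_READ|R_WRITE|R_EXEC|R_OWN,  R_EXEC,                       R_READ, R_WRITE },
    { R_READ,                       R_READ|R_WRITE|R_EXEC|R_OWN,  0,      0       },
};

static int allowed(int subj, int obj, unsigned right) {
    return (matrix[subj][obj] & right) != 0;   /* check the (subject, object) cell */
}

int main(void) {
    printf("Role 1 may write device: %d\n", allowed(ROLE1, DEVICE, R_WRITE)); /* 1 */
    printf("Role 2 may read file:    %d\n", allowed(ROLE2, FILE1,  R_READ));  /* 0 */
    return 0;
}
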
AIM: To study multiprocessor system architecture and process synchronization.
Multiprocessing is the use of two or more central processing units (CPUs) within a single
computer system.[1][2] The term also refers to the ability of a system to support more than one
processor and/or the ability to allocate tasks between them.[3] There are many variations on this
basic theme, and the definition of multiprocessing can vary with context, mostly as a function
of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple
packages in one system unit, etc.). At the operating system level, multiprocessing is sometimes
used to refer to the execution of multiple concurrent processes in a system as opposed to a
single process at any one instant. When used with this definition, multiprocessing is sometimes
contrasted with multitasking, which may use just a single processor but switch it in time slices
between tasks (i.e. a time-sharing system).

Multiprocessing, however, means true parallel execution of multiple processes using more than
one processor. Multiprocessing doesn't necessarily mean that a single process or task uses more
than one processor simultaneously; the term parallel processing is generally used to denote that
scenario. Other authors prefer to refer to the operating system techniques as multiprogramming
and reserve the term multiprocessing for the hardware aspect of having more than one processor.
The remainder of this discussion treats multiprocessing only in this hardware sense. In Flynn's
taxonomy, multiprocessors as defined above are MIMD machines. As they are normally
construed to be tightly coupled (sharing memory), multiprocessors are not the entire class of
MIMD machines, which also contains message-passing multicomputer systems.

Processor symmetry: In a multiprocessing system, all CPUs may be equal, or some may be reserved for
special purposes. A combination of hardware and operating system software design
considerations determine the symmetry (or lack thereof) in a given system. For example,
hardware or software considerations may require that only one particular CPU respond to all
hardware interrupts, whereas all other work in the system may be distributed equally among
CPUs; or execution of kernel-mode code may be restricted to only one particular CPU, whereas
user-mode code may be executed in any combination of processors. Multiprocessing systems
are often easier to design if such restrictions are imposed, but they tend to be less efficient than
systems in which all CPUs are utilized.
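
A small C sketch of process/thread synchronization on such a system: two threads share a
counter, and a POSIX mutex serializes the read-modify-write (compile with -pthread):

/* Synchronization sketch: without the mutex, the two threads' increments
   could interleave and lose updates; with it, the final count is exact. */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* leave critical section */
    }
    return arg;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}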
