
Computer Architecture and Organization


AMBALA COLLEGE OF ENGINEERING AND APPLIED RESEARCH

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


CSE 307N: Computer Organization and Architecture
Sessional Exam I
B.Tech 5th Semester
Time: 1 hr. Max. Marks: 30
Note: Attempt two questions from the respective set.

SET A (ODD ROLL NO.)

Q1. (a) Differentiate between the Von Neumann stored program concept and the Harvard 8
stored program concept.
Ans There are two types of digital computer architectures that describe the functionality
and implementation of computer systems. One is the Von Neumann architecture
that was designed by the renowned physicist and mathematician John Von
Neumann in the late 1940s, and the other one is the Harvard architecture which was
based on the original Harvard Mark I relay-based computer which employed
separate memory systems to store data and instructions.
The original Harvard architecture used to store instructions on punched tape and
data in electro-mechanical counters. The Von Neumann architecture forms the basis
of modern computing and is easier to implement. The differences between the two are
given below:

Von Neumann Architecture:
i. A single memory holds both instructions and data.
ii. Instructions and data share one bus, so an instruction fetch and a data access
cannot occur at the same time (the Von Neumann bottleneck).
iii. Simpler and cheaper to implement.
Harvard Architecture:
i. Physically separate memories hold instructions and data.
ii. Separate buses allow an instruction fetch and a data access to occur
simultaneously, giving higher throughput.
iii. More complex and costly; common today in microcontrollers and DSPs.

(b) What do you mean by Flynn’s classification of computers? Describe. 7


Ans Parallel computing is a form of computation in which jobs are broken into discrete
parts that can be executed concurrently. Each part is further broken down into a
series of instructions, and instructions from each part execute simultaneously on
different CPUs. Parallel systems deal with the simultaneous use of multiple
computer resources: a single computer with multiple processors, a number of
computers connected by a network to form a parallel processing cluster, or a
combination of both. Parallel systems are more difficult to program than computers
with a single processor because the architecture of parallel computers varies, and
the processes of multiple CPUs must be coordinated and synchronized. CPUs are at
the crux of parallel processing. Based on the number of instruction streams and
data streams that can be processed simultaneously, computing systems are
classified into four major categories:

Flynn’s classification –
1. Single-instruction, single-data (SISD) systems: An SISD computing system
is a uniprocessor machine which is capable of executing a single instruction,
operating on a single data stream. In SISD, machine instructions are
processed in a sequential manner and computers adopting this model are
popularly called sequential computers. Most conventional computers have
SISD architecture. All the instructions and data to be processed have to be
stored in primary memory.

The speed of the processing element in the SISD model is limited by the rate at
which the computer can transfer information internally.
Representative SISD systems include the IBM PC and conventional workstations.
2. Single-instruction, multiple-data (SIMD) systems: An SIMD system is a
multiprocessor machine capable of executing the same instruction on all the
CPUs but operating on different data streams. Machines based on an SIMD
model are well suited to scientific computing since they involve lots of vector
and matrix operations. The data elements of a vector can be divided into multiple
sets (N sets for an N-PE system) so that each PE can process one data set.
A representative SIMD system is Cray's vector processing machine.
3. Multiple-instruction, single-data (MISD) systems: An MISD computing
system is a multiprocessor machine capable of executing different instructions
on different PEs but all of them operating on the same data set.

Example: Z = sin(x) + cos(x) + tan(x), where the system performs different
operations on the same data set. Machines built using the MISD model are not
useful in most applications; a few machines have been built, but none of them are
available commercially.
4. Multiple-instruction, multiple-data (MIMD) systems: An MIMD system is a
multiprocessor machine which is capable of executing multiple instructions on
multiple data sets. Each PE in the MIMD model has separate instruction and
data streams; therefore machines built using this model are suited to any kind of
application. Unlike SIMD and MISD machines, the PEs in MIMD machines work
asynchronously.

MIMD machines are broadly categorized into shared-memory MIMD and
distributed-memory MIMD based on the way PEs are coupled to the main
memory. In the shared-memory MIMD model (tightly coupled multiprocessor
systems), all the PEs are connected to a single global memory and they all have
access to it. Communication between PEs in this model takes place through the
shared memory; a modification of the data stored in the global memory by one
PE is visible to all other PEs. Representative shared-memory MIMD systems are
Silicon Graphics machines and Sun/IBM SMP (Symmetric Multi-Processing)
machines. In distributed-memory MIMD machines (loosely coupled
multiprocessor systems), all PEs have a local memory. Communication between
PEs in this model takes place through the interconnection network (the
inter-process communication channel, or IPC). The network connecting PEs can
be configured as a tree, a mesh or any other topology in accordance with the
requirement. The shared-memory MIMD architecture is easier to program but is
less tolerant of failures and harder to extend than the distributed-memory
MIMD model. A failure in a shared-memory MIMD system affects the entire
system, whereas in the distributed model each of the PEs can be easily isolated.
Moreover, shared-memory MIMD architectures are less likely to scale because
the addition of more PEs leads to memory contention, a situation that does not
arise with distributed memory, in which each PE has its own memory. As a
result, for most practical workloads the distributed-memory MIMD architecture
scales better than the other models.
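The SISD and SIMD categories above can be contrasted with a short Python sketch (illustrative only; the function names are ours, not part of the original answer). A SISD machine applies one instruction to one datum per step, while a SIMD machine applies the same instruction to a whole vector in lock-step:

```python
# Illustrative sketch: the same "add 1" instruction in SISD vs SIMD style.

def sisd_increment(data):
    # SISD: one instruction stream, one data stream;
    # each element is processed sequentially, one per step.
    out = []
    for x in data:
        out.append(x + 1)
    return out

def simd_increment(data):
    # SIMD: the same instruction is (conceptually) applied to all
    # processing elements at once; a comprehension stands in for
    # the lock-step vector operation.
    return [x + 1 for x in data]

vec = [10, 20, 30, 40]
print(sisd_increment(vec))  # [11, 21, 31, 41]
print(simd_increment(vec))  # [11, 21, 31, 41]
```

Both produce the same result; the difference lies in how many data elements one instruction touches per step.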
Q2. (a) Define computer architecture, computer organization and computer design in brief. 6
Ans Computer Organization: Computer organization is concerned with the way the
hardware components operate and the way they are connected together to form a
computer system. It includes hardware details transparent to the programmer,
such as control signals and peripherals, and it describes how the computer
performs. Examples: circuit design, control signals and memory types all come
under computer organization. Computer organization is the realization of what is
specified by the computer architecture: it deals with how the operational
attributes are linked together to meet the requirements specified by the
architecture.
Computer Architecture: Computer architecture is concerned with the structure
and behavior of the computer system as seen by the user. It includes instruction
formats, the instruction set and techniques for addressing memory, and it
describes what the computer does. Computer architecture deals with the
operational attributes of the computer or, to be specific, the processor: details
such as physical memory, the ISA of the processor, the number of bits used to
represent data types, the input/output mechanism and the techniques for
addressing memory.
Example: Say you are constructing a house. The design and its specifications come
under computer architecture, while building it brick by brick, keeping the basic
architecture in mind, comes under computer organization.
Computer Design: Computer design is concerned with the hardware design of the computer. Once
the computer specifications are formulated, it is the task of the designer to develop
hardware for the system. Computer design is concerned with the determination of
what hardware should be used and how the parts should be connected. This aspect
of computer hardware is sometimes referred to as computer implementation.

(b) What do you mean by array multiplier? Design a 4x4 array multiplier. 9
Ans The logic circuit for 4 × 4 binary multiplication can be implemented using sixteen
AND gates together with three 4-bit parallel adders (built from full adders).
In this operation the first partial product is obtained by multiplying B0 with
A3A2A1A0, the second partial product is formed by multiplying B1 with
A3A2A1A0, and likewise for the 3rd and 4th partial products. These partial
products are generated with AND gates as shown in the figure, and are then
added using the 4-bit parallel adders. The three most significant bits of the first
partial product, with a carry-in of zero, are added to the second partial product
in the first adder. The result is then added to the next partial product together
with the carry out, and so on until the final partial product, which produces the
8-bit sum representing the product of the two binary numbers.
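The dataflow described above can be modelled in software. The following Python sketch (an illustrative gate-level model; the helper names are ours, not taken from the figure) builds the multiplier exactly as described: AND gates generate the partial products, and ripple-carry chains of full adders accumulate them into the 8-bit product.

```python
# Gate-level sketch of a 4x4 array multiplier (illustrative model).

def AND(a, b):
    return a & b

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add(x_bits, y_bits):
    # 4-bit ripple-carry adder built from full adders (LSB first).
    out, carry = [], 0
    for xb, yb in zip(x_bits, y_bits):
        s, carry = full_adder(xb, yb, carry)
        out.append(s)
    return out, carry

def array_multiply(a, b):
    a_bits = [(a >> i) & 1 for i in range(4)]   # A0..A3, LSB first
    b_bits = [(b >> i) & 1 for i in range(4)]   # B0..B3, LSB first
    # AND gates form the four partial products (one row per B bit).
    rows = [[AND(aj, bi) for aj in a_bits] for bi in b_bits]
    product = [rows[0][0]]          # bit 0 of the product
    acc = rows[0][1:] + [0]         # upper 3 bits of row 0, carry-in 0
    for row in rows[1:]:
        acc, carry = ripple_add(acc, row)
        product.append(acc[0])      # next product bit drops out
        acc = acc[1:] + [carry]     # shifted sum continues upward
    product.extend(acc)             # remaining high-order bits
    return sum(bit << i for i, bit in enumerate(product))

print(array_multiply(13, 11))  # 143
```

Running the model over all 4-bit operand pairs confirms it matches ordinary multiplication.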

Q3. (a) Explain Booth multiplication algorithm. Show step by step multiplication process 8
using the Booth algorithm when the following binary numbers are multiplied:
(+15) x (+13)
Ans Booth's multiplication algorithm is a multiplication algorithm that multiplies two
signed binary numbers in two's complement notation. The algorithm was invented
by Andrew Donald Booth in 1950 while doing research on crystallography at
Birkbeck College in Bloomsbury, London. Flowchart is given below:

Step-by-step multiplication process using the Booth algorithm when the binary
numbers (+15) x (+13) are multiplied:
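Alongside the worked table, the algorithm can be sketched in Python. This is an illustrative model, not the flowchart's exact register diagram: a 5-bit width is chosen here to hold both operands, and a multiplicand of -2^(bits-1) is not supported because its negation overflows the register.

```python
# Booth's multiplication of two's-complement numbers (illustrative model).
# Registers: A (accumulator), Q (multiplier), q_1 (the extra bit Q-1).

def booth_multiply(m, q, bits=5):
    # Operands must fit in `bits`-bit two's complement;
    # m = -2**(bits-1) is excluded (its negation overflows).
    mask = (1 << bits) - 1
    M = m & mask
    A, Q, q_1 = 0, q & mask, 0
    for _ in range(bits):
        q0 = Q & 1
        if (q0, q_1) == (1, 0):        # pair 10: A = A - M
            A = (A - M) & mask
        elif (q0, q_1) == (0, 1):      # pair 01: A = A + M
            A = (A + M) & mask
        # Arithmetic shift right of the combined A,Q,Q-1 register.
        q_1 = q0
        Q = ((Q >> 1) | ((A & 1) << (bits - 1))) & mask
        A = (A >> 1) | (A & (1 << (bits - 1)))   # replicate sign bit
    result = (A << bits) | Q
    if result & (1 << (2 * bits - 1)):           # reinterpret as signed
        result -= 1 << (2 * bits)
    return result

print(booth_multiply(15, 13))   # 195
print(booth_multiply(15, -13))  # -195
```

The same routine covers the SET B variant of this question, since the 10 and 01 bit pairs handle negative multipliers without any special case.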

(b) Give the flowchart for addition and subtraction of two floating point numbers. 7
Ans Flowchart for addition and subtraction of two floating point numbers:
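The steps the flowchart captures (align the exponents, add or subtract the mantissas, normalize the result) can be sketched in Python. This is an illustrative simplified decimal (mantissa, exponent) model, not IEEE 754:

```python
# Sketch of floating-point addition/subtraction on (mantissa, exponent)
# pairs with a decimal exponent; subtraction is addition of a negative
# mantissa. Illustrative model only.

def fp_add(a, b):
    (ma, ea), (mb, eb) = a, b
    # Step 1: align exponents by shifting the smaller-exponent mantissa right.
    while ea < eb:
        ma /= 10.0
        ea += 1
    while eb < ea:
        mb /= 10.0
        eb += 1
    # Step 2: add (or effectively subtract) the aligned mantissas.
    m, e = ma + mb, ea
    # Step 3: normalize the mantissa back into [1, 10).
    while abs(m) >= 10.0:
        m /= 10.0
        e += 1
    while m != 0 and abs(m) < 1.0:
        m *= 10.0
        e -= 1
    return m, e

# 1.5e2 + 2.5e1 = 175 = 1.75e2
print(fp_add((1.5, 2), (2.5, 1)))  # (1.75, 2)
```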
SET B (EVEN ROLL NO.)

Q1. (a) What is multilevel computer architecture? Discuss in some detail. 7


Ans Multilevel viewpoint of a computer consists of seven layers:
Level 6: The User Level:
i. Program execution and user interface level.
ii. The level with which we are most familiar.
iii. Composed of application programs such as Word Processor, Paint etc.
iv. The implementation of the application is hidden completely from the user
Level 5: High-Level Language Level
i. The level with which we interact when we write programs in languages
such as C, Pascal, Lisp and Java.
ii. The level allows users to write their own application with languages such
as C, Java and many more.
iii. High-level languages are easier to read, write, and maintain
iv. User at this level sees very little of the lower level
Level 4: Assembly Language Level
i. Acts upon assembly language produced from Level 5, as well as
instructions programmed directly at this level.
ii. Lowest human readable form before dealing with 1s and 0s (machine
language)
iii. Assembler converts assembly to machine language
Level 3: System Software Level
i. Controls executing processes on the system.
ii. Protects system resources.
iii. Assembly language instructions often pass through Level 3 without
modification.
iv. Operating system software supervises other programs, controls the
execution of multiple programs, and protects system resources, e.g.
memory and I/O devices.
v. Other utilities: compilers, interpreters, linkers, libraries etc.
vi. The software can be written in both assembly and high-level languages.
High-level code is much more portable, i.e. easier to modify to work on
other machines.
Level 2: Machine Level
i. Also known as the Instruction Set Architecture (ISA) Level.
ii. Consists of instructions that are particular to the architecture of the
machine.
iii. Programs written in machine language need no compilers, interpreters, or
assemblers.
Level 1: Control Level
i. A control unit decodes and executes instructions and moves data through
the system.
ii. Control units can be microprogrammed or hardwired.
iii. A microprogram is a program written in a low-level language that is
implemented by the hardware.
iv. Hardwired control units consist of hardware that directly executes machine
instructions.
v. Detailed organization of a processor implementation: how the control unit
interprets machine instructions (from the fetch stage through execution).
vi. There can be different implementations of a single ISA
Level 0: Digital Logic Level
i. This level is where we find digital circuits (the chips).
ii. Digital circuits consist of gates and wires.
iii. These components implement the mathematical logic of all other levels.
iv. This level is where we view physical devices as just switches (on/off).
v. Instead of viewing their physical behavior (i.e. in terms of voltages and
currents), we use two-value logic, i.e. 0 (off) and 1 (on).
vi. We will briefly look at the physical electronic components, mainly the
transistor technology.
(b) Differentiate between fixed point and floating point representation of numbers. 8
Ans
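As an illustration of the difference (an added sketch, not a full answer): a fixed-point number is a scaled integer with an implied binary point and a constant absolute resolution, while a floating-point number carries its own exponent and offers a roughly constant relative precision over a far larger range. The Q8 scale factor below is an arbitrary illustrative choice:

```python
# Fixed point: a plain integer with an implied binary point (Q8 here,
# i.e. 8 fractional bits), giving a constant resolution of 1/256.
SCALE = 1 << 8

def to_fixed(x):
    return round(x * SCALE)

def from_fixed(f):
    return f / SCALE

f = to_fixed(3.14159)
print(f, from_fixed(f))   # 804 3.140625

# Floating point: sign, biased exponent and mantissa fields packed into
# one word; here we unpack the IEEE 754 binary32 fields of the same value.
import struct
bits = struct.unpack(">I", struct.pack(">f", 3.14159))[0]
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF    # biased by 127
mantissa = bits & 0x7FFFFF        # 23-bit fraction
print(sign, exponent - 127)       # sign 0, unbiased exponent 1
```

The fixed-point value loses everything below 1/256, while the floating-point value keeps about 7 significant decimal digits wherever the number lies in its range.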
Q2. (a) Discuss general system architecture in detail. 5
Ans The architecture of a contemporary general-purpose computer consists of a
processor, external devices, controllers for these devices, an operating (main)
memory, a cache memory, a memory controller, a DMA controller and a common
system bus. Each device controller is in charge of a specific group of devices; it
supervises their work and mediates in sending information to and from the
devices. This data is also buffered by the controller in its own memory, which
allows for fast transfers through the system bus independently of the device
speed. Bridge controllers can also be attached to the system bus; their task is to
link the main bus with lower-level buses (like PCI or ISA), which can in turn
have controllers of their own. External devices are also often connected through
a specific bus (e.g. SCSI) to a controller, so a typical system contains many
buses. All devices attached to a bus can work simultaneously; their tasks must
be synchronized only when the bus is used, mainly during a transfer of data to
or from a device. Such parallel operation boosts system efficiency.
(b) Differentiate between restoring and non-restoring algorithms for division with 10
examples.
Ans
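As an illustration of the distinction (an added sketch, not a full answer; the register model is ours): both algorithms shift the dividend into an accumulator A and perform a trial subtraction each cycle. The restoring version adds the divisor back whenever the trial goes negative, while the non-restoring version keeps the negative remainder and folds the correction into the next cycle by adding instead of subtracting, with a single final correction.

```python
# Restoring vs non-restoring division for unsigned n-bit operands.
# A = partial remainder, Q = dividend (becomes the quotient), M = divisor.

def restoring_divide(dividend, divisor, n=4):
    A, Q, M = 0, dividend, divisor
    for _ in range(n):
        A = (A << 1) | ((Q >> (n - 1)) & 1)   # shift A,Q left as one register
        Q = (Q << 1) & ((1 << n) - 1)
        A -= M                                 # trial subtraction
        if A < 0:
            A += M                             # restore on failure, Q bit = 0
        else:
            Q |= 1                             # success, Q bit = 1
    return Q, A                                # (quotient, remainder)

def nonrestoring_divide(dividend, divisor, n=4):
    A, Q, M = 0, dividend, divisor
    for _ in range(n):
        A = (A << 1) | ((Q >> (n - 1)) & 1)
        Q = (Q << 1) & ((1 << n) - 1)
        if A >= 0:
            A -= M                             # subtract while A is non-negative
        else:
            A += M                             # add instead of restoring
        if A >= 0:
            Q |= 1
    if A < 0:                                  # single final correction
        A += M
    return Q, A

print(restoring_divide(11, 3))     # (3, 2)
print(nonrestoring_divide(11, 3))  # (3, 2)
```

The non-restoring version does exactly one add or subtract per cycle, whereas the restoring version may need two (the trial subtraction plus the restore), which is why non-restoring hardware is faster.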
Q3. (a) Explain Booth multiplication algorithm. Show step by step multiplication process 8
using the Booth algorithm when the following binary numbers are multiplied:
(+15) x (-13)
Ans Booth's multiplication algorithm is a multiplication algorithm that multiplies two
signed binary numbers in two's complement notation. The algorithm was
invented by Andrew Donald Booth in 1950 while doing research on
crystallography at Birkbeck College in Bloomsbury, London. Flowchart is given
below:
Step-by-step multiplication process using the Booth algorithm when the binary
numbers (+15) x (-13) are multiplied:
(b) Give the flowchart for multiplication of two floating point numbers. 7
Ans Flowchart for multiplication of two floating point numbers:
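The steps the flowchart captures (check for a zero operand, add the exponents, multiply the mantissas, normalize) can be sketched in Python using a simplified decimal (mantissa, exponent) model, not IEEE 754:

```python
# Sketch of floating-point multiplication on (mantissa, exponent) pairs
# with mantissas normalized into [1, 10). Illustrative model only.

def fp_multiply(a, b):
    (ma, ea), (mb, eb) = a, b
    if ma == 0 or mb == 0:        # Step 1: a zero operand gives a zero product
        return (0.0, 0)
    e = ea + eb                   # Step 2: add the exponents
    m = ma * mb                   # Step 3: multiply the mantissas
    while abs(m) >= 10.0:         # Step 4: normalize (product is in [1, 100))
        m /= 10.0
        e += 1
    return m, e

# 1.5e2 * 2.0e1 = 3000 = 3.0e3
print(fp_multiply((1.5, 2), (2.0, 1)))  # (3.0, 3)
```

Unlike addition, no exponent alignment is needed: the exponents simply add, which is why the multiplication flowchart is the shorter of the two.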
