Lecture 1-2 Computer Organization and Architecture


Computer Organization and Architecture

CSEN 2202
Module 1
Lecture 1 – 2
15/01/2020

Dr. Debranjan Sarkar


Stored Program Computer (Von Neumann concept)

• Early computers were
  • Mostly not reprogrammable
  • Executed a single hardwired program
  • No program instructions -> no program storage
• Some computers were programmable
  • But stored their programs on punched tapes
  • These were physically fed into the machine as and when needed
• In the late 1940s, John von Neumann proposed the concept of storing instructions in computer memory
• This enabled a computer to perform a variety of tasks in sequence or intermittently
Von Neumann concept
• A program may be electronically stored in binary-number format in a
memory device
• Instructions may then be modified by the computer itself
• A computer with a von Neumann architecture stores program and
data in the same memory
• ‘Stored-program computer’ is sometimes used as a synonym for the von Neumann
architecture, or the IAS computer (as it was first developed at the Institute
for Advanced Study)
• The von Neumann architecture is also known as the Princeton architecture
Von Neumann concept
• Stored program concept
• Both data and instructions are stored in a single read-write memory
• Arithmetic and Logic Unit (ALU) is capable of operating on binary data
• The contents of the memory are addressable by location
• The computer executes the program in sequential fashion from one
instruction to the next, unless explicitly modified.
• A program can modify itself while the computer executes it (see the sketch below)
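The stored-program idea can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical machine (the two-field instruction set and the memory layout are invented for illustration, not taken from the lecture): instructions and data sit in the same memory, the CPU fetches them sequentially, and a STORE writes its result back into that same memory.

# Minimal stored-program machine sketch (hypothetical instruction set).
# Instructions and data live in the SAME memory list, as in a von Neumann machine.
memory = [
    ("LOAD", 8),     # addr 0: acc <- memory[8]
    ("ADD", 9),      # addr 1: acc <- acc + memory[9]
    ("STORE", 10),   # addr 2: memory[10] <- acc
    ("HALT", None),  # addr 3: stop
    None, None, None, None,
    5,               # addr 8: data
    7,               # addr 9: data
    0,               # addr 10: result goes here
]

pc, acc = 0, 0                # program counter and accumulator
while True:
    op, addr = memory[pc]     # fetch: the instruction itself comes from memory
    pc += 1                   # sequential execution unless something changes pc
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc    # data written back into the same memory
    elif op == "HALT":
        break

print(memory[10])             # prints 12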
General structure of von Neumann Architecture
What is von Neumann bottleneck?
• Von Neumann architecture requires memory access for instruction
fetch and for data movement (from and to memory)
• Memory access is very slow compared to the speed of the CPU
• So the CPU has to wait for a long time to obtain an instruction or data from memory
• This greatly degrades the performance of the Von Neumann computer
• Also, an instruction fetch and a data operation cannot occur at the
same time because they share a common bus. This often limits the
performance of the system
• The degradation of performance due to CPU-memory speed disparity
and due to sharing the same bus for instruction fetch and data
read/write is referred to as Von Neumann bottleneck
How can the von Neumann bottleneck be overcome?
• The performance is improved by using a special type of faster memory (called
  cache memory) between the CPU and the main memory. The access time of the
  cache memory is of the order of the speed of the CPU, and hence there is
  almost no waiting time by the CPU for the required data (see the sketch after
  this list)
• By using separate instruction cache and data cache
• By moving some data into cache before it is requested (pre-fetching) to speed
  access in the event of a request
• By using RISC (Reduced Instruction Set Computer) architecture to limit access
  to main memory to a few load and store instructions. Other instructions have
  their operands in CPU registers (not in memory)
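As a rough illustration of the first remedy, the Python sketch below puts a tiny direct-mapped cache in front of a pretend main memory; the sizes, mapping, and access pattern are made-up assumptions for illustration, not figures from the lecture.

# Illustrative direct-mapped cache sketch (made-up sizes; latencies not modelled).
MAIN_MEMORY = list(range(256))      # pretend main memory of 256 words
NUM_LINES = 8                       # tiny cache: 8 one-word lines
cache = {}                          # line index -> (tag, value)
hits = misses = 0

def read(addr):
    """Return the word at addr, going to main memory only on a cache miss."""
    global hits, misses
    line, tag = addr % NUM_LINES, addr // NUM_LINES
    if line in cache and cache[line][0] == tag:
        hits += 1                   # fast path: word already in the cache
        return cache[line][1]
    misses += 1                     # slow path: fetch from main memory
    value = MAIN_MEMORY[addr]
    cache[line] = (tag, value)      # keep a copy for later reuse
    return value

# A loop that re-reads the same few addresses hits the cache almost every time.
for _ in range(100):
    for addr in (0, 1, 2, 3):
        read(addr)
print(hits, misses)                 # prints 396 4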
Difference between
Von Neumann architecture and Harvard architecture
• The von Neumann architecture is a stored program concept
• It consists of a single memory for both program and data storage
• The system performance degrades as program and data cannot be fetched in one
cycle
• Example: EDVAC (Electronic Discrete Variable Automatic Computer)
whereas
• Harvard architecture is also a stored program concept
• It has separate program and data memories
• Data memory and program memory can be of different width, type, etc.
• Program and data can be fetched in one cycle, by using separate control signals:
  ‘program memory read’ and ‘data memory read’ (see the sketch below)
• Example: Harvard Mark 1 computer
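For contrast with the earlier stored-program sketch, the fragment below (same invented instruction set) keeps instructions and data in two separate memories, as in a Harvard machine, so an instruction fetch and a data access go to different storage.

# Harvard-style sketch: separate instruction and data memories (hypothetical ISA).
program = [                  # instruction memory ('program memory read')
    ("LOAD", 0),
    ("ADD", 1),
    ("STORE", 2),
    ("HALT", None),
]
data = [5, 7, 0]             # data memory ('data memory read'/'write')

pc, acc = 0, 0
while True:
    op, addr = program[pc]   # instruction fetch uses the program memory only
    pc += 1
    if op == "LOAD":
        acc = data[addr]     # data access uses the separate data memory
    elif op == "ADD":
        acc += data[addr]
    elif op == "STORE":
        data[addr] = acc
    elif op == "HALT":
        break

print(data[2])               # prints 12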
[Flowchart: A Simple Instruction Cycle. Start → Any instruction to be executed? → Fetch next instruction → Decode instruction → Operand to be fetched? (Yes: fetch operand) → Execute instruction → Interrupt to be processed? (Yes: transfer control to the Interrupt Service Routine (ISR)) → More? (Yes: fetch next instruction; No: wait for next program)]
Instruction Execution Mechanism
• A program is a set of instructions stored in memory
• The program is executed in the computer by going through a cycle for
each instruction
• After the program is loaded onto the memory, the CPU fetches the
first instruction
• Then the instruction is decoded to understand what actions the
instruction dictates
• If required, it fetches the operand from the memory
• Then the CPU carries out those actions, i.e., executes the instruction
Contd….
Instruction Execution Mechanism
• If no interrupt is pending to be serviced, control is transferred to the next instruction
• In case some interrupt is pending to be serviced, the CPU transfers
control to the Interrupt Service Routine (ISR)
• After execution of the ISR, control returns to the next instruction (the point
  from which the ISR was entered)
• This cycle is repeated continuously by a computer's CPU, from boot up
to shut down.
• The fetch–decode–execute cycle (also known as the instruction cycle) is the
  basic operational process of a computer (see the sketch below)
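The cycle described above can be sketched as a loop; the interrupt queue and the ISR below are illustrative assumptions for this sketch, not a description of real interrupt hardware.

# Sketch of the fetch-decode-execute cycle with a simple interrupt check.
from collections import deque

program = [("PRINT", "A"), ("PRINT", "B"), ("HALT", None)]
interrupts = deque(["timer"])        # pending interrupts (pretend a timer fired)

def isr(source):
    """Illustrative interrupt service routine."""
    print(f"servicing {source} interrupt")

pc = 0
while pc < len(program):
    op, arg = program[pc]            # fetch the next instruction
    pc += 1                          # remember where to resume afterwards
    if op == "PRINT":                # decode and execute
        print(arg)
    elif op == "HALT":
        break
    if interrupts:                   # interrupt pending? transfer control to the ISR,
        isr(interrupts.popleft())    # then resume at the saved pc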
Thank you
