Computer Architecture and Organization Learning Module 1
Computer Architecture and Organization
This is a property of
PRESIDENT RAMON MAGSAYSAY STATE UNIVERSITY
NOT FOR SALE
CpE 413 – Computer Architecture and Organization
First Edition, 2022
Copyright. Republic Act 8293 Section 176 provides that “No copyright shall subsist in any work of
the Government of the Philippines. However, prior approval of the government agency or office
wherein the work is created shall be necessary for exploitation of such work for profit. Such agency or
office may, among other things, impose as a condition the payment of royalties.”
Borrowed materials included in this module are owned by their respective copyright holders. Every
effort has been exerted to reach and seek permission to use these materials from their respective
copyright owners. The University and authors do not claim ownership over them.
Author: Dionisio M. Martin Jr.
Assigned Chapters:
Chapter 1: History and Overview of Computer Architecture
Chapter 2: Instruction Set Architecture
Chapter 3: Measuring Performance
Chapter 4: Computer Arithmetic
Chapter 5: Processor Organization
Chapter 6: Memory System Organization and Architectures
Chapter 7: Input/Output Interfacing and Communication
Chapter 8: Peripheral Subsystems
Chapter 9: Multi/Many-Core Architectures
Chapter 10: Distributed System Architectures
Evaluators:
This course of study is also intended to discuss measuring performance, processor
organization, memory organization, I/O interfacing and communication, multi/many-core
architectures, and distributed system architectures.
At the end of the semester, 85% of the students will have attained a 90% level of understanding
of and awareness in computer engineering, locally and globally.
1. Understand the basic concepts of computer architecture and organization.
2. Apply the principles of computer architecture in a digital computer design through
development of Simple-As-Possible-1 system.
3. Understand the hierarchical level of computer architecture specifically on the SAP
family design.
Course Details:
• Course Code: CpE 413
• Course Title: Computer Architecture and Organization
• No. of Units: 4-unit lecture
• Classification: Lecture-Laboratory-based
• Pre-requisite / Co-Requisite: CpE 313
• Semester and Academic Year: 1st Semester, AY 2022-2023
• Schedule: BSCpE 2A – Tuesday and Thursday, 7:30 AM-9:00 AM
• Name of Faculty: Dionisio M. Martin Jr.
• Contact Details
Email: dmmartinjr@prmsu.edu.com
Mobile Number: 0939-906-0585
FB Account: Dionisio Martin Jr.
• Consultation
Day: MWF
Time: 2:00-3:00PM
Students will be assessed on a regular basis through quizzes, assignments, and individual/group
outputs using synchronous and/or asynchronous modalities, or through submission of SLM exercises.
Rubrics are also provided for the evaluation of individual/group outputs.
Major examinations will be given as scheduled. The scope and coverage of the examination
will be based on the lessons/topics as plotted in the course syllabus.
Module Overview
Introduction
This module is intended for 4th year Computer Engineering students whose concern is the
study of computer architecture and organization. The chapters include a discussion of the
evolution of computer architecture and of the effects of the instruction set on the organization
that processes an operation. The module also discusses the design performance of a computer
system and processor organization in relation to the architecture of the system.
Memory/I/O addressing and interfacing are integrated into the design process, as well as
peripheral identification and connectivity for the computer system.
Design studies are also included in the later part of the course outline to build an understanding
of the design of the SAP-1 computer system, taking a practical approach to the development of
that system using basic gates and programming.
Table of Contents
Chapter 1
In describing computers, a distinction is often made between computer architecture and computer
organization. Although it is difficult to give precise definitions for these terms, a consensus exists about
the general areas covered by each, and about the nature and characteristics of modern-day computers.
The evolution of computers has been characterized by increasing processor speed, decreasing
component size, increasing memory size, and increasing I/O capacity and speed. A critical issue in
computer system design is balancing the performance of the various elements so that gains in
performance in one area are not handicapped by a lag in other areas. In particular, processor speed has
increased more rapidly than memory access time. A variety of techniques is used to compensate for this
mismatch, including caches, wider data paths from memory to processor, and more intelligent memory
chips.
Specific Objectives
Duration
_____________________________________________
Computer Organization – refers to the operational units and their interconnections that realize the
architectural specifications. Examples are things that are transparent to the programmer:
- control signals
- interfaces between computer and peripherals
- the memory technology being used
[Figure: Functional view of a computer — the operating environment (source and destination of data) connects through a data movement apparatus; a control mechanism directs a data storage facility and a data processing facility.]
System Bus – consists of a number of conducting wires to which all the other components attach.
Computer Evolution
I. The First Generation: Vacuum Tubes
Computers developed between 1946 and 1959 are the first generation of computers. They were large and
limited to basic calculations. They were built from large devices like vacuum tubes. The input method of
these computers was a machine language known as 1GL, or first-generation language. Physical media
such as punch cards, paper tape, and magnetic tape were used to enter data into these
computers.
Examples of the first generation computers include ENIAC, EDVAC, UNIVAC, IAS, IBM-701, and
IBM-650. These computers were large and very unreliable. They would heat up and frequently shut down
and could only be used for very basic computations.
ENIAC (Electronic Numerical Integrator And Computer) – was designed and constructed at the
University of Pennsylvania and the world’s first general-purpose electronic digital computer.
– was a project undertaken in response to U.S. needs during World War II, when the Army’s
Ballistics Research Laboratory (BRL), an agency responsible for developing range and trajectory
tables for new weapons, was having difficulty supplying these tables accurately and within a
reasonable time frame.
– was proposed by John Mauchly, a professor of electrical engineering at the University of
Pennsylvania, and John Eckert, one of his graduate students, using vacuum tubes for the BRL’s
application in 1943 and completed in 1946.
– was enormous, weighing 30 tons, occupying 1500 square feet of floor space, and containing
more than 18,000 vacuum tubes.
– consumed 140 kilowatts of power when operating.
– substantially faster than any electromechanical computer, capable of 5,000 additions per
second.
– was a decimal rather than a binary machine: numbers were represented in decimal
form, and arithmetic was performed in the decimal system.
– had a memory consisting of 20 accumulators, each capable of holding a 10-digit decimal
number.
– had the major drawback that it had to be programmed manually by setting switches and
plugging and unplugging cables.
– was too late to be used in the war effort, instead, its first task was to perform a series of
complex calculations that were used to help determine the feasibility of the hydrogen bomb.
– continued to operate under BRL management until 1955, when it was disassembled.
EDVAC (Electronic Discrete Variable Automatic Computer) – was introduced in 1945 with a
stored-program concept as proposed by the mathematician John von Neumann,
who was also a consultant on the ENIAC project.
– implemented the idea of storing instruction codes as well as data in an electrically alterable
memory.
IAS computer – is the new stored-program computer whose design was begun in 1946 by von
Neumann and his colleagues at the Princeton Institute for Advanced Studies.
– was not completed until 1952 and became the prototype of all subsequent general-purpose
computers.
– has general structure consists of:
• A main memory, which stores both data and instructions
• An arithmetic and logic unit (ALU) capable of operating on binary data
• A control unit, which interprets the instructions in memory and causes them to be
executed
• Input and output (I/O) equipment operated by the control unit
Registers – are storage locations both on the control unit and the ALU and defined as follows:
• Memory buffer register (MBR): Contains a word to be stored in memory or sent to the I/O
unit, or is used to receive a word from memory or from the I/O unit.
• Memory address register (MAR): Specifies the address in memory of the word to be
written from or read into the MBR.
• Instruction register (IR): Contains the 8-bit opcode instruction being executed.
• Instruction buffer register (IBR): Employed to hold temporarily the right-hand instruction
from a word in memory.
• Program counter (PC): Contains the address of the next instruction-pair to be fetched
from memory.
• Accumulator (AC) and multiplier quotient (MQ): Employed to hold temporarily operands
and results of ALU operations. The most significant 40 bits are stored in the AC and the
least significant in the MQ.
Instruction Cycle – is the processing performed by a CPU to execute a single instruction. Each
instruction cycle consists of two sub-cycles:
a. Fetch Cycle – is a portion of the instruction cycle during which the CPU fetches from
memory the instruction to be executed. The opcode of the next instruction is loaded into
the IR and the address portion is loaded into the MAR. This instruction may be taken
from the IBR, or it can be obtained from memory by loading a word into the MBR, and
then down to the IBR, IR, and MAR.
b. Execute Cycle – is a portion of the instruction cycle during which the CPU performs the
operation specified by the instruction opcode. The control circuitry interprets the opcode
and executes the instruction by sending out the appropriate control signals to cause data to
be moved or an operation to be performed by the ALU. And these can be grouped as
follows:
• Data transfer: Move data between memory and ALU registers or between two
ALU registers.
• Unconditional branch: Normally, the control unit executes instructions in
sequence from memory. This sequence can be changed by a branch instruction,
which facilitates repetitive operations.
• Conditional branch: The branch can be made dependent on a condition, thus
allowing decision points.
• Arithmetic: Operations performed by the ALU.
• Address modify: Permits addresses to be computed in the ALU and then inserted
into instructions stored in memory. This allows a program considerable
addressing flexibility.
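The fetch and execute sub-cycles described above can be sketched as a minimal simulator. This is an illustrative model only: the opcode names, the memory contents, and the simplified register set below are invented for the example and are not the actual IAS instruction set.

```python
# Minimal sketch of a fetch-execute loop in the spirit of the IAS design.
# The opcodes and memory contents here are invented for illustration only.

memory = {0: ("LOAD", 10),   # AC <- M(10)
          1: ("ADD", 11),    # AC <- AC + M(11)
          2: ("STORE", 12),  # M(12) <- AC
          3: ("HALT", 0),
          10: 7, 11: 5, 12: 0}

pc = 0   # program counter: address of the next instruction
ac = 0   # accumulator

while True:
    # Fetch cycle: the opcode goes to the IR, the address portion to the MAR.
    ir, mar = memory[pc]
    pc += 1
    # Execute cycle: control logic interprets the opcode and moves data
    # or invokes the ALU accordingly.
    if ir == "LOAD":
        ac = memory[mar]        # data transfer: memory -> AC
    elif ir == "ADD":
        ac = ac + memory[mar]   # arithmetic performed by the ALU
    elif ir == "STORE":
        memory[mar] = ac        # data transfer: AC -> memory
    elif ir == "HALT":
        break

print(ac)          # 12
print(memory[12])  # 12
```

Each pass through the loop is one instruction cycle: a fetch sub-cycle followed by an execute sub-cycle, exactly the two-phase pattern described above.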
UNIVAC I (Universal Automatic Computer) – was manufactured by the Eckert-Mauchly Computer
Corporation (formed in 1947), which later became part of the UNIVAC division of Sperry-Rand
Corporation; that division went on to build a series of successor machines.
– could perform matrix algebraic computations, statistical problems, premium billings for a
life insurance company, and logistical problems.
UNIVAC II – was delivered in the late 1950s with greater memory capacity and higher performance
than the UNIVAC I.
– illustrates several trends that have remained characteristic of the computer industry today:
1. Advances in technology allow companies to continue to build larger, more powerful
computers.
2. Each company tries to make its new machines backward compatible with the older
machines. This means that the programs written for the older machines can be
executed on the new machine.
UNIVAC 1100 series – is a line of computers whose development was begun by the UNIVAC
division, and which was to be its major source of revenue.
– had as its first model the UNIVAC 1103; it and its successors were primarily intended for
scientific applications involving long and complex calculations.
IBM – was then the major manufacturer of punched-card processing equipment; it delivered its first
electronic stored-program computer, the 701, in 1953, intended primarily for scientific applications.
– introduced the companion 702 product in 1955, which had a number of hardware features
that suited it to business applications.
– went on to establish the long 700/7000 series of computers and to become the
overwhelmingly dominant computer manufacturer.
PDP-1 – was delivered by Digital Equipment Corporation (DEC) in 1957 as its first manufactured
computer.
– was remarkable for DEC in the second generation because it began the minicomputer
phenomenon that would become so prominent in the third generation.
IBM 7094 – includes an Instruction Backup Register, used to buffer the next instruction.
– is included in the following IBM 700/7000 series:
Model | First Delivery | CPU Technology | Memory Technology | Cycle Time (μs) | Memory Size (K) | Number of Opcodes | Number of Index Registers | Hardwired Floating-Point | I/O Overlap (Channels) | Instruction Fetch Overlap | Speed (relative to 701)
701 | 1952 | Vacuum tubes | Electrostatic tubes | 30 | 2–4 | 24 | 0 | no | no | no | 1
704 | 1955 | Vacuum tubes | Core | 12 | 4–32 | 80 | 3 | yes | no | no | 2.5
709 | 1958 | Vacuum tubes | Core | 12 | 32 | 140 | 3 | yes | yes | no | 4
7090 | 1960 | Transistor | Core | 2.18 | 32 | 169 | 3 | yes | yes | no | 25
7094 I | 1962 | Transistor | Core | 2 | 32 | 185 | 7 | yes (double precision) | yes | yes | 30
7094 II | 1964 | Transistor | Core | 1.4 | 32 | 185 | 7 | yes (double precision) | yes | yes | 50
Data channel – is an independent I/O module with its own processor and its own instruction set. The
CPU initiates an I/O transfer by sending a control signal to the data channel, instructing it to
execute a sequence of instructions in memory. The data channel performs its task independently
of the CPU and signals the CPU when the operation is complete. This arrangement relieves the
CPU of a considerable processing burden.
Multiplexor – is the central termination point for data channels, the CPU, and memory.
– schedules access to the memory from the CPU and data channels, allowing these devices to
act independently.
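The division of labor between the CPU and a data channel can be loosely modeled with a worker thread that performs a transfer and then signals completion. This is a software analogy, not real hardware; all the names in this sketch are invented for illustration.

```python
# Sketch: a "data channel" runs independently of the "CPU" and signals
# completion, freeing the CPU from supervising the transfer word by word.
import threading

done = threading.Event()

def data_channel(buffer, data):
    # The channel executes its own transfer sequence independently:
    # here it simply copies words into memory, then signals completion.
    for word in data:
        buffer.append(word)
    done.set()  # completion signal back to the CPU

buffer = []
t = threading.Thread(target=data_channel, args=(buffer, [1, 2, 3]))
t.start()  # the CPU initiates the I/O transfer...

# ...and is free to do other work while the channel runs.
busy_work = sum(range(1000))

done.wait()  # the CPU notices the completion signal
t.join()
print(buffer)  # [1, 2, 3]
```

The key point mirrored here is that the CPU only starts the transfer and reacts to its completion; the word-by-word work happens elsewhere.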
The steady growth in the number of transistors that could be put on a single chip reflects the
famous Moore’s law, which was propounded by Gordon Moore, cofounder of Intel, in 1965. The
consequences of Moore’s law are profound:
1. The cost of a chip has remained virtually unchanged during this period of rapid growth in
density. This means that the cost of computer logic and memory circuitry has fallen at a
dramatic rate.
2. Because logic and memory elements are placed closer together on more densely packed chips,
the electrical path length is shortened, increasing operating speed.
3. The computer becomes smaller, making it more convenient to place in a variety of
environments.
4. There is a reduction in power and cooling requirements.
5. The interconnections on the integrated circuit are much more reliable than solder connections.
With more circuitry on each chip, there are fewer interchip connections.
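Moore's observation can be expressed as a simple doubling formula. The 2,300-transistor count for the 1971 Intel 4004 is a widely cited baseline, but the doubling period used below (two years) is an assumption of this model, and real chip counts only roughly follow the curve.

```python
# Simple model of Moore's law: transistor count doubles every fixed period.
# The doubling period is an assumed parameter, not a physical constant.

def projected_transistors(year, base_year=1971, base_count=2300,
                          doubling_years=2):
    """Project a transistor count by doubling every `doubling_years` years."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

print(round(projected_transistors(1971)))  # 2300 (the baseline itself)
print(round(projected_transistors(1991)))  # 2355200 (10 doublings = 1024x)
```

Twenty years at a two-year doubling period gives ten doublings, i.e. a factor of 1,024 over the baseline, which conveys how quickly exponential density growth compounds.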
Mainframe – refers to the largest, most powerful computers other than supercomputers. Typical
characteristics are:
a. supports a large database
b. elaborate I/O hardware
c. used as a central data processing facility
The System/360 indicates some of the key characteristics of the various models in 1965 (each member
of the family is distinguished by a model number) as follows:
Microprocessor – is a component that performs the instructions and tasks involved in computer
processing.
– is the central unit that executes and manages the logical instructions passed to it.
– may also be called a processor or central processing unit, but it is actually more advanced in
terms of architectural design and is built on a silicon microchip.
– is the most important unit within a computer system and is responsible for processing the
unique set of instructions and processes.
– is designed to execute logical and computational tasks with typical operations such as
addition/subtraction, interprocess and device communication, input/output management, etc.
– is composed of integrated circuits that hold thousands of transistors; exactly how many
depends on its relative computing power.
– are generally classified according to the number of instructions they can process within a
given time, their clock speed measured in megahertz and the number of bits used per instruction.
– was born when Intel developed the 4004 in 1971, the first chip to contain all of the
components of a CPU on a single chip.
– has data bus width that defines the number of bits of data that can be brought into or sent
out of the processor at a time.
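The effect of data bus width can be illustrated by counting how many bus transfers a value of a given size requires. This is a deliberately simplified model (the function name and the ceiling-division approach are just for illustration); real processors overlap transfers and use caches, so actual costs differ.

```python
# Simplified model: a value wider than the data bus must be moved in pieces.

def transfers_needed(value_bits, bus_width_bits):
    """Number of bus transfers to move a value across a bus of the given width."""
    # Ceiling division: e.g. a 32-bit value on an 8-bit bus needs 4 transfers.
    return -(-value_bits // bus_width_bits)

print(transfers_needed(32, 8))   # 4 transfers on an 8-bit bus
print(transfers_needed(32, 32))  # 1 transfer on a 32-bit bus
```

This is why widening the data bus is one of the techniques, mentioned earlier in the chapter, for compensating for the processor-memory speed mismatch.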
V. Later Generations
This is the present generation of computers and is the most advanced one. The generation began
somewhere around 1981 and is the present generation of computers. The methods of input include the
modern high-level languages like Python, R, C#, Java etc. These are extremely reliable and employ the
ULSI or the Ultra Large Scale Integration technology. These computers are at the frontiers of the modern
scientific calculations and are used to develop the Artificial Intelligence or AI components that will have
the ability to think for themselves.
Examples include: Intel P4, i3–i10, AMD Athlon, etc.
Computer Generations
Generation | Approximate Dates | Technology
1 | 1946-1959 | Vacuum tube
2 | 1959-1965 | Transistor
3 | 1965-1971 | Integrated circuit
4 | 1971-1980 | Very large scale integration
5 | 1981-Onward | Ultra large scale integration
_____________________________________________
References/Additional Resources/Readings
https://witscad.com/course/computer-architecture/chapter/introduction-to-functional-computer
https://witscad.com/course/computer-architecture/chapter/fundamentals-of-architectural-design
Activity Sheet
ACTIVITY 1
Direction: Match the items in column A to their descriptions in column B. Write only the letter of your
choice on the space provided.
A B
_____ 1. 4004 a. John von Neumann
_____ 2. PDP-8 b. ARM
_____ 3. EDVAC c. Intel
_____ 4. System/360 d. IBM
_____ 5. Cortex e. DEC
Direction: Give the complete terms for the following abbreviated words.
1. ENIAC
2. EDVAC
3. IAS
4. UNIVAC
5. IBM
6. DEC
7. CISC
8. RISC
9. CPU
10. PDP
Direction: Place a Check (✓) mark on the corresponding column if the given computer is either in I-
First Generation, II-Second Generation, III-Third Generation, IV-Fourth Generation, or V-Fifth
Generation.
I II III IV V
1. PDP-8
2. EDVAC 1103
3. ENIAC
4. Core 2
5. 7094
6. System/360
7. Altair 8800
8. PDP-11
9. i3
10. UNIVAC 1108
2. What, in general terms, is the distinction between computer structure and computer
function?
8. At the integrated circuit level, what are the three principal constituents of a computer
system?
LEVEL | DESCRIPTION
3 - Fair | Minimal effort. Minimal grammar mechanics. Fair presentation. Few supporting details. Somewhat unclear.
2 - Poor | Shows little effort. Poor grammar mechanics. Confusing and choppy, incomplete sentences. No organization of thoughts.
In what particular portion of this learning packet do you feel that you are struggling or lost?
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________
To further improve this learning packet, what part do you think should be enhanced?
___________________________________________________________________________
___________________________________________________________________________
___________________________________________________________________________
NOTE: This is an essential part of the course module. This must be submitted to the subject
teacher (within the 1st week of the class).