
Chapter One

Introduction

1.1 What is Computer Architecture and Organization?


Computer Architecture
Computer Architecture is the blueprint for the design and implementation of a computer system. It is a functional description of the requirements and design implementation for the various parts of a computer, and it specifies the functional behavior of the computer system. When designing a computer, the architecture is decided before the organization. In short, computer architecture deals with 'What to do?'

Computer Organization
Computer Organization is how the operational parts of a computer system are linked together to realize the architectural specification; it implements the given computer architecture. Computer organization is worked out only after the computer architecture has been decided, and it deals with structural relationships. In short, computer organization deals with 'How to do?'

Following are some of the important differences between Computer Architecture and Computer
Organization.

Sr. No. | Key | Computer Architecture | Computer Organization
1 | Purpose | Computer architecture explains what a computer should do. | Computer organization explains how a computer works.
2 | Target | Computer architecture provides the functional behavior of a computer system. | Computer organization provides the structural relationships between parts of a computer system.
3 | Design | Computer architecture deals with high-level design. | Computer organization deals with low-level design.
4 | Actors | The actors in computer architecture are hardware parts. | The actor in computer organization is performance.
5 | Order | Computer architecture is designed first. | Computer organization is started after finalizing the computer architecture.

Computer System Architecture


A computer system is basically a machine that simplifies complicated tasks. It should
maximize performance and reduce costs as well as power consumption. The different
components in the Computer System Architecture are the Input Unit, Output Unit, Storage
Unit, Arithmetic Logic Unit, Control Unit etc.

A diagram that shows the flow of data between these units is as follows –

The input data travels from the input unit to the ALU. Similarly, the computed data travels
from the ALU to the output unit. Data constantly moves from the storage unit to the ALU
and back again, because stored data is operated on before being stored again. The control
unit controls all the other units as well as the data flowing between them.
A computer organization describes the functions and design of the various units of a
digital system. A general-purpose computer system is the best-known example of a
digital system. Other examples include telephone switching exchanges, digital
voltmeters, digital counters, electronic calculators and digital displays.

Computer architecture deals with the specification of the instruction set and the
hardware units that implement the instructions. Computer hardware consists of
electronic circuits, displays, magnetic and optic storage media and also the
communication facilities.

Functional units are the parts of the CPU (Central Processing Unit) that perform the
operations and calculations called for by the computer program. A computer consists of
five main components, namely the Input unit, the Memory unit, the Arithmetic & Logic
unit, the Control unit, and the Output unit; the Arithmetic & Logic unit and the Control
unit together make up the CPU.

Details about all the computer units are –

❖ Input Unit
The input unit provides data to the computer system from the outside. So, basically it
links the external environment with the computer. It takes data from the input devices,
converts it into machine language and then loads it into the computer system. Keyboard,
mouse etc. are the most commonly used input devices.

Input units are used by the computer to read in data. The most commonly used input
devices are keyboards, mice, joysticks, trackballs, microphones, etc.

However, the most well-known input device is a keyboard. Whenever a key is pressed,
the corresponding letter or digit is automatically translated into its corresponding binary
code and transmitted over a cable to either the memory or the processor.

❖ Output Unit
The output unit provides the results of the computer's processing to the users, i.e., it links
the computer with the external environment. Most of the output data is in the form of audio
or video. The different output devices are monitors, printers, speakers, headphones etc.
o The primary function of the output unit is to send the processed results to the
user. Output devices display information in a way that the user can understand.
o Output devices are pieces of equipment that are used to generate information or
any other response processed by the computer. These devices display
information that has been held or generated within a computer.
o The most common example of an output device is a monitor.

❖ Storage Unit
Storage unit contains many computer components that are used to store data. It is
traditionally divided into primary storage and secondary storage. Primary storage is also
known as the main memory and is the memory directly accessible by the CPU. Secondary
or external storage is not directly accessible by the CPU. The data from secondary storage
needs to be brought into the primary storage before the CPU can use it. Secondary storage
contains a large amount of data permanently.

o The Memory unit can be referred to as the storage area in which running programs
are kept, together with the data those programs need.
o The Memory unit can be categorized in two ways, namely primary memory and
secondary memory.
o It enables the processor to access running applications and services that are
temporarily stored in specific memory locations.
o Primary storage is the fastest memory and operates at electronic speeds. Primary
memory contains a large number of semiconductor storage cells, each capable of
storing a bit of information. The word length of a computer is between 16 and 64 bits.
o It is also known as volatile memory, meaning that when the computer is shut down,
anything contained in RAM is lost.
o Cache memory is a special kind of memory used to fetch data very quickly; it is
tightly coupled with the processor.
o The most common examples of primary memory are RAM and ROM.
o Secondary memory is used when a large amount of data and programs have to
be stored on a long-term basis.
o It is also known as non-volatile memory, meaning that the data is stored
permanently, irrespective of shutdown.
o The most common examples of secondary memory are magnetic disks, magnetic
tapes, and optical disks.

Central Processing Unit (CPU)

The central processing unit, commonly known as the CPU, is the electronic circuitry
within a computer that carries out the instructions given by a computer program
by performing the basic arithmetic, logical, control and input/output (I/O) operations
specified by the instructions.

❖ Arithmetic Logic Unit


All the calculations related to the computer system are performed by the arithmetic logic
unit. It can perform operations like addition, subtraction, multiplication, division etc. The
control unit transfers data from storage unit to arithmetic logic unit when calculations
need to be performed. The arithmetic logic unit and the control unit together form the
central processing unit.

o Almost all of the arithmetic and logical operations of a computer are executed in
the ALU (Arithmetic and Logic Unit) of the processor. It performs arithmetic
operations like addition, subtraction, multiplication and division, and also logical
operations like AND, OR and NOT.

❖ Control Unit
This unit controls all the other units of the computer system and so is known as its central
nervous system. It transfers data throughout the computer as required including from
storage unit to central processing unit and vice versa. The control unit also dictates how
the memory, input output devices, arithmetic logic unit etc. should behave.

o The control unit is a component of a computer's central processing unit that
coordinates the operation of the processor. It tells the computer's memory,
arithmetic/logic unit and input and output devices how to respond to a program's
instructions.
o The control unit is also known as the nerve center of a computer system.
o Let us consider an example: the addition of two operands by the instruction
Add LOCA, R0. This instruction adds the contents of memory location LOCA to the
operand in register R0 and places the sum in register R0. Internally, the instruction
performs several steps, as the sketch below illustrates.
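
The following minimal C sketch models those internal steps. The memory array, the address LOCA and the register R0 are illustrative assumptions made for this example, not details taken from a real processor.

#include <stdio.h>

#define LOCA 0x10   /* hypothetical memory address, chosen for illustration */

int main(void) {
    int MEM[32] = {0};  /* toy main memory */
    int R0 = 5;         /* toy processor register */
    MEM[LOCA] = 7;

    /* 1. Fetch: the instruction Add LOCA, R0 is read from memory.          */
    /* 2. Decode: the control unit recognizes an Add operation.             */
    int operand = MEM[LOCA];  /* 3. Operand fetch: read the value at LOCA.  */
    R0 = R0 + operand;        /* 4. Execute: the ALU adds it to R0.         */
                              /* 5. Write back: the sum is left in R0.      */

    printf("R0 = %d\n", R0);  /* prints R0 = 12 */
    return 0;
}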

What is a Microprocessor? – Types, Applications, Evolution


Microprocessor Definition – What do you know about the microprocessor? This section
explains it in detail. The microprocessor is a hardware component of the computer, and it
works as the brain of the computer system; without a microprocessor, a computer is little
more than a plastic box. The microprocessor is shaped as a small chip made of silicon, and
it is responsible for all the functions of the central processing unit.

The microprocessor acts as the control unit of the computer because it manages all the


Arithmetic Logic Unit (ALU) operations. The microprocessor is capable of executing
other operations as well, such as computational activities like addition/subtraction,
internal processing, communication with device terminals, and I/O management.

Block Diagram of Microprocessor

Users send input to the microprocessor with the help of different types of computer
input devices such as a mouse, keyboard, touchpad or touch screen.

The microprocessor carries out all calculations, such as addition and subtraction, using
the ALU, the control unit, and the register array. After executing the instructions it stores
the results in the memory area, and finally sends the output for display on output devices
such as the computer monitor.

Evolution of Microprocessor with its History

History of the Microprocessor: The first microprocessor was developed by Intel in 1971,
and its name was the Intel 4004. The Intel 4004 was a 4-bit processor; because of this it
was not very popular. The Intel 4004 was able to perform only addition/subtraction
operations on 4 bits at a time.

Intel announced the new Intel 8080 in 1974 for personal computers. It was an 8-bit
processor.

In 1976, Intel designed the 8085 processor, but it was not a new invention because the
8085 microprocessor was an updated version of the 8080 microprocessor. The 8085
microprocessor added two Enable/Disable instructions, 3 interrupt pins and serial I/O pins.

In 1978, Intel announced the new 8086 microprocessor. The 8086 microprocessor
improved on the 8085 because it was a 16-bit processor.

None of the microprocessors mentioned above supported floating-point instructions.
Floating point refers to numbers with a fractional part, such as 456.23.

Later, Intel designed the new 8087 microprocessor, which was the first math co-processor,
and this processor was embedded into the IBM PC.

Through the continued efforts of microprocessor companies, other new processors came
to market, such as the 8088, 80286, 80386, 80486, Pentium II, Pentium III, Pentium IV and
now the Core 2 Duo, Dual Core and Quad Core processors.

Evolution of Microprocessor

Here we briefly discuss the evolution of the microprocessor. Microprocessor evolution
is divided into five generations, each described below.
First Generation:
The first generation of microprocessors was introduced by Intel in 1971–1972. These
microprocessors processed all instructions serially. They completed their processing cycle
in three steps: fetch, decode and then execute. When the microprocessor completed a cycle
it updated the instruction pointer, and these operations were performed consecutively for
every instruction.

Examples are: Intel 4004, Rockwell PPS-4, Intel 8008

Second Generation:

Second-generation microprocessors were designed in 1973–1978 as 8-bit processors.
These microprocessors used many more transistors on the integrated circuit. The three
steps for processing instructions, fetch, decode and execute, were overlapped. Because of
this, second-generation microprocessors showed roughly a fivefold improvement over the
first generation in areas such as instruction speed, execution and chip density.

Examples are: Motorola MC68000, Intel 8080, Intel 8085, Motorola 6800 and 6801

Third Generation:

Third-generation microprocessors were developed from 1978 onward as 16-bit processors.
These processors were widely used in minicomputers. Third-generation microprocessors
used HMOS technology, and some implemented RISC-based architectures.

Examples are: MC68020, Intel 8086, Zilog Z8000

Fourth Generation:

Fourth-generation microprocessors were introduced from 1981 to 1995 as 32-bit
processors. These microprocessors were designed with around a million transistors and
were also based on HMOS technology. They were able to process a couple of instructions
per clock cycle.

Examples are: Motorola 88100, Intel 80960CA, Intel 80386, Motorola 68020
Fifth Generation:

Fifth-generation microprocessors arrived from 1995 onward as 64-bit processors. These
microprocessors embed around 10 million transistors. Because of this, PCs became a
low-margin, high-volume business dominated by a single microprocessor.

Examples are: Pentium, Celeron, and dual-, quad- and octa-core processors

Types of Microprocessor with Examples

There are numerous different types of microprocessor those are used in the computer and
their examples as well.

• RISC • SIMD
• CISC • Symbolic Processors
• Superscalar Microprocessor • Bit-Slice Processors
• ASIC • Transputers
• DSP • Graphic Processors
RISC (Reduced Instruction Set Computer):

RISC stands for "Reduced Instruction Set Computer". The main objective of RISC design
is to decrease execution time by simplifying the computer's instructions: RISC aims to use
just one clock cycle per instruction, giving uniform execution time. RISC programs need
more RAM to store all their instructions, which reduces code density because more lines
of code are required.

Examples are:

o DEC’s Alpha 21064, 21164 and 21264 processors


o SUN’s SPARC and UltraSPARC
o PowerPC processors 601, 604, and more
o MIPS: TS (R10000) RISC Processor
o PA-RISC: HP 7100LC

CISC (Complex Instruction Set Computer):

CISC stands for "Complex Instruction Set Computer". CISC processors contain a complex
instruction set, and for that reason CISC instructions take multiple clock cycles to execute,
so CISC is slower than RISC. The main goal of CISC is to support activities such as
downloading, uploading, and swapping data between the memory card and other devices
connected to the computer.
Examples are:

o IBM 370/168
o VAX 11/780
o Intel 80386, 80486
o Pentium Pro, Pentium, Pentium II, Pentium III, Pentium 4;
o Motorola’s 68000, 68020, 68030, 68040

Superscalar Microprocessors:

A superscalar microprocessor can execute multiple instructions at the same time because
it contains multiple pipeline structures. Such a microprocessor typically duplicates
functional units such as ALUs and multipliers.

Examples are:

o Pentium, Pentium Pro


o Pentium II, Pentium III

ASIC (Application Specific Integrated Circuit):

ASIC stands for "Application Specific Integrated Circuit". An ASIC microprocessor is not
designed for generic purposes; it is developed for specific applications such as automotive
emissions control or personal digital assistants.

DSP (Digital Signal Processors):

DSP stands for "Digital Signal Processor". A DSP processor helps to encode/decode
streaming video and to convert digital signals to analog signals and analog signals to
digital signals. DSPs have superb power for calculating mathematical instructions.

Examples are:

o Texas Instruments' TMS320C25
o Motorola 56000
o National LM 32900
o Fujitsu MBB 8764

Application areas:

❖ RADAR
❖ Home Theaters
❖ SONAR
❖ Audio gears
❖ TV set top boxes
❖ Mobile phones

SIMD Processors:

SIMD (Single Instruction Multiple Data) processors are also known as "array processors".
The prime objective of the SIMD processor is to perform computations on vectors of data.
SIMD uses many processing elements working in parallel rather than serially. In this
architecture, the processing elements execute a common instruction, and each ALU has
its own local memory for storing the data it computes on.
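
A minimal conceptual sketch in C, assuming four data lanes (a real SIMD machine would perform the four additions below as a single instruction; the loop only models the idea):

#include <stdio.h>

int main(void) {
    int a[4] = {1, 2, 3, 4};
    int b[4] = {10, 20, 30, 40};
    int c[4];

    /* One logical "instruction" (add) applied across four data lanes. */
    for (int i = 0; i < 4; i++)
        c[i] = a[i] + b[i];

    for (int i = 0; i < 4; i++)
        printf("%d ", c[i]);   /* prints: 11 22 33 44 */
    printf("\n");
    return 0;
}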

Symbolic Processors:

Symbolic processors are also known as "LISP processors" or "PROLOG processors".
Symbolic processors were introduced for use in fields such as expert systems, machine
intelligence, artificial intelligence, and pattern recognition. A symbolic processor does not
need floating-point operations.

Bit-Slice Processors:

A bit-slice processor is also known as a building-block processor because microprocessors
of arbitrary word length can be assembled from its building blocks. The building blocks
include 4-bit ALUs, microprogram sequencers, and carry look-ahead generators.

Examples are:

o AMD 2900, 2909, 2910, 29300 series
o Texas Instruments' SN-74AS88XX series

Transputers:

The transputer microprocessor was developed in the 1980s as a special type of processor
for managing component processors; it includes various internal components such as an
FPU, on-chip RAM and serial communication links. The communication links are used
to make connections between transputers.

Examples are:

o INMOS T414
o INMOS T800

Graphic Processors:
Intel introduced a graphics chip named the 740-3D. Using such processors, users can
enjoy high-definition games and movies.

Examples are:

o Intel 82786
o Intel i860
o Intel i750

Microprocessor Function & Working


In this section we answer the question of how a microprocessor works, and list its main
functions; a minimal sketch follows the list.

• It controls all components of the machine and broadcasts the timing signals.
• The microprocessor retrieves instructions from main memory (RAM or ROM).
• The fetched instructions are decoded to decide which operation is to be performed.
• The instructions are then executed, applying arithmetic and logical operations as
required.
• It stores the results of executed programs.
• It communicates with all I/O devices and transfers data between them.
• If extra execution of instructions is needed, the CPU supervises the I/O devices.
• In the end, the outputs produced by executing the instructions are sent to memory
or to an I/O module.
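
The C program below is a minimal sketch of this fetch-decode-execute loop. The two-instruction "machine language" (OP_ADD, OP_HALT) is invented purely for illustration and does not correspond to any real processor.

#include <stdio.h>

enum { OP_HALT = 0, OP_ADD = 1 };   /* invented toy opcodes */

int main(void) {
    /* A tiny "machine code" program: ADD 5, ADD 7, HALT. */
    int program[] = { OP_ADD, 5, OP_ADD, 7, OP_HALT };
    int pc  = 0;    /* program counter: index of the next instruction */
    int acc = 0;    /* accumulator: holds intermediate results        */

    for (;;) {
        int opcode = program[pc++];     /* fetch the next instruction */
        if (opcode == OP_HALT)          /* decode ...                 */
            break;
        if (opcode == OP_ADD)
            acc += program[pc++];       /* ... and execute            */
    }

    printf("acc = %d\n", acc);          /* prints acc = 12 */
    return 0;
}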

Advantages of Microprocessor
There are various features and benefits of the microprocessor, described below.

• The microprocessor helps to perform complex arithmetic and logical instructions.
• The microprocessor has the power to execute 3–4 billion instructions per second;
its speed is measured in hertz.
• The microprocessor can transfer huge amounts of data from one memory location
to another.
• The microprocessor can perform floating-point operations in a few milliseconds.
• The microprocessor is a generic product, meaning it can be used in various
electronic processing devices, pre-programmed to perform a specific task.
• The microprocessor provides the ability to control several pieces of equipment
through time sharing.
• The microprocessor is capable of multiprocessing and parallel processing.
• Easy to modify.
• Low cost.
• Better reliability and versatility.
• The microprocessor can use external controllers to perform larger tasks with extra
capability.
• The microprocessor is flexible to program in nature.
• Microprocessors with a low Thermal Design Power (TDP) enable thinner, lighter,
portable notebook devices.
• Many microprocessors include "Hyper-Threading".

Disadvantages of Microprocessor
The microprocessor has various limitations (cons) alongside its advantages.

• Microprocessors get hot while performing tasks.
• A microprocessor can transfer only small packets of data at a time.
• The microprocessor does not contain any internal physical memory such as RAM
or ROM.
• It should not come into direct contact with external devices, because it generates
considerable heat.
• The microprocessor is entirely based on machine language.
• 3D performance can be degraded.

Overview of Computer Language Hierarchy

Machine Language:

Machine language is the only language that the computer understands and uses to perform
its various operations. Everything from playing video games to watching your favorite
movies is done by the computer using machine language. Machine language is a language
made up entirely of 0s and 1s; every word in it is a pattern of 1s and 0s. This makes it
almost impossible for a programmer to work in machine language, since we can hardly
remember our lines of code as they are. Remembering numbers instead of commands
would be very intricate for the programmer, and debugging a program in machine
language would be a nightmare. So computer scientists came up with an upper layer of
programming called assembly language, which is easier to understand and is translated
into machine language.

Assembly Language:

Assembly language is easier than machine language and lies one layer above it. However,
much like machine language, assembly language works at a low level of programming
and is unique to a particular processor. Unlike machine language, assembly language is
not written entirely in 1s and 0s; instead, it allows programmers to use mnemonic codes
and names as opposed to strings of 1s and 0s. But we know that the computer only
understands machine language, so how is assembly language used to communicate with
the computer? A utility program called an "assembler" translates assembly language into
machine language, the only language the computer understands.

High-Level Language:

High-level languages, or HLLs, are languages designed specifically for the programmer,
with much easier syntax. High-level languages are much easier to figure out because
programs are written in more English-like constructs. However, a high-level language
must be converted to machine language, since the computer only knows 1s and 0s. This
translation is done by a compiler or an interpreter, depending on the language. The
compiler or interpreter takes the source code as input, processes it, and finally produces
an executable file, i.e., a computer-readable file. It is also important to keep in mind that
high-level languages mostly focus on the software development aspect, not on the
hardware aspect.
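
As a small, hedged illustration of this translation, consider the C program below. The x86 assembly shown in the comment is only roughly what a compiler might emit; actual output varies with the compiler and its settings.

#include <stdio.h>

int main(void) {
    int a = 3, b = 4;
    int sum = a + b;      /* one high-level statement; a compiler might
                             translate it into x86 assembly roughly like:
                                 mov eax, [a]
                                 add eax, [b]
                                 mov [sum], eax
                             (illustrative only)                          */
    printf("%d\n", sum);  /* prints 7 */
    return 0;
}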

Assembly Language

What is Assembly Language?

Assembly language is a low-level programming language. It bridges the gap between a
human-readable programming language and machine code. In the computer, an assembler
converts assembly code into executable machine code. Assembly language instructions
are designed to correspond closely to machine-language instructions for further
processing. It depends mainly on the architecture of the system, whether that of the
operating system or the computer architecture.

Assembly language mainly consists of mnemonic processor instructions or data, and
other statements or directives. It can also be produced by compiling high-level language
source code such as C or C++. Assembly language helps in fine-tuning a program.

Since most compilers convert source code directly to machine code, software developers
often create programs without using assembly language. However, in some cases,
assembly code can be used to fine-tune a program. For example, a programmer may write
a specific process in assembly language to make sure it functions as efficiently as possible.
While assembly languages differ between processor architectures, they often include
similar instructions and operators. Below are some examples of instructions supported
by x86 processors.

• MOV - move data from one location to another


• ADD - add two values
• SUB - subtract a value from another value
• PUSH - push data onto a stack
• POP - pop data from a stack
• JMP - jump to another location
• INT - interrupt a process

The following assembly language can be used to add the numbers 3 and 4:

mov eax, 3 - loads 3 into the register "eax"

mov ebx, 4 - loads 4 into the register "ebx"

add eax, ebx - adds "ebx" to "eax", leaving the result (7) in "eax"

Writing assembly language is a tedious process since each operation must be performed
at a very basic level. While it may not be necessary to use assembly code to create a
computer program, learning assembly language is often part of a Computer Science
curriculum since it provides useful insight into the way processors work.

Why is Assembly Language Useful?

Assembly language lets programmers write human-readable code that maps almost
directly onto machine language. Machine language is difficult to understand and read, as
it is just a series of numbers. Assembly language also gives the programmer full control
of what tasks the computer is performing.

Assembly Language and Machine Language

Machine language is the lowest-level programming language. It can only be represented
by 0s and 1s. In the early days, when we had to create pictures or show data on the screen
of the computer, it was very difficult to work using only binary digits (0s and 1s). For
example, to write 120 in the computer system its representation is 1111000
(64 + 32 + 16 + 8 = 120). So machine language is very difficult to learn. To overcome this
problem, assembly language was invented. Assembly language sits above machine
language and below high-level languages (such as C, C++, Java, Python, etc.), so it is an
intermediary language. Assembly languages use names, symbols, and abbreviations
instead of 0s and 1s. For example, for addition, subtraction, and multiplication it uses
symbols like Add, Sub, and Mul.

Difference between Assembly language and Machine language.

Assembly Language | Machine Language
Assembly language is comprehensible to human beings, not directly to computers. | Machine language is only comprehensible to computers.
In assembly language, data can be represented with the help of mnemonics such as Mov, Add, Sub, End, etc. | In machine language, data is represented only in binary format (0s and 1s), hexadecimal or octal.
Assembly language is easy for human beings to understand compared to machine language. | Machine language is very difficult for human beings to understand.
Modifications and error fixing can be done in assembly language. | Modifications and error fixing cannot be done in machine language.
It is easy to memorize assembly language because alphabetic mnemonics are used. | Machine language is very difficult to memorize due to the use of binary format (0s and 1s).
Execution is slow compared to machine language. | Execution is fast in machine language because the data is already in binary format.
An assembler is used as a translator to convert mnemonics into machine-understandable form. | There is no need for a translator; machine language is already in machine-understandable form.
Assembly language is machine-dependent and is not portable. | Machine language is hardware-dependent.

Assembly Language vs. High-Level Languages

1. Assembly language programs written for one processor will not run on another type
of processor. High-level language programs run independently of the processor type.
2. The performance and accuracy of assembly language code are better than those of a
high-level language.
3. High-level languages have to give extra instructions to run code on the computer.
4. Assembly language code is harder to understand and debug than a high-level
language.
5. One or two statements of a high-level language expand into many assembly language
instructions.
6. Assembly language can communicate with the hardware better than a high-level
language; some types of hardware actions can only be performed by assembly
language.
7. In assembly language, we can directly read pointers at a physical address, which is
not possible in a high-level language.
8. Working with bits is easier in assembly language.
9. An assembler is used to translate assembly language code, while a compiler is used
to compile high-level code.
10. The executable code of a high-level language is larger than assembly language code,
so it takes longer to execute.
11. Due to the longer executable code, high-level programs are less efficient than
assembly language programs.
12. A high-level language programmer does not need to know hardware details, such as
the registers in the processor, the way an assembly programmer does.
13. Most high-level language code is first automatically converted into assembly code.
Language Translation:
Language translation is the process of translating a program (source code) written in a
higher-level language into a lower-level language. This translation can occur once or
multiple times depending on the level of the source language. A program called a
translator is responsible for this process. Initially, the translator compiles the program,
meaning it scans for any possible errors or mistakes made by the programmer. If there are
any mistakes, it reports the errors to the developer and stops the translation, since a
program with flaws cannot be executed.
After the program compiles cleanly, the translator translates the program into a
lower-level language; for example, it may convert it to assembly language. Then another
translator, the assembler, translates this assembly language into machine language.
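
As a concrete, hedged illustration using the GNU toolchain (the file names are hypothetical), the stages can be run separately:

gcc -S hello.c - compile: produces the assembly file hello.s

as hello.s -o hello.o - assemble: produces the object file hello.o

gcc hello.o -o hello - link: produces the executable hello
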
Instruction fields:

In machine language, instructions are the basic commands that act as an interface
between hardware and software. Instructions are the commands that tell the computer
exactly what is to be done. There are two primary instruction fields, namely the opcode
and the operand.

• Opcode: The opcode, also known as the operation code, tells the computer which
operation is to be performed. Opcodes are a part of machine language; they are the
set of commands available in the computer's language. There are many types of
opcode that can be used to perform various operations.

For example: MOV A, 03H

Here, "MOV" is an opcode that manipulates the operands. In the example above, A is
an operand to which the hex value 03 is assigned.

• Operand: Operands are the variables or values which are manipulated by the
opcodes. For example: MOV AL, 24h

Here, the operand 24h is being assigned to another operand, which is a register in the
example above. The opcode "MOV" simply moves the operand 24h into the other
operand, AL.
Assembler:

An assembler is a translator that translates a program written in assembly language into
object files. Since computers only understand machine language, it is important that the
program be converted into a machine-compatible format. Assemblers are also used
together with higher-level languages: compiled languages such as C and C++ are first
translated into assembly files, and the assembler then converts these assembly files into
object files.
Linkers and Link Libraries:

The linker is a computer program that takes a bunch of object files as input and combines
them into a single executable file. Initially, programs written by programmers are
compiled by compilers into object files, which are then fed as input to the linker. The
linker converts these files into an executable, or even into library data for further use.

Link libraries are collections of routines and variables that are used to bind, or link, a
compiled program into an executable file. There are two types of link libraries:
Dynamic-Link Libraries (DLL) and statically linked libraries.
Debugger:

A debugger is software that is used to find bugs in a program. No one is perfect; humans
are prone to make mistakes, and programmers are no different. A script written for a piece
of software can easily contain a thousand or more lines of code, and finding errors in a
thousand lines of code is not an easy job. For this, software called a debugger was created:
it figures out whether there is an error in the program and, if there is, locates it for the
programmer. This saves tons of time in developing the software rather than wasting it on
hunting for errors. It is, however, important to note that a debugger is a smart program,
but it can only locate where a program goes wrong; it cannot find logical errors. Logical
errors arise from the logic set by the programmer, which only the developer can
understand and solve.

Editor:

An editor is a program that allows users to write text, read text, and save it. We have tons
of editors available to us. Editors are used for various purposes, and there are many
editors specifically designed for programmers that actually help with proper syntax,
syntax coloring and much more. Some modern editors are Notepad++, Sublime Text,
Vim, Geany, etc. Not to mention, these editors are free and are used by many professional
programmers. In a nutshell, an editor is a program that lets you store text in a file.

Quick overview of the process:

As we can see from the diagram above, the job of the linker is to link the object files with
the libraries. Ultimately, an executable file is produced to run on the operating system,
e.g., Windows.

Computer System Level Hierarchy

The computer system level hierarchy is the combination of different levels that connect
the computer with the user and make the use of the computer possible. It also describes
how the computational activities are performed on the computer, and it shows all the
elements used at the different levels of the system. The computer system level hierarchy
consists of seven levels:
Level-0:

It is related to digital logic. Digital logic is the basis for digital computing and provides
a fundamental understanding of how circuits and hardware communicate within a
computer. It consists of various circuits and gates etc.

Level-1:

This level is related to control. Control is the level where microcode is used in the
system. Control units are included in this level of the computer system.

Level-2:

This level consists of machines. Different types of hardware are used in the computer
system to perform different types of activities. It contains instruction set architecture.

Level-3:

System software is a part of this level. System software is of various types. System
software mainly helps in operating the process and it establishes the connection
between hardware and user interface. It may consist operating system, library code, etc.

Level-4:

Assembly language is the next level of the computer system. All high-level languages are
ultimately translated into assembly language, which in turn is converted into the machine
code that the hardware executes. Assembly code is written at this level.
Level-5:

This level of the system contains high-level language. High-level language consists of
C++, Java, FORTRAN, and many other languages. This is the language in which the
user gives the command.

Level-6:

This is the last level of the computer system hierarchy. It consists of users and executable
programs.

Instruction Set Architecture (ISA)


An instruction set architecture (ISA) defines the set of basic operations a computer must
support. This includes the functional definition of operations and precise descriptions of
how to invoke and access them. An ISA is independent of microarchitecture, which
refers to the implementation of an ISA in a processor. A single ISA can have different
microarchitecture implementations. Typically, an ISA will include instructions for data
handling and memory operations, arithmetic and logic operations, control flow
operations, and coprocessor instructions.

An ISA also defines the maximum bit length for all instructions, as well as how an
instruction is encoded. Having a definition of an ISA allows hardware and software
development to be separated from each other. This allows a company to develop
hardware while multiple other companies can develop software knowing it will run on
that hardware.

There are two major classifications of ISA: CISC and RISC. Complex instruction set
computer, or CISC, types include many specialized instructions that are of use to
particular programs, but not universal. A CISC program will typically use fewer
instructions but each instruction will take more cycles.

Reduced instruction set computer, or RISC, types have a smaller, optimized set of
generalized, simple instructions with separate instructions for load/store (rather than
load/store being part of another instruction). A RISC program will typically use a greater
number of instructions but each instruction will take one clock cycle. Other characteristic
features of RISC processors are simultaneous execution of parts through pipelining and
a large number of registers.
The RISC concept was developed in the 1980s at Stanford University (MIPS) and the
University of California, Berkeley (RISC, commercialized as SPARC). The term CISC was
coined only afterward and generally referred to everything that is not RISC.

Very long instruction word (VLIW) architectures break instructions into basic operations
that can be performed by the processor in parallel, called instruction-level parallelism
(ILP). Each VLIW instruction encodes multiple operations and the method relies on the
compiler to determine which operations can execute in parallel. The goal is to reduce
hardware complexity, using processors that have relatively simple control logic because
they do not perform any dynamic scheduling or reordering of operations.

Examples of instruction set

 ADD - Add two numbers together.

 COMPARE - Compare numbers.

 IN - Input information from a device, e.g., keyboard.

 JUMP - Jump to designated RAM address.

 JUMP IF - Conditional statement that jumps to a designated RAM address.

 LOAD - Load information from RAM to the CPU.

 OUT - Output information to device, e.g., monitor.

 STORE - Store information to RAM.
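
As a hedged sketch using only the generic instructions listed above (the addresses A, B and C are invented for illustration), a tiny program might read:

LOAD A - load the value stored at RAM address A into the CPU

ADD B - add the value stored at RAM address B to it

STORE C - store the result back to RAM address C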

Basic Computer Organization



Microprocessor

The microprocessor is the controlling element in a computer system and is sometimes
referred to as the chip. The microprocessor is the main hardware that drives the computer.
It is mounted on a Printed Circuit Board (PCB) and is used in all electronic systems such
as computers, calculators, digital systems etc. The speed of the CPU depends upon the
type of microprocessor used.

▪ The Intel 4004 was the first microprocessor to contain all of the components of
a CPU on a single chip, with a 4-bit bus width.
▪ Some of the popular microprocessors are the Intel Dual Core, Pentium IV, etc.
Memory Unit

Memory is the part of the computer which holds data and instructions. Memory is an integral
component of the CPU. The memory unit consists of primary memory and secondary memory.

Primary Memory

The primary memory, or main memory, of the computer is used to store data and instructions during
the execution of instructions. Primary memory is of two types: Random Access Memory (RAM)
and Read Only Memory (ROM).

Random Access Memory (RAM) directly provides the required information to the processor.
RAM is a volatile memory. It provides temporary storage for data and instructions. RAM is
classified into two categories.

1. Static Random Access Memory (SRAM).


2. Dynamic Random Access Memory(DRAM).

Read Only Memory (ROM) is used for storing standard processing programs that permanently
reside in the computer. Generally, designers program ROM chips at the time of manufacturing
the circuits. ROM is a non-volatile memory; it can only be read, not written.

ROM is classified into three categories

1. Programmable ROM(PROM).
2. Erasable Programmable ROM(EPROM).
3. Electrically Erasable Programmable ROM (EEPROM).
Secondary Memory

Secondary memory, also known as secondary storage or auxiliary memory, is used for storing data
and instructions permanently, e.g., hard disks, CDs, DVDs, etc.

Tit-Bits

▪ A buffer is a temporary storage area where a register holds data for further execution.
▪ The accumulator is a register in the CPU in which intermediate arithmetic and logic
results are stored.
▪ Reduced Instruction Set Computer (RISC) and Complex Instruction Set
Computer (CISC) are the two kinds of microprocessors classified on the basis of
instruction set.
▪ The performance of a computer is affected by the size of its registers, the size of RAM,
the speed of the system clock and the size of cache memory.
▪ The speed of the processor is measured in millions of cycles per second, or megahertz
(MHz).
Interconnection of Units

The CPU sends data, instructions and information to the components inside the computer as well
as to the peripherals and devices attached to it. A bus is a set of electronic signal pathways that
allows information and signals to travel between components inside or outside of a computer. The
features and functionality of a bus are as follows:

▪ A bus is a set of wires used for interconnection, where each wire can carry one bit of
data.
▪ A computer bus can be divided into two types; internal bus and external bus.
▪ The internal bus connects components inside the motherboard, like the CPU and
system memory. It is also called the system bus.
▪ The external bus connects the different external devices (peripherals, expansion slots,
I/O ports and drive connections) to the rest of the computer. It is also referred to as
the expansion bus.
▪ The command to access the memory or the I/O device is carried by the control bus.
▪ The address of I/O device or memory is carried by the address bus. The data to be
transferred is carried by the data bus.
Motherboard

The main circuit board contained in any computer is called a motherboard. It is also known as
the mainboard, logic board, system board or planar board. The motherboard is the biggest piece
of silicon housed in the system unit of a computer. All the other electronic devices and circuits of
the computer system are attached to this board, like the CPU, ROM, RAM, expansion slots, PCI
slots and USB ports. It also includes controllers for devices like the hard drive, DVD drive,
keyboard and mouse. In other words, the motherboard makes everything in a computer work
together.

Instruction Cycle:

The instruction cycle represents the sequence of events that takes place as an instruction is read
from memory and executed.

A simple instruction cycle consists of the following steps

▪ Fetching the instruction from memory.
▪ Decoding the instruction for operation.
▪ Executing the instruction.
▪ Storing the result in memory.
Instruction Format

The computer understands instructions only in terms of 0s and 1s, which is called machine
language. A computer program is a set of instructions that describe the steps to be performed for
carrying out a computational task. The processor must have two inputs: instructions and data. The
instructions tell the processor what actions need to be performed on the data. An instruction is
divided into two parts: the operation (op-code) and the operand.

The op-code represents the action that the processor must execute, and the operand defines the
parameters of the action and depends on the operation.
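
A minimal C sketch of splitting an instruction into its two fields. The 16-bit width, the 4-bit op-code and the 12-bit operand are illustrative assumptions, not a real instruction format:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t instr = 0x1A05;            /* a made-up 16-bit instruction */

    uint16_t opcode  = instr >> 12;     /* top 4 bits: the action       */
    uint16_t operand = instr & 0x0FFF;  /* low 12 bits: the parameter   */

    printf("opcode = %u, operand = 0x%03X\n", opcode, operand);
    /* prints: opcode = 1, operand = 0xA05 */
    return 0;
}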

Tit-Bits:
▪ The machine cycle is defined by the time it takes to fetch two operands from registers,
perform the ALU operation and store the result in a register.
▪ Pipelining improves execution speed by putting the execution steps of several
instructions in parallel; it is also called instruction prefetch.
▪ Sockets are the connecting points of chips on the motherboard.
▪ Generally, the word computer refers to the central processing unit plus external memory.
▪ The Load instruction is used for loading data into the CPU accumulator register from
memory.
▪ The box that comes along with your desktop computer, in which all the electronic
components of your computer are contained, is called the system unit.
The Need for a Memory Hierarchy
CPU registers are at the top, and a fast cache memory sits between the CPU and main
memory. The hard disk is used, via the technique of virtual memory, to expand the capacity
of main memory. In computer terminology this arrangement is known as the memory
hierarchy. It is applied to get a large memory space at low cost.
Fast memory not only has low storage capacity; it also needs a power supply for as long

as the information must be stored, and it is costly. Memories with lower cost have higher

access times. Thus the trade-off between cost and access time paves the way for the memory

hierarchy, which is used to get a high access rate at low memory cost. A computer system

uses a memory system which comprises these groups of memories:

Internal or CPU memories:

These are internal to the processor. They consist of a small set of high-speed
registers and are used as temporary locations where the actual processing takes
place.

Primary memory:

This is a fast memory, but not as fast as the processor's internal memory. The storage

capacity is small and the cost per bit of storage is high. This memory is accessed directly

by the processor. It stores the programs and data which are currently needed by the CPU.

Secondary memory:

This memory provides scope for larger data storage. Its access time is higher than that of

main memory, and it is permanent in nature. The following diagram shows the memory hierarchy.

The following memories are used here:

• CPU (registers)
• Cache memory
• Main memory
• Secondary storage, and
• Mass storage
At the top of the memory hierarchy, memory has faster access time, less capacity and a

higher cost per bit stored. At the bottom there is larger storage capacity, slower access time

and a lower cost per bit stored.

The cache memory is used between the CPU and the main memory to enhance the

effective speed of main memory. After main memory comes secondary memory, which

can be divided into magnetic disks and magnetic tapes; these are used for long-term

storage. Magnetic tapes are usually used for backup purposes.
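
As a hedged, illustrative calculation (the numbers are invented for this example, not taken from a real system): if a cache access takes 1 ns, main memory takes 100 ns, and 95% of accesses hit in the cache, then the average access time is

0.95 × 1 ns + 0.05 × 100 ns = 0.95 + 5 = 5.95 ns

which is much closer to cache speed than to main-memory speed; this is why the hierarchy delivers a high access rate at low cost.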

CPU's Clock Speed

Alternatively referred to as clock rate or processor speed, the clock speed is the speed at
which the microprocessor executes instructions, paced by each tick of the clock. The CPU
requires a fixed number of clock ticks, or cycles, to execute each instruction.

The higher the frequency of the CPU's clock, the more logical operations it can perform
per second. So, as the frequency of the CPU's clock increases, the time required to perform
tasks decreases.

Clock speeds are measured in MHz, 1 MHz representing 1 million cycles per second, or
in GHz, 1 GHz representing 1 billion (a thousand million) cycles per second. The higher
the CPU speed, the better a computer performs, in a general sense. Other components, like
RAM, the hard drive, the motherboard, and the number of processor cores (e.g., dual-core
or quad-core), also affect the computer's speed.

The CPU speed determines how many calculations it can perform in one second
of time. The higher the speed, the more calculations it can perform, thus making
the computer faster. While several brands of computer processors are available,
including Intel and AMD, they all use the same CPU speed standard to determine
what speed each of their processors runs at. If a processor has two or four cores,
the computer's performance can increase even if the CPU speed remains the
same: a dual-core 3.0 GHz processor is capable of performing roughly double the
number of calculations of a single-core 3.0 GHz processor.
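
As a rough, hedged illustration of the arithmetic (idealized; real processors overlap instructions): a CPU clocked at 3 GHz completes 3 × 10⁹ cycles per second, so an instruction that needs 6 cycles takes

6 / (3 × 10⁹) s = 2 ns

and such a CPU could retire at most 3 ÷ 6 = 0.5 billion of these instructions per second per core.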

Disk Access Time

Disk access time is the time required for a computer to process a data request from the
processor and retrieve the data from a storage device such as a hard disk.

There are two main components in disk access time. The first component is the seek time,
which is spent while the read/write arm seeks the desired track. The second component is
the latency, or wait time, which is spent while the read/write head waits for the desired
sector on the track to spin around under it.

Access time can therefore be calculated as follows:

Access Time = track seek time + sector latency time

Access to data on disk is measured in milliseconds, which is much slower than the
processing speeds of CPUs. I/O remains slow; it cannot match the speed improvements
of modern processors.

Example:

Disk Parameters:

Transfer size is 8 KB

Advertised average seek is 12 ms

Disk spins at 7200 RPM

Transfer rate is 4 MB/sec

Controller overhead is 2 ms

Assume that the disk is idle, so there is no queuing delay

Question: What is the average disk access time for a sector?

Average seek + average rotational delay + transfer time + controller overhead

= 12 ms + 0.5/(7200 RPM/60) + 8 KB/(4 MB/s) + 2 ms

= 12 + 4.17 + 2 + 2 ≈ 20 ms
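
The short C program below just reproduces the arithmetic of this worked example:

#include <stdio.h>

int main(void) {
    double seek_ms      = 12.0;                          /* average seek time      */
    double rpm          = 7200.0;
    double rot_delay_ms = 0.5 / (rpm / 60.0) * 1000.0;   /* half a rotation, in ms */
    double transfer_ms  = 8.0 / 4000.0 * 1000.0;         /* 8 KB at 4 MB/s         */
    double overhead_ms  = 2.0;                           /* controller overhead    */

    printf("average access time = %.2f ms\n",
           seek_ms + rot_delay_ms + transfer_ms + overhead_ms);
    /* prints: average access time = 20.17 ms */
    return 0;
}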
