IT Notes Sem1 BBA Unit 1
Computer
Types of Computer
Computers can be classified on two bases: size and data-handling capability. We will discuss each type of
computer in detail. Let us first list the types of computers.
Super Computer
Mainframe computer
Mini Computer
Workstation Computer
Personal Computer (PC)
Server Computer
Analog Computer
Digital Computer
Hybrid Computer
Tablets and Smartphones
Now, we are going to discuss each of them in detail.
Supercomputer
When we talk about speed, the first name that comes to mind is the supercomputer. Supercomputers are
the biggest and fastest computers (in terms of data-processing speed). They are designed to process huge
amounts of data, executing trillions of instructions per second, thanks to the thousands of interconnected
processors they contain. They are mainly used in scientific and engineering applications such as weather
forecasting, scientific simulations, and nuclear energy research. The first supercomputer was developed by
Seymour Cray in 1976.
Characteristics of Supercomputers
Supercomputers are the fastest computers, and they are also very expensive.
They can perform up to ten trillion individual calculations per second, which is what makes them so fast.
They are used in the stock market and by big organizations for managing online currencies such as Bitcoin.
They are used in scientific research for analyzing data obtained from exploring the solar system, satellites, etc.
Mainframe computer
Mainframe computers are designed to support hundreds or thousands of users at the same time. They also
support multiple programs running simultaneously, so they can execute different processes in parallel. These
features make the mainframe computer ideal for big organizations such as banking and telecom companies,
which process high volumes of data.
Characteristics of Mainframe Computers
It is also an expensive computer.
It has high storage capacity and great performance.
It can process a huge amount of data (like data involved in the banking sector) very
quickly.
It runs smoothly for a long time and has a long life.
Minicomputer
A minicomputer is a medium-sized multiprocessing computer. It contains two or more processors and
supports 4 to 200 users at one time. A minicomputer is also known as a mid-range computer. Minicomputers
are used in places like institutes or departments for tasks such as billing, accounting, and inventory
management. It is smaller than a mainframe computer but larger than a microcomputer.
Characteristics of Minicomputer
It is lightweight, and because of its low weight it is easy to carry anywhere.
It is less expensive than a mainframe computer.
It is fast.
Workstation Computer
A workstation computer is designed for technical or scientific applications. It consists of a fast
microprocessor, with a large amount of RAM and a high-speed graphic adapter. It is a single-
user computer. It is generally used to perform a specific task with great accuracy.
Characteristics of Workstation Computer
It is expensive.
They are exclusively made for complex work purposes.
It provides large storage capacity, better graphics, and a more powerful CPU when
compared to a PC.
It is also used to handle animation, data analysis, CAD, audio and video creation, and
editing.
Personal Computer (PC)
A personal computer is also known as a microcomputer. It is a general-purpose computer designed for
individual use. It consists of a microprocessor as the central processing unit (CPU), memory, an input unit,
and an output unit. This kind of computer is suitable for personal work such as preparing an assignment,
watching a movie, or ordinary office work. Laptops and desktop computers are examples.
1. Servers: Servers are dedicated computers set up to offer services to clients. They are named after the
type of service they offer, e.g. security server, database server.
2. Workstation: These are computers designed primarily to be used by a single user at a time. They run
multi-user operating systems. They are the ones we use for our day-to-day personal or commercial work.
3. Information Appliances: These are portable devices designed to perform a limited set of tasks such as
basic calculations, playing multimedia, and browsing the internet. They are generally referred to as mobile
devices. They have very limited memory and flexibility and generally run on an "as-is" basis.
4. Embedded Computers: These are computing devices used inside other machines to serve a limited set of
requirements. They follow instructions from non-volatile memory and do not require a reboot or reset. The
processing units used in such devices are built for those basic requirements only and are different from the
ones used in personal computers, better known as workstations.
Embedded computers are less interactive for the users, whereas personal computers are more interactive.
Humanware
Humanware is the method of adding a human facet into the development of computer programs.
The main goal of developing humanware is to make hardware and software as functional as
possible.
Over the years, computer manufacturers have been striving toward improving UX continually. In
a business sense, good UX effectively improves customer retention by 5%, which is enough to
rake in a profit of as much as 25%. And since users lie at the center of all business operations,
satisfying their needs should always be a priority.
That is the idea behind developing humanware. Before companies set their goals or start the
design process, they first need to understand what their target users need. For example, when
building a desktop, the humanware factor comes in when manufacturers consider the overall
design. Even the visually or hearing-impaired should be able to use the system. That way, even
the differently abled would be enticed to purchase and use the product. They may even
recommend it to peers. Systems that have poor UIs or are hard to operate, however, provide poor UX
and therefore do not sell well.
Step 1: Identifying Target Users
To understand what user capabilities must be present in a product, the development team must
first identify their target users. That may require as many details as possible to address many
needs. These details include demographics (age, location, career, familial status, etc.),
motivations (mindset, power, incentivization, fears, etc.), previous product experience, and
everything else that helps developers get to know target users.
This first step is often the most crucial. It takes some time to finish because developers need to
be as exhaustive as possible. It also requires considering how customers will evolve.
Step 2: Setting Goals
Using the target users and their needs as pegs, the development team then needs to come up with
goals. This step requires developers to think of all possible humanware capabilities they can
integrate into the system they are building. If possible, all these objectives must be measurable
for future improvements.
Step 3: Building a Prototype
The next step is building a prototype. This device will allow developers to test if the product can
indeed meet the goals they set. It involves asking actual users to test the prototype. The
development team gets user feedback and incorporates suggestions or recommendations into the
current design to address flaws. The prototype should undergo various testing cycles until the
product passes quality and usability testing.
Operating System Definition and Function
In a computer system (comprising hardware and software), the hardware can only understand machine
code (in the form of 0s and 1s), which does not make any sense to a naive user.
We need a system that can act as an intermediary and manage all the processes and resources present
in the system.
The purpose of an operating system is to provide an environment in which a user can execute programs
in a convenient and efficient manner.
Batch Operating System
In a batch operating system, access is given to more than one person; they submit their respective jobs
to the system for execution.
The system puts all of the jobs in a queue on a first-come, first-served basis and then executes them one
by one. The users collect their respective output when all the jobs have been executed.
Jobs were prepared offline on punch cards or magnetic tapes by an operator, who grouped similar jobs
into batches. The batches were then submitted to the CPU.
The OS provides a batch to the CPU, and the CPU executes the jobs in the batch one by one. The user has
to come back after some period of time (a week or more) to collect the output.
A major disadvantage was that the CPU had to sit idle for long periods (non-preemption), meaning the CPU
was not executing multiple jobs together.
The purpose of this operating system was mainly to transfer control from one job to another
as soon as the job was completed. It contained a small set of programs called the resident
monitor that always resided in one part of the main memory. The remaining part is used for
servicing jobs.
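To make the first-come, first-served batch idea concrete, here is a minimal illustrative sketch in Python (the job names and run times are made up, and a real batch monitor is of course not written this way):

    from collections import deque

    # Jobs are queued in arrival order as (name, run time); the CPU runs them
    # strictly one after another, so output is only available once a job finishes.
    batch = deque([("J1", 5), ("J2", 2), ("J3", 1)])

    clock = 0
    while batch:
        job, run_time = batch.popleft()   # first come, first served
        clock += run_time                 # the CPU is dedicated to this job until it ends
        print(f"{job} finished at time {clock}")
    # Output: J1 at 5, J2 at 7, J3 at 8 -- every job waits for all the jobs ahead of it,
    # which is also why a very long first job can starve the rest of the batch.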
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency as it eliminates the CPU idle time
between two jobs.
Disadvantages of Batch OS
1. Starvation
For Example:
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very
high, then the other four jobs will never be executed, or they will have to wait for a very long
time. Hence the other processes get starved.
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's input. If a job
requires the input of two numbers from the console, then it will never get it in the batch
processing scenario since the user is not present at the time of execution.
Multiprogramming Operating System
In a multiprogramming environment, when a process performs its I/O, the CPU can start executing other
processes. Therefore, multiprogramming improves the efficiency of the system.
For example, if a user submits processes p1, p2 and p3, the CPU will process p1 first, then p2, then p3,
and so on. The important point is that the CPU executes a process completely unless the process itself
says that it wants to perform some input or output operation.
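A rough illustrative sketch of the multiprogramming idea (the process names and burst lengths are invented; this is a toy simulation, not a real scheduler): whenever the running process asks for I/O, the CPU immediately switches to the next ready process instead of sitting idle.

    from collections import deque

    # Each made-up process is (name, bursts), where bursts alternate CPU work and I/O.
    processes = deque([
        ("P1", deque([("cpu", 3), ("io", 2), ("cpu", 1)])),
        ("P2", deque([("cpu", 2), ("io", 1)])),
        ("P3", deque([("cpu", 4)])),
    ])

    while processes:
        name, bursts = processes.popleft()
        kind, length = bursts.popleft()
        if kind == "cpu":
            print(f"CPU runs {name} for {length} time units")
        else:
            # While this process waits for its I/O, the CPU is free for the next one.
            print(f"{name} starts I/O for {length} time units; CPU switches to another process")
        if bursts:                        # the process still has remaining work
            processes.append((name, bursts))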
Advantages of Multiprogramming OS
o Throughput increased as the CPU always had a program to execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various systems resources are
used efficiently, but they do not provide any user interaction with the computer system.
Multiprocessing Operating System
In symmetric multiprocessing, one OS controls all the CPUs, and each CPU has equal rights; all the CPUs
are in a peer-to-peer relationship.
In asymmetric multiprocessing, there is a master processor that gives instructions to all the other
processors; it follows a master-slave relationship.
Symmetric multiprocessing provides proper load balancing, improved fault tolerance, and decreases the
possibility of a CPU bottleneck. It is complicated since all the CPUs share a memory, and a processor
failure in symmetric multiprocessing reduces computing capacity.
Example:
In asymmetric multiprocessing, if the master processor fails, one of the slave processors assumes control
of the execution; if a slave processor fails, another slave processor takes over. The design is simple since
a single processor controls the data structures and all system actions. Assume there are four CPUs named
C1, C2, C3, and C4, where C4 is the master processor and allocates tasks to the other CPUs. If C1 is
assigned process P1, C2 is assigned process P2, and C3 is assigned process P3, each processor will only
work on the processes that have been assigned to it.
Symmetric vs Asymmetric Multiprocessing
Basic: In symmetric multiprocessing, each CPU executes the OS operations; in asymmetric
multiprocessing, only the master processor carries out the OS functions.
Processor: In symmetric multiprocessing, all processors use a common ready queue, or each may have
its own private ready queue; in asymmetric multiprocessing, the master processor assigns processes to
the slave processors, or they have some predefined tasks.
Failure: In symmetric multiprocessing, when a CPU fails, the system's computing capacity decreases; in
asymmetric multiprocessing, if the master processor fails, control is passed to a slave processor, and if a
slave processor fails, its task is passed to a different processor.
Real-Time Operating System
Real-time systems are used in applications such as military systems; for example, if a missile is to be
dropped, it must be dropped with a certain precision.
Language Processors
When humans want to express their feelings, thoughts, and ideas to other humans, we communicate
through languages. In the same way, we need languages to communicate with a computer, and those
languages are called programming languages. But we need a language translator in between, because the
computer understands only machine language (in the form of 0s and 1s) and it is hard for us to give
instructions directly in machine language. So we use language processors (translators), which are special
system software used to convert programming languages into machine code.
Hence, Compiler, Interpreter, and Assembler are types of language processors that convert
programming languages to machine language (binary code). Compilers and interpreters are used
to convert High-Level Language into machine language. Assemblers are used to convert Low-
level Language or Assembly Language Code into Machine Language (Binary code).
Assembler
It was the first interface through which communication between machines and humans became
possible.
Assembly language, or low-level language, is a language in which we use mnemonics (instructions) in place
of machine language. Assembly language is machine-dependent, which implies that the mnemonics also
depend on the architecture of the machine.
We need an assembler to fill the gap between human and machine so that they can communicate with
each other. Code written in assembly language consists of mnemonics (instructions) such as ADD, MUL,
MUX, SUB, DIV, MOV and so on, and the assembler converts these mnemonics into binary code. Again,
these mnemonics depend upon the architecture of the machine.
An assembler works over the given input in two different phases: the first phase and the second phase.
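The following toy sketch shows the essence of what an assembler does; the mnemonics, registers and opcode bit patterns are invented for illustration and do not belong to any real instruction set:

    # Made-up opcode and register encodings for a toy instruction set.
    OPCODES = {"MOV": "0001", "ADD": "0010", "SUB": "0011", "MUL": "0100"}
    REGISTERS = {"R1": "01", "R2": "10"}

    def assemble(line):
        # Translate one mnemonic instruction such as "ADD R1 R2" into a bit string.
        mnemonic, *operands = line.split()
        return OPCODES[mnemonic] + "".join(REGISTERS[op] for op in operands)

    program = ["MOV R1 R2", "ADD R1 R2", "SUB R2 R1"]
    for instruction in program:
        print(f"{instruction:<12} -> {assemble(instruction)}")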
Advantages of Assembler
Assemblers are used in computer forensics, brute force hacking etc. where it is important
to determine exactly what is going on.
Disadvantages of Assembler
It is difficult to debug.
We have to write different assembly codes for 8-bit, 16-bit, 32-bit, 64-bit machines which
is a tedious task and difficult to maintain.
Interpreter
It converts source programs written in a high-level programming language into machine code line by line,
which means it translates and executes one single line at a time. If there is an error in the program, the
interpreter stops its translation at that line and proceeds with execution only when the error is removed.
This process continues till the interpreter reaches the end of the program.
In the case of an interpreter, each line is converted and then executed immediately; as a result, it
requires less memory space as compared to the compiler.
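As a rough illustration of line-by-line interpretation (using Python's own exec purely for demonstration), note how each line is translated and run before the next one is even looked at, and how execution stops at the first faulty line:

    source_lines = [
        "x = 10",
        "y = x + 5",
        "print(y)",
        "z = y / 0",                 # the error appears only when this line is reached
        "print('never reached')",
    ]

    env = {}
    for number, line in enumerate(source_lines, start=1):
        try:
            exec(line, env)          # translate and execute this single line
        except Exception as error:
            print(f"Error on line {number}: {error}")
            break                    # interpretation stops at the faulty line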
Advantages of Interpreter
Interpreters give execution control to programmers as they can see their code running line
by line.
Disadvantages of Interpreter
It takes time for converting and executing the instructions line by line.
It is less secure for privacy because we need to share the actual program.
To run the code on another machine, the interpreter must be installed on that machine.
Compiler
It is a translator program that converts source code written in a High-Level language like Java,
C++, etc. to equivalent machine language in one go.
It converts the entire program to executable object code if the entire program is error-free.
However, if there are errors in the program, the compiler reports them together at the end of compiling
the entire program, and the errors must then be removed for successful compilation of the source code.
It converts the source program to object code (a combination of binary numbers) which can be stored, so
we can run it each time we need to execute the program; this eliminates recompilation.
At the same time, due to the generation of an intermediate file, it takes up more memory as compared to
other language processors. Compilation as a whole is done in the following phases: lexical analysis, syntax
analysis, semantic analysis, intermediate code generation, code optimization, and code generation,
supported by the symbol table and the error handler.
Phase 1 - Lexical Analyzer - It scans the source code and breaks it into tokens.
Example:
x = y + 10
Tokens:
x    Identifier
=    Assignment operator
y    Identifier
+    Addition operator
10   Number
Symbol Table – It is a data structure used and maintained by the compiler, consisting of all the identifiers'
names along with their types. It helps the compiler to function smoothly by finding the identifiers quickly.
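A minimal sketch of lexical analysis and a symbol table for the statement x = y + 10 (the token categories and regular expressions below are illustrative assumptions, not taken from any particular compiler):

    import re

    source = "x = y + 10"
    token_spec = [
        ("NUMBER",     r"\d+"),
        ("IDENTIFIER", r"[A-Za-z_]\w*"),
        ("ASSIGN",     r"="),
        ("PLUS",       r"\+"),
        ("SKIP",       r"\s+"),
    ]
    pattern = "|".join(f"(?P<{name}>{regex})" for name, regex in token_spec)

    symbol_table = {}                         # identifier name -> attributes
    for match in re.finditer(pattern, source):
        kind, text = match.lastgroup, match.group()
        if kind == "SKIP":
            continue
        print(f"{text:<4} {kind}")
        if kind == "IDENTIFIER":
            # record each identifier so that later phases can look it up quickly
            symbol_table.setdefault(text, {"type": "unknown"})

    print("Symbol table:", symbol_table)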
Phase 2 - Syntax Analyzer - It takes the tokens as input and checks them against the grammar rules to
build a parse tree.
Example:
Any identifier or number is an expression.
If x is an identifier and y + 10 is an expression, then x = y + 10 is a statement.
Phase 3-Semantic Analyzer – It verifies the parse tree, whether it’s meaningful or
not. It furthermore produces a verified parse tree. It also does type checking, Label
checking, and Flow control checking.
Phase 4 - Intermediate Code Generator - It generates intermediate code, which is a form that can be
readily executed by a machine. There are many popular intermediate representations, for example
three-address code. The intermediate code is converted to machine language by the last two phases,
which are platform dependent.
Phase 5 - Code Optimizer - It transforms the code so that it consumes fewer resources and runs faster.
The meaning of the code being transformed is not altered. Optimization can be categorized into two
types: machine-dependent and machine-independent.
Example:
a = b + 60.0
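One common optimization is constant folding, sketched below on a made-up three-address-style intermediate code (assuming, purely for illustration, that the source statement was a = b + 2 * 30.0): operations whose operands are all constants are computed at compile time, so the statement effectively becomes a = b + 60.0 with no run-time multiplication.

    # Made-up intermediate code: (target, operator, left operand, right operand).
    code = [
        ("t1", "*", 2, 30.0),        # both operands are constants -> fold to 60.0
        ("t2", "+", "b", "t1"),
        ("a",  "=", "t2", None),
    ]

    def fold_constants(instructions):
        constants, optimized = {}, []
        for target, op, left, right in instructions:
            left = constants.get(left, left)       # substitute already-known constants
            right = constants.get(right, right)
            if op in ("+", "*") and isinstance(left, (int, float)) and isinstance(right, (int, float)):
                constants[target] = left * right if op == "*" else left + right
            else:
                optimized.append((target, op, left, right))
        return optimized

    print(fold_constants(code))      # [('t2', '+', 'b', 60.0), ('a', '=', 't2', None)]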
The compiler operates in various phases, and each phase transforms the source program from one
representation to another.
The six phases of compiler design are 1) Lexical analysis 2) Syntax analysis 3) Semantic analysis
4) Intermediate code generation 5) Code optimization 6) Code generation.
Lexical analysis is the first phase, in which the compiler scans the source code.
Syntax analysis is all about discovering structure in the text.
Semantic analysis checks the semantic consistency of the code.
Once the semantic analysis phase is over, the compiler generates intermediate code for the
target machine.
The code optimization phase removes unnecessary code lines and arranges the sequence of
statements.
The code generation phase takes its input from the code optimization phase and produces machine
code or object code as a result.
A symbol table contains a record for each identifier, with fields for the attributes of the
identifier.
The error handling routine handles errors and reports them during many phases.
Advantages of Compiler
It produces an executable file that can run without the need for source code.
Using a compiler is more secure because the actual source program can be hidden which
makes it private and secure.
The machine code in the executable file is native to the machine, which makes the compiled program
well optimized.
Disadvantages of Compiler
The source code needs to be recompiled every time there is a change in the code.
Assembler vs Compiler
Compiler: Converts a program written in a high-level language into machine-level language. It works in
the following phases: lexical analysis, syntax analysis, semantic analysis, intermediate code generation,
code optimization, and code generation, supported by the symbol table and error handler.
Assembler: Converts assembly code into machine code. It works over the given input in two phases:
the first phase and the second phase.
Compiler vs Interpreter
Compiler: It translates a high-level language into machine code all at once. It creates and stores an
object file. Memory consumption is higher in the case of the compiler.
Interpreter: It translates a high-level language into machine code line by line. It does not create an
object program. An interpreter takes relatively less memory than the compiler.
Why is the compiler better than the assembler and the interpreter? Compilers are preferred over
assemblers and interpreters for various reasons:
Compilers translate and execute the code faster than assemblers and interpreters.
The compiler converts the code into Object file which can be used whenever we need to
execute the program, therefore it eliminates the need to re-compile.
All the errors are found and displayed together at the end.
Summing Up
All three language processors are used to convert programming languages into equivalent machine code.
Compilers and interpreters convert high-level languages, whereas an assembler is used to convert
low-level language. Nowadays, most languages like Java and C++ are converted using a compiler, whereas
Python uses an interpreter. The use of an assembler is rare, and it is mostly used by computer experts
and hackers. The most widely used language translator among the three is the compiler.
Features of Memory
1. Location: It represents the internal or external location of the memory in a computer. Internal memory is
built into the computer and is also known as primary memory; examples of primary memory are registers, cache
and main memory. External memory is a storage device separate from the computer, such as a disk, tape or
USB pen drive.
2. Capacity: It is the most important feature of computer memory. Storage capacity can vary in external
and internal memory. External devices' storage capacity is measured in terms of bytes, whereas the
internal memory is measured with bytes or words. The storage word length can vary in bits, such as 8,
16 or 32 bits.
3. Access Methods: Memory can be accessed through four methods (a small sketch contrasting sequential and
random access appears after this list).
o DMA: As the name specifies, Direct Memory Access (DMA) is a method that allows input/output
(I/O) devices to access or retrieve data directly from the main memory.
o Sequential Access Method: The sequential access method is used in a data storage device to read
stored data sequentially from the computer memory, whereas data read from random access
memory (RAM) can be accessed in any order.
o Random Access Method: It is a method used to access data from memory at random. This method
is the opposite of SAM. For example, to go from A to Z in random access, we can directly jump
to any specified location, while in the sequential method we have to pass through all the intervening
locations from A to Z to reach the particular memory location.
o Associative Access Method: It is a special type of memory access that optimizes search performance
by locating stored information based on its content rather than its memory address.
4. Unit of transfer: As the name suggests, the unit of transfer is the number of bits that can be read from or
written into a memory device at a time. It can differ for external and internal memory.
o Internal memory: The unit of transfer is usually equal to the word size.
o External memory: The unit of transfer is not equal to the word length; it is usually larger and is
referred to as a block.
5. Performance: The performance of memory is majorly described by three parameters.
o Access Time: For random-access memory, it is the total time taken to perform a read or write
operation from the instant an address is presented to the memory.
o Memory Cycle Time: The total time required to access a memory block plus any additional time
required before a second access can start.
o Transfer Rate: The rate at which data can be transferred into or out of a memory device. It can
differ between external and internal devices.
6. Physical types: It defines the physical type of memory used in a computer such as magnetic,
semiconductor, magneto-optical and optical.
7. Organization: It defines the physical structure of the bits used in memory.
8. Physical characteristics: It specifies the physical behavior of the memory, such as volatile, non-volatile or
non-erasable memory. Volatile memory, known as RAM, requires power to retain the stored information; if any
power loss occurs, the stored data is lost. Non-volatile memory is permanent storage memory that retains
stored information even when the power is off. Non-erasable memory is a type of memory that cannot be
erased after manufacture, such as ROM, because ROM is programmed at the time of manufacture.
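As promised under the access-methods item above, here is a tiny illustrative contrast between sequential and random (direct) access; the "memory" is just a Python list and is not tied to any real hardware:

    memory = ["A", "B", "C", "D", "E", "F", "G", "H"]

    def sequential_read(target):
        steps = 0
        for cell in memory:           # like tape: every intervening cell is visited
            steps += 1
            if cell == target:
                return cell, steps
        return None, steps

    def random_read(index):
        return memory[index], 1       # like RAM: one step straight to the address

    print(sequential_read("F"))       # ('F', 6)  -> six cells visited on the way
    print(random_read(5))             # ('F', 1)  -> direct jump to position 5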
Classification of Memory
Computer memory is broadly classified into the Main (primary) memory, Auxiliary (secondary) memory and the
Cache memory. Main memory is used to keep programs or data while the processor is actively using them. When
a program or data is to be executed, the processor first loads the instructions or programs from secondary
memory into main memory, and then the processor starts execution. Accessing or executing data from primary
memory is faster because it is located closer to the CPU and is backed by cache and register memory that
provide a faster response.
The primary memory is volatile, which means the data in memory can be lost if it is not saved when a power
failure occurs. It is costlier than secondary memory, and the main memory capacity is limited as compared to
secondary memory.
Random Access Memory (RAM) is one of the faster types of main memory and is accessed directly by the CPU.
It is the hardware in a computer used to temporarily store data, programs or program results. It is used to read
and write data in memory while the machine is working. It is volatile, which means that if a power failure occurs
or the computer is turned off, the information stored in RAM is lost. All data stored in this memory can be read
or accessed randomly at any time.
There are two types of RAM:
o SRAM
o DRAM
DRAM: DRAM (Dynamic Random-Access Memory) is a type of RAM used for the dynamic storage of data. In
DRAM, each cell carries one bit of information. The cell is made up of two parts: a capacitor and a transistor.
The capacitor and transistor are so small that millions of them fit on a single chip; hence, a DRAM chip can hold
more data than an SRAM chip of the same size. However, the capacitor needs to be continuously refreshed to
retain information, because DRAM is volatile: if the power is switched off, the data stored in memory is lost.
SRAM: SRAM (Static Random-Access Memory) is a type of RAM used to store static data in memory. Data
stored in SRAM remains intact as long as the computer system has a power supply; however, the data is lost
when a power failure occurs.
SRAM vs DRAM
SRAM: Its access time is low (it is faster). It uses flip-flops to store each bit of information. It does not
require periodic refreshing to preserve the information.
DRAM: Its access time is high (it is slower). It uses a capacitor to store each bit of information. It requires
periodic refreshing to preserve the information.
Advantages of RAM
Disadvantages of RAM
Read-Only Memory (ROM) is a memory device or storage medium used to permanently store information inside
a chip. It is a read-only memory, so we can only read the stored information, data or programs, but cannot
write or modify anything. A ROM contains important instructions or program data required to start or boot a
computer. It is a non-volatile memory, meaning that the stored information is not lost even when the power is
turned off or the system is shut down.
Types of ROM
Advantages of ROM
1. It is a non-volatile memory, so stored information is not lost even when the power is turned off.
2. It is static, so it does not require refreshing the content every time.
3. Data can be stored permanently.
4. It is easy to test and store large data as compared to RAM.
5. These cannot be changed accidentally.
6. It is cheaper than RAM.
7. It is simple and reliable as compared to RAM.
8. It helps to start the computer and loads the OS.
Disadvantages of ROM
1. Stored data cannot be updated or modified; it can only be read.
2. It is a slower memory than RAM to access the stored data.
3. For an erasable ROM (EPROM), it takes around 40 minutes of exposure to high-intensity ultraviolet light to erase the existing data.
RAM vs ROM
RAM: Read and write operations can be performed. Data is lost when the power supply is turned off, since it is
volatile memory. Stored data needs to be refreshed. The chip is bigger than a ROM chip storing the same amount
of data. Types of RAM: DRAM and SRAM.
ROM: Only the read operation can be performed. Data is not lost when the power supply is turned off, since it is
non-volatile memory. Stored data does not need to be refreshed. The chip is smaller than a RAM chip storing
the same amount of data. Types of ROM: MROM, PROM, EPROM, EEPROM.
Secondary Memory is a permanent storage space that holds a large amount of data. Secondary memory is also
known as external memory and represents the various storage media (hard drives, USB drives, CDs, flash
drives and DVDs) on which computer data and programs can be saved on a long-term basis. However, it is
cheaper and slower than the main memory. Unlike primary memory, secondary memory cannot be accessed
directly by the CPU; instead, secondary memory data is first loaded into RAM (Random Access Memory) and
then sent to the processor to read and update. Secondary memory devices include magnetic disks such as hard
disks and floppy disks, optical discs such as CDs and CD-ROMs, and magnetic tapes.
Hard Disk
A hard disk is a computer's permanent storage device. It is a non-volatile disk that permanently stores data,
programs, and files, and it does not lose the stored data when the computer's power is switched off. Typically,
it is located inside the computer and connected to the motherboard; it stores and retrieves data using one or
more rigid, fast-rotating disk platters inside an air-sealed casing. It is a large storage device, found on every
computer or laptop, for permanently storing installed software, music, text documents, videos, the operating
system, and data until the user deletes them.
Floppy Disk
A floppy disk is a secondary storage device consisting of a thin, flexible disk with a magnetic coating for holding
electronic data such as computer files. It is also known as a floppy diskette and came in three sizes: 8 inch,
5.25 inch and 3.5 inch. The stored data of a floppy disk is accessed through a floppy disk drive. For a long time,
it was the main way to install a new program on a computer or to back up information. However, it is the oldest
type of portable storage device and can store only up to 1.44 MB of data. Since most programs were larger than
this, they required multiple floppy diskettes. Therefore, it is no longer used, due to its very low storage capacity.
CD (Compact Disc)
A CD is an optical disc storage device; CD stands for Compact Disc. It is used to store various data types such
as audio, videos, files, operating systems, back-up files, and any other information useful to a computer. A CD
is 1.2 mm thick and 12 cm in diameter and can store approximately 700 MB of data. It uses laser light to read
and write data on the disc.
Types of CDs
1. CD-ROM (Compact Disc Read-Only Memory): It is mainly used for the mass production of content such as
audio CDs, software and computer games at the time of manufacture. Users can only read data, text, music
and videos from the disc; they cannot modify or burn it.
2. CD-R (Compact Disc Recordable): A type of compact disc that can be written once by the user; after that,
it cannot be modified or erased.
3. CD-RW (Compact Disc Rewritable): A rewritable CD, on which stored data can be written and erased
repeatedly.
DVD Drive/Disc
DVD is an optical disc storage device; it stands for Digital Video Disc or Digital Versatile Disc. It has the same
size as a CD but can store a larger amount of data than a compact disc. It was developed in 1995 by four
electronics companies: Sony, Panasonic, Toshiba and Philips. DVD discs are divided into three types: DVD-ROM
(Read-Only Memory), DVD-R (Recordable) and DVD-RW (Rewritable or Erasable). A DVD can store multiple data
formats such as audio, videos, images, software and operating systems. The storage capacity of a DVD ranges
from 4.7 GB to 17 GB.
Blu Ray Disc (BD)
Blu-ray is an optical disc storage device used to store large amounts of data or high-definition video recordings
and to play other media files. It uses laser technology to read the data stored on the Blu-ray disc. It can store
more data at a greater density than a CD or DVD. For example, a compact disc allows us to store 700 MB of
data and a DVD provides up to 8 GB of storage capacity, while a single-layer Blu-ray disc provides about 25 GB
of space to store data.
Pen Drive
A pen drive is a portable device used to store data permanently and is also known as a USB flash drive. It is
commonly used to store and transfer data and is connected to a computer using a USB port. It has no movable
parts; it stores data on an integrated circuit chip. It allows users to store and transfer data such as audio,
videos and images between a computer and the pen drive. The storage capacity of pen drives ranges from
64 MB to 128 GB or more.
Cache Memory
It is a small-sized chip-based computer memory that lies between the CPU and the main memory. It is a faster,
high performance and temporary memory to enhance the performance of the CPU. It stores all the data and
instructions that are often used by computer CPUs. It also reduces the access time of data from the main memory.
It is faster than the main memory, and sometimes, it is also called CPU memory because it is very close to the
CPU chip. The following are the levels of cache memory.
1. L1 Cache: The L1 cache is also known as the onboard, internal, or primary cache. It is built into the CPU
itself. Its speed is very high, and its size varies from 8 KB to 128 KB.
2. L2 Cache: It is also known as the external or secondary cache, which offers fast access time for storing
temporary data. It is traditionally built on a separate chip on the motherboard rather than into the CPU like
the L1 cache. The size of the L2 cache may be 128 KB to 1 MB.
3. L3 Cache: The L3 cache is generally used in high-performance, high-capacity computers. It is built onto
the motherboard. It is slower than L1 and L2, and its maximum size is up to 8 MB.
Disadvantages of Cache Memory
1. It is very costly as compared to the main memory and the secondary memory.
2. It has limited storage capacity.
Register Memory
The register memory is a temporary storage area for storing and transferring data and instructions to the
computer. It is the smallest and fastest memory of a computer. It is a part of computer memory located in the
CPU in the form of registers. Register memory is 16, 32 or 64 bits in size. It temporarily stores data,
instructions, and the memory addresses that are repeatedly used, to provide a faster response to the CPU.
Primary Memory vs Secondary Memory
Primary memory: Data can be accessed directly by the processor or CPU. Stored data may be volatile or
non-volatile. It is more costly than secondary memory. It requires power to retain the data. Examples of
primary memory are RAM, ROM, registers, EPROM, PROM and cache memory.
Secondary memory: Data cannot be accessed directly by the CPU. It is always non-volatile. It is less costly
than primary memory. It does not require power to retain the data. Examples of secondary memory are CD,
DVD, HDD, magnetic tapes, flash disks, pen drives, etc.