RAM and ROM Calculations
Memory is, in some ways, like a human brain: it is used to store data and instructions.
Computer memory is the storage space in a computer where the data to be processed
and the instructions required for processing are stored. Memory is divided into a
large number of small parts called cells. Each location or cell has a unique address,
which ranges from zero to the memory size minus one. For example, if a computer has
64K words, then its memory unit has 64 * 1024 = 65,536 memory locations, and the
addresses of these locations range from 0 to 65,535.
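As a quick check of this arithmetic, the short Python sketch below computes the number of locations and the valid address range for a memory described in "K" words; the function name and parameter are illustrative, not taken from the text.

    # Illustrative helper: locations and address range for a memory of N "K" words.
    def address_range(size_in_k_words: int) -> tuple[int, int, int]:
        locations = size_in_k_words * 1024   # e.g. 64 * 1024 = 65536
        lowest = 0                           # first address
        highest = locations - 1              # last address, e.g. 65535
        return locations, lowest, highest

    print(address_range(64))   # (65536, 0, 65535)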
Memory circuits can broadly be separated into two major groups: dynamic
memories that store data for use in a computer system (such as the RAM in a PC),
and static memories that store information defining the operating state of a
digital system.
Basic Concepts
The maximum size of the memory that can be used in any computer is determined
by the addressing scheme.
Main Memory (RAM)
If we were to sum all the bits of all the registers within the CPU, the total amount of
memory would probably not exceed 5,000 bits. Most computational tasks
undertaken by a computer require far more memory than this. Main memory is the
next fastest memory within a computer and is much larger in size.
Typical main memory capacities for different kinds of computers are: PC
512 MB, file server 4 GB, database server 8 GB.
Computer architectures also impose an architectural constraint on the
maximum allowable RAM. This constraint is normally equal to 2^(word size)
memory locations.
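For example, a machine with a 32-bit word size can address at most 2^32 = 4,294,967,296 locations, or 4 GiB if each location holds one byte.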
RAM (Random Access Memory) is the most common form of Main
Memory. RAM is normally located on the motherboard and so is typically
less than 12 inches from the CPU.
ROM (Read Only Memory) is like RAM except that its contents cannot be
overwritten and its contents are not lost if power is turned off (ROM is non-
volatile).
Although slower than register memory, the contents of any location in RAM can
still be "read" or "written" very quickly. The time to read or write is referred to
as the access time and is constant for all RAM locations.
In contrast to register memory, RAM is used to hold both program code
(instructions) and data (numbers, strings, etc.). Programs are "loaded" into
RAM from a disk prior to execution by the CPU.
Locations in RAM are identified by an addressing scheme, e.g. numbering
the bytes in RAM from 0 onwards. Like registers, the contents of RAM
are lost if the power is turned off.
Memory is primarily of three types:
1) Primary Memory/Main Memory
2) Cache Memory
3) Secondary Memory
Primary Memory (Main Memory)
Primary memory holds only the data and instructions on which the computer is
currently working. It has limited capacity, and data is lost when the power is switched
off. It is generally made up of semiconductor devices. These memories are not as
fast as registers. The data and instructions required for processing reside in main
memory.
The maximum size of the memory in any computer is determined by the number of
address lines provided by the processor used in the computer. For example, if a processor has
20 address lines, it is capable of addressing 2^20 = 1M (mega) memory locations.
The maximum number of bits that can be transferred to or from the memory at a time depends
on the number of data lines supported by the processor. From the system standpoint, the
memory unit is viewed as a black box. Data transfer between the memory and the
processor takes place through two processor registers, AR (Address Register)
and DR (Data Register).
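To make the address-line arithmetic concrete, here is a minimal Python sketch that computes the number of addressable locations from the number of address lines, and the total capacity given the width of each location; the function name and the 20/8 figures are illustrative assumptions.

    # Illustrative calculation: addressable locations and total capacity.
    def memory_capacity(address_lines: int, data_lines: int) -> tuple[int, int]:
        locations = 2 ** address_lines          # e.g. 2**20 = 1,048,576 (1M)
        total_bits = locations * data_lines     # each location is data_lines wide
        return locations, total_bits

    locs, bits = memory_capacity(address_lines=20, data_lines=8)
    print(locs)        # 1048576 locations (1M)
    print(bits // 8)   # 1048576 bytes = 1 MB when each location is 8 bits wide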
The processor writes data into a memory location by loading the address of
that location into AR and loading the data into DR. Random access memory
(RAM) is the best-known form of computer memory. RAM is considered "random
access" because you can access any memory cell directly if you know the row and
column that intersect at that cell; in other words, RAM data can be accessed in
any order.
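The following short Python sketch illustrates, under assumed names (MemoryUnit, write, read are not from the original text), how a write travels through the two registers: the address goes into AR, the data into DR, and the memory cell selected by AR is then updated from DR.

    # Toy model of the AR/DR transfer described above (names are assumptions).
    class MemoryUnit:
        def __init__(self, locations: int):
            self.cells = [0] * locations
            self.AR = 0   # Address Register
            self.DR = 0   # Data Register

        def write(self, address: int, data: int) -> None:
            self.AR = address                # load the address into AR
            self.DR = data                   # load the data into DR
            self.cells[self.AR] = self.DR    # memory stores the word held in DR

        def read(self, address: int) -> int:
            self.AR = address                # load the address into AR
            self.DR = self.cells[self.AR]    # memory places the word into DR
            return self.DR

    mem = MemoryUnit(65536)
    mem.write(0x1234, 42)
    print(mem.read(0x1234))   # 42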
RAM memory consists of memory cells. Each memory cell represents a single bit
of data (logic 1 or logic 0). Memory cells are etched onto a silicon wafer in an
array of columns (bit lines) and rows (word lines). The intersection of a bit line
and word line constitutes the address of the memory cell.
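As a rough illustration of how an address picks out one cell in such an array, the sketch below splits a cell address into a row (word-line) number and a column (bit-line) number; the bit widths and names are assumptions chosen for the example, not figures from the text.

    # Illustrative address decode for an assumed 256 x 256 cell array.
    ROW_BITS = 8
    COL_BITS = 8

    def decode(address: int) -> tuple[int, int]:
        row = address >> COL_BITS              # upper bits select the word line
        col = address & ((1 << COL_BITS) - 1)  # lower bits select the bit line
        return row, col

    print(decode(0x1234))   # (18, 52): word line 18, bit line 52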
There are many kinds of RAM and new ones are invented all the time. One aim is
to make RAM access as fast as possible in order to keep up with the increasing
speed of CPUs.
Characteristics of Main Memory
ROM was traditionally used to store the "boot" or start-up program (so-called firmware) that a
computer executes when powered on, although for this purpose it has now largely given way to
more flexible memories that support occasional writes. ROM is still used in
systems with fixed functionality.
Cache memory is a very high-speed semiconductor memory which can speed up the
CPU. It acts as a buffer between the CPU and main memory. It is used to hold
those parts of the data and program which are most frequently used by the CPU. These parts
of the data and programs are transferred from disk to cache memory by the operating
system, from where the CPU can access them.
First-generation processors, those designed with vacuum tubes in 1950, with
integrated circuits in 1965, or as microprocessors in 1980, were generally about the
same speed as main memory, so on such processors treating memory as if it ran at
processor speed was perfectly reasonable. By 1970, however, transistorized
supercomputers were being built whose central processors were significantly
faster than their main memory, and by 1980 the difference had increased further, although
it took several decades for the performance gap to reach today's extreme.
The solution to this problem is to place what is called a cache memory between the
central processor and the main memory. Cache memory takes advantage of the fact
that, with any of the memory technologies available over the past half century, we
have had to choose between building large but slow memories and small but fast
memories.
A cache memory sits between the central processor and the main memory. During
any particular memory cycle, the cache checks the memory address being issued
by the processor. If this address matches the address of one of the few memory
locations held in the cache, the cache handles the memory cycle very quickly; this
is called a cache hit. If the address does not match, the memory cycle must be
satisfied far more slowly by the main memory; this is called a cache miss.
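A minimal sketch of this hit/miss check, using invented names (Cache, access) and a tiny cache that holds only a handful of addresses, might look like this in Python:

    # Toy cache: holds a few addresses and reports hit or miss (names assumed).
    class Cache:
        def __init__(self, capacity: int = 4):
            self.capacity = capacity
            self.lines = []                  # addresses currently held

        def access(self, address: int) -> str:
            if address in self.lines:
                return "hit"                 # fast path: cache satisfies the cycle
            if len(self.lines) == self.capacity:
                self.lines.pop(0)            # evict the oldest line (simple FIFO)
            self.lines.append(address)       # fetch the block from main memory
            return "miss"

    cache = Cache()
    for addr in [0x10, 0x20, 0x10, 0x30]:
        print(hex(addr), cache.access(addr))   # miss, miss, hit, miss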
The basic characteristic of cache memory is its fast access time; therefore, very
little or no time must be wasted when searching for words in the cache.
The speed of main memory is very low in comparison with the speed of
modern processors, hence it is important to devise a scheme that reduces the time
needed to access the necessary information.
Since the speed of the main memory unit is limited by electronic and packaging
constraints, the solution must be sought in a different architectural
arrangement.
An efficient solution is to use a fast cache memory, which essentially makes
the main memory appear to the processor to be faster than it really is.
Usually, the cache can store a reasonable number of blocks at any
given time, but this number is small compared to the total number of blocks
in the main memory. The correspondence between the main memory blocks
and those in the cache is specified by a mapping function.
When the cache is full and a memory word that is not in the cache is referenced,
the cache control hardware must decide which block should be removed to create
space for the new block that contains the referenced word. The collection of rules
for making this decision constitutes the replacement algorithm. The mapping function itself
commonly takes one of three forms:
1. Associative Mapping
2. Direct Mapping
3. Set-Associative Mapping.
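As a concrete illustration of one of these schemes, the sketch below shows how a direct-mapped cache could derive a cache line index and a tag from a word address; the block size, number of lines, and function name are assumptions chosen for the example, not figures from the text.

    # Direct mapping sketch: word address -> (block, line index, tag). Parameters assumed.
    NUM_LINES = 128          # cache lines available
    BLOCK_SIZE = 16          # words per block

    def map_address(word_address: int) -> tuple[int, int, int]:
        block = word_address // BLOCK_SIZE   # which main-memory block holds the word
        index = block % NUM_LINES            # the one cache line this block may occupy
        tag = block // NUM_LINES             # identifies which block currently sits there
        return block, index, tag

    print(map_address(5000))   # block 312 maps to line 56 with tag 2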
Speed, Size and Cost
The processor fetches code and data from the main memory to execute the
program. The DRAMs which form the main memory are slower devices, so it is
necessary to insert wait states in memory read/write cycles, which reduces the speed
of execution. The solution to this problem is to add a small section of SRAM to the
memory system alongside the main memory; this is referred to as cache memory. The
program to be executed is loaded into the main memory, but the currently active parts of the
program and data are accessed from the cache memory. The cache controller looks
after this swapping between main memory and cache memory with the help of a
DMA controller; such cache memory is called secondary cache. Recent processors
have a built-in cache memory called the primary cache. The size of the memory is
still small compared to the demands of large programs with voluminous
data. A solution is provided by using secondary storage, mainly magnetic disks and
magnetic tapes, to implement large memory spaces, which is available at
reasonable prices.