
DDR, DDR3, DDR4, DDR5 RAM Architecture



Independent University, Bangladesh (IUB)


School of Engineering and Computer Science
Department of CSE

COMPUTER ORGANIZATION AND ARCHITECTURE


PROJECT REPORT

Submitted By:

Student Name: Tushar Basak

Student ID: 2022315


Section: 3

Project Information:
Project No.: 3
Experiment Name: DDR, DDR3, DDR4 and DDR5 RAM Architecture

Course Name: Computer Organization and Architecture (COA)


Semester: Spring 2023

Course ID: CSE214


Submission Date: 10.05.2023

Submitted To:
Name of faculty: Mohammad Noor Nabi

Report on DDR, DDR3, DDR4 and DDR5 RAM Architecture

Table of Contents

Page Contents

2 Abstract

3 DDR Architecture

4 DDR3 Architecture

5 DDR4 Architecture

6 DDR5 Architecture

7 Comparison

ABSTRACT

One of the most critical system bottlenecks when using high-performance processors is the interface to internal main memory. This interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip, as it has for decades; until recently, there had been no significant changes in DRAM architecture since the early 1970s. The traditional DRAM chip is constrained both by its internal architecture and by its interface to the processor's memory bus.
We have seen that one attack on the performance problem of DRAM main memory has been to
insert one or more levels of high-speed SRAM cache between the DRAM main memory and the
processor. But SRAM is much costlier than DRAM, and expanding cache size beyond a certain
point yields diminishing returns.
In recent years, a number of enhancements to the basic DRAM architecture have been
explored. The schemes that currently dominate the market are SDRAM and DDR-DRAM. We
examine each of these in turn.
One of the most widely used forms of DRAM is the synchronous DRAM (SDRAM). Unlike the
traditional DRAM, which is asynchronous, the SDRAM exchanges data with the processor
synchronized to an external clock signal and running at the full speed of the processor/memory
bus without imposing wait states.
In a typical DRAM, the processor presents addresses and control levels to the memory,
indicating that a set of data at a particular location in memory should be either read from or
written into the DRAM. After a delay, the access time, the DRAM either writes or reads the data.
During the access-time delay, the DRAM performs various internal functions, such as activating
the high capacitance of the row and column lines, sensing the data, and routing the data out
through the output buffers. The processor must simply wait through this delay, slowing system
performance.
With synchronous access, the DRAM moves data in and out under control of the system clock.
The processor or other master issues the instruction and address information, which is latched
by the DRAM. The DRAM then responds after a set number of clock cycles. Meanwhile, the
master can safely do other tasks while the SDRAM is processing the request.
Figure 5.12 shows the internal logic of a 256-Mb SDRAM, typical of SDRAM organization,
and Table 5.3 defines the various pin assignments. The SDRAM employs a burst mode to
eliminate the address setup time and row and column line precharge time after the first access.
In burst mode, a series of data bits can be clocked out rapidly after the first bit has been
accessed. This mode is useful when all the bits to be accessed are in sequence and in the same
row of the array as the initial access. In addition, the SDRAM has a multiple-bank internal
architecture that improves opportunities for on-chip parallelism.
The mode register and associated control logic is another key feature differentiating SDRAMs
from conventional DRAMs. It provides a mechanism to customize the SDRAM to suit specific
system needs. The mode register specifies the burst length, which is the number of separate
units of data synchronously fed onto the bus. The register also allows the programmer to adjust
the latency between receipt of a read request and the beginning of data transfer.
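
As a rough illustration of how such a mode register could be programmed, the short Python sketch below packs a burst length and a CAS latency into a single register word. The field positions and the encode_mode_register helper are illustrative assumptions for this report, not the exact JEDEC encoding.

# Illustrative sketch: packing burst length and CAS latency into a mode-register word.
# The bit positions below are assumptions for illustration, not the exact JEDEC layout.

BURST_LENGTH_CODES = {1: 0b000, 2: 0b001, 4: 0b010, 8: 0b011}   # full-page mode omitted

def encode_mode_register(burst_length, cas_latency, interleaved=False):
    """Pack burst length (bits 0-2), burst type (bit 3) and CAS latency (bits 4-6)."""
    if burst_length not in BURST_LENGTH_CODES:
        raise ValueError("unsupported burst length")
    if cas_latency not in (2, 3):
        raise ValueError("unsupported CAS latency")
    word = BURST_LENGTH_CODES[burst_length]
    word |= (1 if interleaved else 0) << 3
    word |= cas_latency << 4
    return word

# Example: burst length 4 with a CAS latency of 2, the settings used in the Figure 5.13 example.
print(bin(encode_mode_register(burst_length=4, cas_latency=2)))   # -> 0b100010
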
The SDRAM performs best when it is transferring large blocks of data sequentially, such as for
applications like word processing, spreadsheets, and multimedia.
Figure 5.13 shows an example of SDRAM operation. In this case, the burst length is 4 and the
latency is 2. The burst read command is initiated by having CS and CAS low while holding RAS
and WE high at the rising edge of the clock. The address inputs determine the starting column
address for the burst, and the mode register sets the type of burst (sequential or interleave) and
the burst length (1, 2, 4, 8, full page). The delay from the start of the command to when the
data from the first cell appears on the outputs is equal to the value of the CAS latency that is set
in the mode register. Although SDRAM is a significant improvement on asynchronous RAM, it
still has shortcomings that unnecessarily limit the I/O data rate that can be achieved. To
address these shortcomings, a newer version of SDRAM, referred to as double-data-rate DRAM
(DDR DRAM), provides several features that dramatically increase the data rate. DDR DRAM was
developed by the JEDEC Solid State Technology Association, the Electronic Industries Alliance's
semiconductor-engineering-standardization body. Numerous companies make DDR chips,
which are widely used in desktop computers and servers.
DDR achieves higher data rates in three ways. First, the data transfer is synchronized to both
the rising and falling edge of the clock, rather than just the rising edge. This doubles the data
rate; hence the term double data rate. Second, DDR uses a higher clock rate on the bus to
increase the transfer rate. Third, a buffering scheme is used, as explained subsequently.
JEDEC has thus far defined four generations of the DDR technology (Table 5.4). The initial DDR
version makes use of a 2-bit prefetch buffer. The prefetch buffer is a memory cache located on
the SDRAM chip. It enables the SDRAM chip to pre-position bits to be placed on the data bus as
rapidly as possible. The DDR I/O bus uses the same clock rate as the memory chip, but because
it can handle two bits per cycle, it achieves a data rate that is double the clock rate. The 2-bit
prefetch buffer enables the SDRAM chip to keep up with the I/O bus.
To understand the operation of the prefetch buffer, we need to look at it from the point of view
of a word transfer. The prefetch buffer size determines how many words of data are fetched
(across multiple SDRAM chips) every time a column command is performed with DDR
memories. Because the core of the DRAM is much slower than the interface, the difference is
bridged by accessing information in parallel and then serializing it out the interface through a
multiplexor (MUX). Thus, DDR prefetches two words, which means that every time a read or a
write operation is performed, it is performed on two words of data, and bursts out of, or into,
the SDRAM over one clock cycle on both clock edges for a total of two consecutive operations.
As a result, the DDR I/O interface is twice as fast as the SDRAM core.
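
To make the prefetch idea concrete, the following Python sketch models a 2n prefetch in a very simplified way: on each slow core cycle, two adjacent words are fetched in parallel, and a multiplexor then serializes them onto the I/O bus, one word per clock edge. The PrefetchDDR class and its numbers are illustrative assumptions, not a model of any particular device.

# Simplified model of a 2n prefetch buffer: the slow DRAM core fetches two words per
# column command, and a MUX serializes them onto the I/O bus on both clock edges.

class PrefetchDDR:
    def __init__(self, core_array, prefetch=2):
        self.core = core_array        # words stored in the (slow) DRAM core
        self.prefetch = prefetch      # words fetched per column command

    def burst_read(self, start):
        """Fetch `prefetch` adjacent words in parallel, then emit one per bus edge."""
        fetched = self.core[start:start + self.prefetch]    # parallel core access
        for i, word in enumerate(fetched):
            edge = "rising" if i % 2 == 0 else "falling"
            print(f"I/O {edge} edge: word {word:#x}")

ddr = PrefetchDDR(core_array=[0x11, 0x22, 0x33, 0x44])
ddr.burst_read(0)   # two words leave in one bus clock cycle, so the I/O rate is 2x the core rate
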
Although each new generation of SDRAM results in much greater capacity, the core speed of the
SDRAM has not changed significantly from generation to generation. To achieve greater data
rates than those afforded by the rather modest increases in SDRAM clock rate, JEDEC increased
the buffer size. For DDR2, a 4-bit buffer is used, allowing for words to be transferred in parallel,
increasing the effective data rate by a factor of 4. For DDR3, an 8-bit buffer is used and a factor
of 8 speedup is achieved (Figure 5.14).
The downside to the prefetch is that it effectively determines the minimum burst length for the
SDRAMs. For example, it is very difficult to have an efficient burst length of four words with
DDR3’s prefetch of eight. Accordingly, the JEDEC designers chose not to increase the buffer size
to 16 bits for DDR4, but rather to introduce the concept of a bank group [ALLA13]. Bank groups
are separate entities such that they allow a column cycle to complete within a bank group, but
that column cycle does not impact what is happening in another bank group. Thus, two
prefetches of eight can be operating in parallel in the two bank groups. This arrangement keeps
the prefetch buffer size the same as for DDR3, while increasing performance as if the prefetch is
larger.
Figure 5.14 shows a configuration with two bank groups. With DDR4, up to 4 bank groups can
be used.
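
The scheduling benefit of bank groups can be sketched in a few lines of Python. In the hedged model below, back-to-back column commands to the same bank group must be spaced by a long gap, while commands that alternate between two groups can be issued with a shorter gap, so two prefetches overlap. The gap values are arbitrary placeholders, not JEDEC timing parameters.

# Illustrative-only model of bank groups: commands to the SAME group need a long gap,
# while commands that alternate BETWEEN groups can follow each other more quickly.

def issue_times(groups, gap_same=8, gap_diff=4):
    """groups: the bank-group id of each column command, in issue order."""
    times, prev_group, t = [], None, 0
    for g in groups:
        if prev_group is not None:
            t += gap_same if g == prev_group else gap_diff
        times.append(t)
        prev_group = g
    return times

print(issue_times([0, 0, 0, 0]))   # [0, 8, 16, 24] -> a single group limits the command rate
print(issue_times([0, 1, 0, 1]))   # [0, 4, 8, 12]  -> interleaving two groups overlaps prefetches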

DDR4 SDRAM is the abbreviation for “double data rate fourth generation synchronous dynamic
random-access memory,” the latest variant of memory in computing. The primary advantages of
DDR4 over its predecessor, DDR3, include higher module density and lower voltage
requirements, coupled with higher data rate transfer speeds. The DDR4 standard allows for
DIMMs of up to 64 GiB in capacity, compared to DDR3's maximum of 16 GiB per DIMM. Unlike
previous generations of DDR memory, prefetch has not been increased above the 8n used in
DDR3; the basic burst size is eight words, and higher bandwidths are achieved by sending more
read/write commands per second. To allow this, the standard divides the DRAM banks into two
or four selectable bank groups, where transfers to different bank groups may be done more
rapidly. Because power consumption increases with speed, the reduced voltage allows higher
speed operation without unreasonable power and cooling requirements.

DDR4 operates at a voltage of 1.2 V with a frequency between 800 and 1600 MHz (DDR4-1600
through DDR4-3200), compared to frequencies between 400 and 1067 MHz and voltage
requirements of 1.5 V for DDR3. Due to the nature of DDR, speeds are typically advertised as
doubles of these numbers (DDR3-1600 and DDR4-2400 are common, with DDR4-3200, DDR4-
4800 and DDR4-5000 available at high cost). Unlike DDR3's 1.35 V low voltage standard DDR3L,
there is no DDR4L low voltage version of DDR4.
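
The relationship between these clock frequencies and the advertised speed grades can be checked with a short calculation. The Python sketch below doubles the I/O clock (one transfer on each clock edge) to obtain the MT/s rating, then multiplies by the standard 64-bit module bus to estimate the theoretical peak bandwidth; the PC4-xxxxx names follow the usual module naming convention.

# Peak transfer rate and bandwidth of a DDR4 module: two transfers per I/O clock cycle
# over a 64-bit (8-byte) data bus. These are theoretical peak figures per module.

BUS_WIDTH_BYTES = 8                 # standard 64-bit DIMM data bus

def ddr4_stats(io_clock_mhz):
    transfers = io_clock_mhz * 2    # MT/s: both clock edges carry data
    bandwidth = transfers * BUS_WIDTH_BYTES   # MB/s
    return transfers, bandwidth

for clk in (800, 1200, 1600):       # the I/O clock range cited above
    mts, mbs = ddr4_stats(clk)
    print(f"{clk} MHz I/O clock -> DDR4-{mts}, about {mbs / 1000:.1f} GB/s peak (PC4-{mbs})")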

DDR SDRAM is a stack of acronyms. Double Data Rate (DDR) Synchronous Dynamic Random
Access Memory (SDRAM) is a common type of memory used as RAM for almost every modern
processor. First on the scene of this stack of acronyms was Dynamic Random-Access Memory
(DRAM), introduced in the 1970s. DRAM is not regulated by a clock; it is asynchronous, i.e.,
not synchronized by any external influence. This made it difficult to organize incoming data
and queue it for the process it is associated with. Because DRAM was asynchronous, it
could not keep pace with processors that were steadily getting faster.

SDRAM is synchronous, and therefore relies on a clock to synchronize signals, creating


predictable orderly cycles of data fetches and writes. However, SDRAM transfers data on one
edge of the clock. DDR SDRAM means that this type of SDRAM fetches data on both the leading
edge and the falling edge of the clock signal that regulates it, thus the name “Double Data
Rate.” Prior to DDR, RAM would fetch data only once per clock cycle. Synchronous data lends
itself to faster operation when coordinating memory fetches with the processor’s requirements.

Many people refer to a processor’s RAM as simply “DDR”, using the terms interchangeably
because DDR is so widely used as CPU RAM and has been since the late 1990s. DDR is not flash
memory like the kind that is used for Solid State Drives (SSDs), Secure Digital (SD) cards, or
Universal Serial Bus (USB) drives. DDR memory is volatile, which means that it loses everything
once power is removed.

This may seem like a detriment, but the trade-off is that DDR has much faster transfer rates than
other memory products, as well as a high capacity. DDR SDRAM, the ubiquitous choice for a
processor's working memory, or RAM, has improved over the years as the industry has
progressed from DDR to DDR2, DDR3, and now DDR4 SDRAM (see Table 1). DDR2 through DDR4
evolved to require lower supply voltages, which generally saves power. Other changes were
made to increase the speed, as well. DDR2 SDRAM was reduced to operating at a voltage of 1.8
volts, and a clock multiplier was added to the memory module to again double data transfer
speeds while operating at the same bus speed. DDR3 RAM integrated a 4x clock multiplier, again
doubling the memory transfer rate for the same bus speed.

In addition to a steady decrease in operating voltage and power consumption, DDR also became
denser as more transistors were packed into a smaller area. DDR SDRAM is packaged as an
integrated chip module, such as the Dual In-Line Memory Module (DIMM) used with
desktop computers. A DIMM is a small PCB populated with SDRAM chips. Before DIMM, we had
Single In-Line Memory Modules (SIMMs), which were used in the 1980s and 1990s. DIMM modules
carry DDR SDRAM for upgrading RAM on a PC.

This article discusses the more modern versions of volatile RAM, including general points on
the DDR SDRAM standard and how it evolved. Before DRAM, there was the also-volatile SRAM
(Static Random Access Memory). The fundamental differences between DRAM and SRAM were
covered in an earlier post. More on non-volatile memory can be found in an earlier post,
"Embedded use of NAND and NOR flash memory is evolving."

DDR5 SDRAM is the next standard proposed to double the speed of DDR4 SDRAM. According to
the JEDEC Solid State Technology Association, the standard-bearer for DDR SDRAM, “The JEDEC
DDR5 standard is currently in development in JEDEC’s JC-42 Committee for Solid State
Memories. JEDEC DDR5 will offer improved performance with greater power efficiency as
compared to previous generation DRAM technologies. As planned, DDR5 will provide double
the bandwidth and density over DDR4, along with delivering improved channel efficiency.”
DDR5 SDRAM is forecasted for 2018.

SDRAM (Synchronous Dynamic Random Access Memory):


"Synchronous" tells about the behaviour of the DRAM type. In late 1996, SDRAM began to
appear in systems. Unlike previous technologies, SDRAM is designed to synchronize itself with
the timing of the CPU. This enables the memory controller to know the exact clock cycle when
the requested data will be ready, so the CPU no longer has to wait between memory accesses.
For example, PC66 SDRAM runs at 66 MT/s, PC100 SDRAM runs at 100 MT/s, PC133 SDRAM
runs at 133 MT/s, and so on.

SDRAM can stand for SDR SDRAM (Single Data Rate SDRAM), where the I/O, internal clock and
bus clock are the same. For example, the I/O, internal clock and bus clock of PC133 are all 133
MHz. Single Data Rate means that SDR SDRAM can only read/write once per clock cycle.
SDR SDRAM has to wait for the completion of the previous command before it can do another
read/write operation.
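
For a sense of scale, the short Python sketch below estimates the theoretical peak bandwidth of these SDR modules, assuming the standard 64-bit (8-byte) DIMM data bus and one transfer per clock cycle.

# Peak bandwidth of SDR SDRAM modules: one transfer per clock over a 64-bit bus.
# Theoretical peak figures only, assuming a standard 8-byte-wide DIMM.

BUS_WIDTH_BYTES = 8

for name, clock_mhz in (("PC66", 66), ("PC100", 100), ("PC133", 133)):
    mt_s = clock_mhz * 1                  # SDR: one transfer per clock cycle
    peak_mb_s = mt_s * BUS_WIDTH_BYTES    # megatransfers/s times bytes per transfer
    print(f"{name}: {mt_s} MT/s, about {peak_mb_s} MB/s peak")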

DDR SDRAM (Double Data Rate SDRAM):


The next generation of SDRAM is DDR, which achieves greater bandwidth than the preceding
single data rate SDRAM by transferring data on the rising and falling edges of the clock signal
(double pumped). Effectively, it doubles the transfer rate without increasing the frequency of
the clock. The transfer rate of DDR SDRAM is thus double that of SDR SDRAM without changing the
internal clock. In DDR SDRAM, the first generation of DDR memory, the prefetch buffer is 2 bits,
double that of SDR SDRAM. The transfer rate of DDR is between 266 and 400 MT/s;
DDR266 and DDR400 are of this type.

DDR2 SDRAM (Double Data Rate Two SDRAM):


Its primary benefit is the ability to operate the external data bus twice as fast as DDR SDRAM.
This is achieved by an improved bus signal. The prefetch buffer of DDR2 is 4 bits (double that of DDR
SDRAM). DDR2 memory runs at the same internal clock speed (133~200 MHz) as DDR, but the
transfer rate of DDR2 can reach 533~800 MT/s with the improved I/O bus signal. DDR2 533 and
DDR2 800 memory types are on the market.

DDR3 SDRAM (Double Data Rate Three SDRAM):


DDR3 memory reduces power consumption by about 40% compared to DDR2 modules, allowing
for lower operating currents and voltages (1.5 V, compared to DDR2's 1.8 V or DDR's 2.5 V). The
transfer rate of DDR3 is 800~1600 MT/s. DDR3's prefetch buffer width is 8 bits, whereas
DDR2's is 4 bits and DDR's is 2 bits. DDR3 also adds two functions, ASR (Automatic Self-
Refresh) and SRT (Self-Refresh Temperature), which let the memory control the refresh
rate according to temperature variation.

DDR4 SDRAM (Double Data Rate Fourth Generation SDRAM):


DDR4 SDRAM provides a lower operating voltage (1.2 V) and a higher transfer rate. The transfer
rate of DDR4 is 2133~3200 MT/s. DDR4 adds bank group technology, with up to four bank
groups that can each operate independently. DDR4 can therefore process four data accesses
within a clock cycle, so DDR4's efficiency is clearly better than DDR3's. DDR4 also adds
functions such as DBI (Data Bus Inversion), CRC (Cyclic Redundancy Check) and CA parity,
which enhance DDR4 memory's signal integrity and improve the stability of data transmission/access.
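
The figures quoted in the sections above follow a simple pattern: the internal core clock stays in roughly the same range from generation to generation, while the prefetch depth doubles, multiplying the external transfer rate. The Python sketch below reproduces that relationship as a back-of-the-envelope check, assuming a core clock somewhere around 100~200 MHz; DDR4 is handled separately because it keeps DDR3's 8n prefetch and instead relies on a faster I/O clock and bank groups, as described above.

# Rough check of how prefetch depth scales the external transfer rate, assuming a core
# clock in the ballpark of 100-200 MHz for every generation. Approximate figures only.

PREFETCH = {"SDR": 1, "DDR": 2, "DDR2": 4, "DDR3": 8}

for gen, depth in PREFETCH.items():
    low, high = 100 * depth, 200 * depth
    print(f"{gen}: {depth}n prefetch -> roughly {low}~{high} MT/s")

# DDR4 keeps the 8n prefetch; its 2133~3200 MT/s comes from a faster I/O clock plus bank
# groups that allow more read/write commands per second, not from a deeper prefetch.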

DDR itself stands for Double Data Rate. Technically, when you’re referring to a RAM stick, the
full nomenclature is Double Data Rate Synchronous Dynamic Random-Access Memory.
However, we abbreviate that to DDR SDRAM.

DDR (or DDR1), of course, was the first generation of upgraded RAM. Before, RAM was Single
Data Rate or SDR. From there, DDR2, DDR3, and DDR4 were developed over time. Each
generation works more quickly and efficiently than the generations before it. DDR5, of course, is
in the works as the next generation of RAM.

Right now, most modern computers are built to use DDR4 RAM, and they're not
backward-compatible. As such, if you have an older motherboard that uses DDR3 RAM, you
cannot install DDR4 RAM in it successfully without replacing the entire motherboard. In the
same way, most modern motherboards can’t make use of DDR3 or DDR2 RAM, either.

To look at the evolution of DDR RAM in more specific numbers, have a look at the table below.

Of course, we’ll only genuinely be concerned with DDR3, DDR4, and DDR5 RAM in this article, as
the other RAM protocols are functionally obsolete for gaming. While DDR4 RAM is the most
common now, you may still see some functional computers that utilize DDR3 RAM.

While the technology for DDR5 RAM is available today, it’s only been developed by a handful of
companies. Additionally, it will take a while longer for Intel and AMD to release motherboards
that will support the DDR5 protocol.

The Differences Between DDR3, DDR4, And DDR5


If you look at the table above, you’ll see that the performance of every iteration of the DDR
protocol sees significant performance increases over the last generation. Some of the stats, like
bandwidth, can almost double from one generation to the next, while others, such as voltage,
still improve, but suffer from diminishing returns.

Each of the parameters we mentioned in the table above has a different purpose. We’ll delve
deeper into these parameters below.

Bandwidth

The bandwidth of your computer's memory is calculated using several different parameters. However, in
its simplest terms, the higher the speed of your RAM (in MHz), the more data it can move per second.
However, do keep in mind that DDR4 RAM will almost always be faster than RAM from any
previous generation.

Additionally, keep in mind that the size of your RAM makes a difference, too. More RAM will
almost always be better than faster RAM. For example, if you have 32GB of RAM at 2400 MHz, it
will, in most cases, work better than 16GB of RAM at 3600 MHz.
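
To put numbers on the speed side of that trade-off, the Python sketch below compares the theoretical peak bandwidth of those two example kits, assuming a single 64-bit channel and treating the advertised MHz figures as MT/s ratings. Capacity, of course, is a separate consideration from bandwidth, and real-world performance depends on the workload.

# Theoretical peak bandwidth of the two example kits over one 64-bit memory channel,
# treating the advertised figures as MT/s ratings. Capacity is listed separately.

BUS_WIDTH_BYTES = 8   # one 64-bit memory channel

kits = [
    {"capacity_gb": 32, "rate_mt_s": 2400},
    {"capacity_gb": 16, "rate_mt_s": 3600},
]

for kit in kits:
    peak_gb_s = kit["rate_mt_s"] * BUS_WIDTH_BYTES / 1000
    print(f'{kit["capacity_gb"]} GB at {kit["rate_mt_s"]} MT/s -> about {peak_gb_s:.1f} GB/s peak')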

When the current DDR4 RAM capabilities came on the market, they were too expensive to
warrant purchasing over DDR3 RAM for most everyday users for some time. Most likely, as
DDR5 begins to reach everyday consumers, this will also be the case.

Voltage
As a general rule, the higher the voltage on RAM, the better it will perform (within a generation,
of course). Higher-voltage RAM will also produce more heat. However, it’s essential that the
voltage of your RAM matches up with your motherboard.

If your motherboard supports voltage changes, you may have a broader range of RAM to choose
from, but when in doubt, select RAM that follows the restrictions of your motherboard. Some
RAM protocols also come in low voltage or ultra-low voltage options to give you more choices.

Prefetch

You may have already heard the term prefetching as a computer science term. However, where
RAM is concerned, the word “prefetch” refers to Prefetch Architecture. RAM with a prefetch
buffer size of 2n will access memory two times faster than SDRAM, which has a prefetch buffer
size of 1n (or one unit of data).

Essentially, when SDRAM reads data, it reads one unit of data at a time. However, DDR1 RAM,
which has a prefetch buffer of 2n, reads two units of data at a time. The RAM is reading two
units of data that are adjacent to each other because it assumes that the CPU will need that
data. Practically, this is usually the case.

Predictably, the higher the prefetch buffer of your RAM, the more data it will read in one pass.
Reading more data in one pass, even if it ends up being mostly data that your computer doesn’t
need, is much more efficient than taking a second pass to read it.

As such, DDR4 RAM operates eight times faster than SDRAM, since it has a prefetch buffer of 8n.
DDR5 RAM, on the other hand, has the potential to be sixteen times faster than SDRAM. DDR3
RAM uses the same prefetch buffer size as DDR4 RAM.

Size

One parameter we didn’t list in our table above is the size. Believe it or not, DDR3, DDR4, and
DDR5 RAM modules are all keyed and shaped differently. This is done intentionally to ensure that users don't install
the wrong type of memory in a specific motherboard, of course.

However, this means that if you have already purchased your motherboard, you will need to
make sure the RAM you want to buy is compatible with that motherboard.

Latency
Latency is another parameter that we didn’t list in our table above. Latency is used to calculate
the bandwidth of a specific RAM chip.

However, we didn’t list this because the latency between DDR3 and DDR4 chips will be mostly
unnoticeable to regular users.

While most DDR4 chips have slightly higher latency than comparable DDR3 chips, other
advances in performance tend to outweigh this fact.

Conclusion
Essentially, there are a few things to keep in mind when deciding between DDR3, DDR4, and
DDR5 RAM. If you’d like to hold out for DDR5, you may need to wait a while for it to become
economical and stable, but it will have predictably substantial performance advantages over
DDR3 and DDR4.

If you have an older computer, you may be restricted to DDR3 RAM by default. However, unless
you’re specifically looking for low-latency RAM, there is no other reason to search out DDR3
RAM. DDR4 RAM is your most viable option and will probably remain so for several years.

DDR: https://en.wikipedia.org/wiki/DDR_SDRAM

DDR3: https://en.wikipedia.org/wiki/DDR3_SDRAM

DDR4: https://en.wikipedia.org/wiki/DDR4_SDRAM

DDR5: https://en.wikipedia.org/wiki/DDR5_SDRAM
