Module-4 Data storage
Spring 2018
Based on material and images from the “Database System Concepts” book and slides, 6th edition
Physical Storage Media
● Magnetic-disk
− Data is stored on spinning disk, and read/written magnetically
− Primary medium for the long-term storage of data; typically stores entire
database.
− Data must be moved from disk to main memory for access, and written back
for storage
● Much slower access than main memory (more on this later)
− direct-access – possible to read data on disk in any order, unlike magnetic
tape
− Capacities range up to roughly 1.5 TB as of 2009
● Much larger capacity and much lower cost per byte than main memory/flash memory
● Growing constantly and rapidly with technology improvements (factor of 2
to 3 every 2 years)
− Survives power failures and system crashes
● disk failure can destroy data, but is rare
Physical Storage Media
● Optical storage
− non-volatile, data is read optically from a spinning
disk using a laser
− CD-ROM (640 MB) and DVD (4.7 to 17 GB) most
popular forms
− Blu-ray disks: 27 GB to 54 GB
− Write-once, read-many (WORM) optical disks used
for archival storage (CD-R, DVD-R, DVD+R)
− Multiple write versions also available (CD-RW,
DVD-RW, DVD+RW, and DVD-RAM)
− Reads and writes are slower than with magnetic
disk
Physical Storage Media
● Tape storage
− non-volatile, used primarily for backup (to
recover from disk failure), and for archival data
− sequential-access – much slower than disk
− very high capacity (40 to 300 GB tapes
available)
− tape can be removed from drive
− storage costs much cheaper than disk, but drives
are expensive
− Tape jukeboxes available for storing massive
amounts of data
● hundreds of terabytes (1 terabyte = 10^12 bytes) to even
multiple petabytes (1 petabyte = 10^15 bytes)
Storage Hierarchy
● primary storage: Fastest media but
volatile (cache, main memory).
● secondary storage: next level in
hierarchy, non-volatile, moderately fast
access time
− also called on-line storage
− E.g. flash memory, magnetic disks
● tertiary storage: lowest level in
hierarchy, non-volatile, slow access
time
− also called off-line storage
− E.g. magnetic tape, optical storage
Lecture Outline
• Overview of Physical Storage Media
• Magnetic Disk and Flash Storage
• RAID
• File Organization
Magnetic Hard Disk Mechanism
Magnetic Disks
● Earlier generation disks were susceptible to head-crashes
− Surface of earlier generation disks had metal-oxide
coatings which would disintegrate on head crash and
damage all data on disk
− Current generation disks are less susceptible to such
disastrous failures, although individual sectors may get
corrupted
Magnetic Disks
● Disk controller – interfaces between the computer system
and the disk drive hardware.
− accepts high-level commands to read or write a sector
− initiates actions such as moving the disk arm to the right
track and actually reading or writing the data
− Computes and attaches checksums to each sector to
verify that data is read back correctly
● If data is corrupted, with very high probability the stored
checksum won’t match the recomputed checksum (see the sketch after this list)
− Ensures successful writing by reading back sector after
writing it
− Performs remapping of bad sectors
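To make the checksum idea concrete, here is a minimal sketch in Python (illustrative only; real controllers use error-correcting codes implemented in firmware, not CRC-32 in software):

import zlib

def write_sector(data: bytes) -> bytes:
    # Append a CRC-32 checksum to the sector payload before storing it
    checksum = zlib.crc32(data)
    return data + checksum.to_bytes(4, "big")

def read_sector(stored: bytes) -> bytes:
    # Recompute the checksum and compare against the stored one
    data, expected = stored[:-4], int.from_bytes(stored[-4:], "big")
    if zlib.crc32(data) != expected:
        raise IOError("sector corrupted: checksum mismatch")
    return data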
Disk Subsystem
● Disks usually connected directly to
computer system
● In Storage Area Networks (SAN), a large
number of disks are connected by a
high-speed network to a number of servers
● In Network Attached Storage (NAS), networked
storage provides a file system
interface using networked file system
protocol, instead of providing a disk system
interface
Performance Measures of Disks
● Access time – the time it takes from when a read or write request is issued
to when data transfer begins. Consists of:
− Seek time – time it takes to reposition the arm over the correct track.
● Average seek time is 1/2 the worst case seek time.
− Would be 1/3 if all tracks had the same number of sectors, and
we ignore the time to start and stop arm movement
● 4 to 10 milliseconds on typical disks
− Rotational latency – time it takes for the sector to be accessed to
appear under the head.
● Average latency is 1/2 of the worst case latency.
● 4 to 11 milliseconds on typical disks (5400 to 15000 r.p.m.)
● Data-transfer rate – the rate at which data can be retrieved from or stored
to the disk.
− 25 to 100 MB per second max rate, lower for inner tracks
− Multiple disks may share a controller, so rate that controller can handle
is also important
● E.g. SATA: 150 MB/sec, SATA-II 3Gb (300 MB/sec)
● Ultra 320 SCSI: 320 MB/s, SAS (3 to 6 Gb/sec)
● Fibre Channel (FC 2 Gb or 4 Gb): 256 to 512 MB/s
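Putting these measures together, a back-of-the-envelope estimate of the time to service one random block read, using assumed but typical values from the ranges above:

# Assumed, typical values (illustrative only)
avg_seek_ms = 5.0                          # average seek time
rpm = 7200
avg_rot_latency_ms = 0.5 * 60_000 / rpm    # half a rotation ~ 4.17 ms
transfer_rate_mb_s = 50.0
block_kb = 4

transfer_ms = block_kb / 1024 / transfer_rate_mb_s * 1000
access_ms = avg_seek_ms + avg_rot_latency_ms + transfer_ms
print(f"~{access_ms:.2f} ms per random 4 KB block read")
# Seek and rotation dominate: the 4 KB transfer itself takes well under 0.1 ms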
Optimization of Disk-Block Access
● Block – a contiguous sequence of sectors from a single track
− data is transferred between disk and main memory in blocks
− sizes range from 512 bytes to several kilobytes
● Smaller blocks: more transfers from disk
● Larger blocks: more space wasted due to partially filled
blocks
● Typical block sizes today range from 4 to 16 kilobytes
● Disk-arm-scheduling algorithms order pending accesses to tracks
so that disk arm movement is minimized
− elevator algorithm: move the disk arm in one direction (from outer to
inner tracks or vice versa), processing the next pending request in that
direction until no more requests remain in that direction, then reverse
direction and repeat (see the sketch below)
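A minimal sketch of the elevator idea in Python (the track numbers in the example are made up for illustration):

def elevator(head: int, requests: list[int]) -> list[int]:
    # Service all tracks at or ahead of the head in one sweep,
    # then reverse and sweep back for the remaining tracks
    up = sorted(t for t in requests if t >= head)
    down = sorted((t for t in requests if t < head), reverse=True)
    return up + down

# Pending requests on tracks, head currently at track 50:
print(elevator(50, [95, 180, 34, 119, 11, 123, 62, 64]))
# -> [62, 64, 95, 119, 123, 180, 34, 11]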
Optimization of Disk-Block Access
● File organization – optimize block
access time by organizing the blocks to
correspond to how data will be accessed
− E.g. Store related information on the same
or nearby cylinders.
− Files may get fragmented over time
● E.g. if data is inserted to/deleted from the file
● Or free blocks on disk are scattered, and newly
created file has its blocks scattered over the disk
● Sequential access to a fragmented file results in
increased disk arm movement
− Some systems have utilities to defragment
the file system, in order to speed up file
access
Optimization of Disk-Block Access
● Nonvolatile write buffers speed up disk writes by writing blocks to a non-volatile RAM
buffer immediately
− Non-volatile RAM: battery backed up RAM or flash memory
● Even if power fails, the data is safe and will be written to disk when power returns
− Controller then writes to disk whenever the disk has no other requests or request has
been pending for some time
− Database operations that require data to be safely stored before continuing can
continue without waiting for data to be written to disk
− Writes can be reordered to minimize disk arm movement
● Log disk – a disk devoted to writing a sequential log of block updates
− Used exactly like nonvolatile RAM
● Write to log disk is very fast since no seeks are required
● No need for special hardware (NV-RAM)
● File systems typically reorder writes to disk to improve performance
− Journaling file systems write data in safe order to NV-RAM or log disk
− Reordering without journaling: risk of corruption of file system data
Flash Storage
● NOR flash vs NAND flash
● NAND flash
− used widely for storage, since it is much cheaper
than NOR flash
− requires page-at-a-time read (page: 512 bytes to 4
KB)
− transfer rate around 20 MB/sec
Lecture Outline
• Overview of Physical Storage Media
• Magnetic Disk and Flash Storage
• RAID
• File Organization
RAID
● RAID: Redundant Arrays of Independent Disks
− disk organization techniques that manage a large number of disks, providing a
view of a single disk of
● high capacity and high speed by using multiple disks in parallel,
● high reliability by storing data redundantly, so that data can be recovered
even if a disk fails
● The chance that some disk out of a set of N disks will fail is much higher than the
chance that a specific single disk will fail.
− E.g., a system with 100 disks, each with MTTF of 100,000 hours (approx. 11
years), will have a system MTTF of 1000 hours (approx. 41 days); see the quick check after this list
− Techniques for using redundancy to avoid data loss are critical with large
numbers of disks
● Originally a cost-effective alternative to large, expensive disks
− I in RAID originally stood for “inexpensive”
− Today RAIDs are used for their higher reliability and bandwidth.
● The “I” is interpreted as independent
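A quick check of the 100-disk example above, using the rule of thumb that with independent failures the expected time until the first of N disks fails is roughly MTTF / N:

mttf_single_hours = 100_000   # ~11 years per disk
n_disks = 100
# With independent failures, time to the first failure among N disks
# has mean approximately MTTF / N
mttf_system_hours = mttf_single_hours / n_disks
print(mttf_system_hours, "hours ~", mttf_system_hours / 24, "days")
# -> 1000.0 hours ~ 41.7 days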
Improvement of Reliability via
Redundancy
● Redundancy – store extra information that can
be used to rebuild information lost in a disk
failure
● E.g., Mirroring (or shadowing)
− Duplicate every disk. Logical disk consists of two
physical disks.
− Every write is carried out on both disks
● Reads can take place from either disk
− If one disk in a pair fails, data still available in the
other
● Data loss would occur only if a disk fails, and its mirror
disk also fails before the system is repaired
− Probability of combined event is very small
● Except for dependent failure modes such as fire or building collapse
Improvement of Reliability via
Redundancy
● Mean time to data loss depends on mean
time to failure,
and mean time to repair
− E.g. MTTF of 100,000 hours, mean time to repair
of 10 hours gives mean time to data loss of
500 × 10^6 hours (or about 57,000 years) for a mirrored pair
of disks (ignoring dependent failure modes)
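The figure follows from the standard mirrored-pair estimate, mean time to data loss ≈ MTTF^2 / (2 × MTTR), assuming independent failures:

mttf = 100_000   # hours, per disk
mttr = 10        # hours to replace and re-mirror a failed disk
mttdl = mttf ** 2 / (2 * mttr)
print(mttdl, "hours ~", mttdl / (24 * 365), "years")
# -> 5e8 hours ~ about 57,000 years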
Improvement in Performance via
Parallelism
● Two main goals of parallelism in a disk
system:
1. Load balance multiple small accesses to
increase throughput
2. Parallelize large accesses to reduce response
time.
● Improve transfer rate by striping data
across multiple disks.
Improvement in Performance via
Parallelism
● Bit-level striping – split the bits of each
byte across multiple disks
− In an array of eight disks, write bit i of each
byte to disk i.
− Each access can read data at eight times the
rate of a single disk.
− But seek/access time worse than for a single
disk
● Bit level striping is not used much any more
Improvement in Performance via
Parallelism
● Block-level striping – with n disks, block i
of a file goes to disk (i mod n) + 1
− Requests for different blocks can run in
parallel if the blocks reside on different disks
− A request for a long sequence of blocks can
utilize all disks in parallel
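The placement rule is a one-liner (disk numbering 1-based, as in the slide):

def disk_for_block(i: int, n: int) -> int:
    # Block i of a file goes to disk (i mod n) + 1
    return (i % n) + 1

# With n = 4 disks, blocks 0..7 map to disks:
print([disk_for_block(i, 4) for i in range(8)])
# -> [1, 2, 3, 4, 1, 2, 3, 4]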
What are the two main goals of parallelism in a disk system?
What does mirroring give us?
RAID Levels
● Benefits of RAID 0 (striping):
○ Best read/write performance
○ No parity overhead
○ Drawback: not fault tolerant
● Benefits of RAID 1 (mirroring):
○ Simple
○ Fault tolerant: data survives a single disk failure
RAID Levels
● RAID Level 2: Memory-Style Error-Correcting-Codes (ECC)
with bit striping.
● RAID Level 3: Bit-Interleaved Parity
− a single parity bit is enough for error correction, not just
detection, since we know which disk has failed
● When writing data, corresponding parity bits must also
be computed and written to a parity bit disk
● To recover data in a damaged disk, compute XOR of
bits from other disks (including parity bit disk)
RAID Levels
● RAID Level 3 (Cont.)
− Faster data transfer than with a single disk,
but fewer I/Os per second since every disk
has to participate in every I/O.
− Subsumes Level 2 (provides all its benefits,
at lower cost).
RAID Levels
● RAID Level 4: Block-Interleaved Parity;
uses block-level striping, and keeps a
parity block on a separate disk for
corresponding blocks from N other disks.
− When writing data block, corresponding
block of parity bits must also be computed
and written to parity disk
− To find value of a damaged block, compute
XOR of bits from corresponding blocks
(including parity block) from other disks.
RAID Levels
● RAID Level 4 (Cont.)
− Provides higher I/O rates for independent block
reads than Level 3
● block read goes to a single disk, so blocks stored on
different disks can be read in parallel
− Provides higher transfer rates for reads of multiple
blocks than a no-striping organization
RAID Levels
● RAID Level 4 (Cont.)
− Before writing a block, parity data must be
computed
● Can be done by using old parity block, old value of
current block and new value of current block (2 block
reads + 2 block writes)
● Or by recomputing the parity value using the new values
of blocks corresponding to the parity block
− More efficient for writing large amounts of data sequentially
− Parity block becomes a bottleneck for independent
block writes since every block write also writes to
parity disk
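Both write strategies, and recovery itself, rest on parity being a bytewise XOR. A minimal Python sketch (a real controller does this in hardware over whole blocks; all blocks are assumed equal length):

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def new_parity(old_parity: bytes, old_block: bytes, new_block: bytes) -> bytes:
    # The 2-reads + 2-writes strategy:
    # parity' = parity XOR old_data XOR new_data
    return xor_blocks(xor_blocks(old_parity, old_block), new_block)

def recover(surviving_blocks: list[bytes]) -> bytes:
    # A lost block (data or parity) is the XOR of all the others
    out = surviving_blocks[0]
    for blk in surviving_blocks[1:]:
        out = xor_blocks(out, blk)
    return out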
RAID Levels
● RAID Level 5: Block-Interleaved
Distributed Parity; partitions data and parity
among all N + 1 disks, rather than storing
data in N disks and parity in 1 disk.
− E.g., with 5 disks, parity block for nth set of
blocks is stored on disk (n mod 5) + 1, with the
data blocks stored on the other 4 disks.
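Tabulating the rule for a few stripes makes the rotation visible (a sketch assuming 5 disks numbered 1 to 5, as in the slide's example):

n_disks = 5
for stripe in range(6):
    parity_disk = (stripe % n_disks) + 1
    data_disks = [d for d in range(1, n_disks + 1) if d != parity_disk]
    print(f"stripe {stripe}: parity on disk {parity_disk}, data on {data_disks}")
# Parity rotates over disks 1, 2, 3, 4, 5, 1, ...
# so no single disk becomes a write bottleneck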
RAID Levels
● Advantages of RAID Level 5:
○ Fast read transactions
○ Fault tolerant
RAID Levels
● RAID Level 5 (Cont.)
− Higher I/O rates than Level 4.
● Block writes occur in parallel if the blocks and their
parity blocks are on different disks.
− Subsumes Level 4: provides same benefits,
but avoids bottleneck of parity disk.
● RAID Level 6: P+Q Redundancy scheme;
similar to Level 5, but stores extra
redundant information to guard against
multiple disk failures.
− Better reliability than Level 5 at a higher cost;
not used as widely.
Choice of RAID Level
● Factors in choosing RAID level
− Monetary cost
− Performance: Number of I/O operations per
second, and bandwidth during normal
operation
− Performance during failure
− Performance during rebuild of failed disk
● Including time taken to rebuild failed disk
Choice of RAID Level
● RAID 0 is used only when data safety is not
important
− E.g. data can be recovered quickly from other
sources
● Levels 2 and 4 are never used since they are
subsumed by Levels 3 and 5
● Level 3 is not used anymore since
bit-striping forces single block reads to
access all disks, wasting disk arm
movement, which block striping (level 5)
avoids
● Level 6 is rarely used since levels 1 and 5
offer adequate safety for most applications
Choice of RAID Level
● Level 1 provides much better write
performance than level 5
− Level 5 requires at least 2 block reads and 2
block writes to write a single block, whereas
Level 1 only requires 2 block writes
− Level 1 preferred for high update environments
such as log disks
Choice of RAID Level
● Level 1 has a higher storage cost than level 5
− disk drive capacities increasing rapidly
(50%/year) whereas disk access times have
decreased much less (about 3× in 10 years)
− I/O requirements have increased greatly, e.g.
for Web servers
− When enough disks have been bought to
satisfy required rate of I/O, they often have
spare storage capacity
● so there is often no extra monetary cost for Level 1!
Choice of RAID Level
● Level 5 is preferred for applications with low
update rate,
and large amounts of data
● Level 1 is preferred for all other
applications
Hardware Issues
Lecture Outline
• Overview of Physical Storage Media
• Magnetic Disk and Flash Storage
• RAID
• File Organization
File Organization
Fixed-Length Records
● Simple approach:
− Store record i starting from byte n ∗ (i – 1), where
n is the size of each record.
− Record access is simple but records may cross
blocks
● Modification: do not allow records to cross block
boundaries
● Deletion of record i: three alternatives:
− move records i + 1, …, n to i, …, n − 1
− move record n to i
− do not move records, but link all free records on a free list
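A small sketch of the byte-offset arithmetic and the move-the-last-record alternative (1-based record numbers as in the slide; n is the fixed record size):

def record_offset(i: int, n: int) -> int:
    # Record i starts at byte n * (i - 1)
    return n * (i - 1)

def delete_by_moving_last(buf: bytearray, i: int, n: int, count: int) -> int:
    # Overwrite record i with the last record, shrinking the file by one record
    src, dst = record_offset(count, n), record_offset(i, n)
    buf[dst:dst + n] = buf[src:src + n]
    del buf[src:src + n]
    return count - 1   # new record count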
Deleting record 3 and compacting
Deleting record 3 and moving last
record
Free Lists
● Store the address of the first deleted record
in the file header.
● Use this first record to store the address of
the second deleted record, and so on
Free Lists
● Can think of these stored addresses as
pointers since they “point” to the location of a
record.
● More space efficient representation: reuse
space for normal attributes of free records to
store pointers. (No pointers stored in in-use records.)
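A sketch of the free-list idea over an in-memory array of record slots (the sentinel value and header field here are assumptions for illustration; on disk the pointers would be byte addresses stored inside the freed records):

EMPTY = None   # sentinel meaning "end of free list"

class FixedLengthFile:
    def __init__(self, capacity: int):
        self.slots = [EMPTY] * capacity
        self.free_head = 0                 # the file-header pointer
        for i in range(capacity - 1):      # initially chain all slots together
            self.slots[i] = i + 1

    def insert(self, record):
        i = self.free_head
        if i is EMPTY:
            raise MemoryError("file full")
        self.free_head = self.slots[i]     # pop the first free slot
        self.slots[i] = record
        return i

    def delete(self, i: int):
        self.slots[i] = self.free_head     # reuse record space as a pointer
        self.free_head = i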
Variable-Length Records
● Variable-length records arise in database systems in several
ways:
− Storage of multiple record types in a file.
− Record types that allow variable lengths for one or more
fields such as strings (varchar)
− Record types that allow repeating fields (used in some
older data models).
● Attributes are stored in order
● Variable length attributes represented by fixed size (offset,
length), with actual data stored after all fixed length attributes
● Null values represented by null-value bitmap
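A sketch of the (offset, length) layout using Python's struct module; the particular field order and sizes are simplifying assumptions, not the on-disk format of any specific DBMS:

import struct

HEADER = "<BIdHHHH"   # null bitmap, id, salary, (offset, length) x 2 varchars

def pack_record(id_: int, salary: float, name: str, dept: str) -> bytes:
    name_b, dept_b = name.encode(), dept.encode()
    base = struct.calcsize(HEADER)     # variable-length data starts here
    header = struct.pack(HEADER, 0, id_, salary,
                         base, len(name_b),
                         base + len(name_b), len(dept_b))
    return header + name_b + dept_b

def unpack_record(buf: bytes):
    bitmap, id_, salary, o1, l1, o2, l2 = struct.unpack_from(HEADER, buf)
    return id_, salary, buf[o1:o1 + l1].decode(), buf[o2:o2 + l2].decode()

Note how the fixed-length header can be parsed without knowing the string lengths in advance; the (offset, length) pairs then locate the variable-length data stored after all fixed-length attributes.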
Variable-Length Records: Slotted
Page Structure
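In the slotted-page layout pictured on this slide, a page header records the number of entries and the end of free space; an array of slots at the front of the page holds the (offset, length) of each record, and the records themselves grow from the back of the page. A minimal in-memory sketch (free-space checks omitted):

class SlottedPage:
    def __init__(self, size: int = 4096):
        self.data = bytearray(size)
        self.slots: list[tuple[int, int]] = []  # (offset, length) per record
        self.free_end = size                    # records grow from the back

    def insert(self, record: bytes) -> int:
        self.free_end -= len(record)            # claim space at the end
        self.data[self.free_end:self.free_end + len(record)] = record
        self.slots.append((self.free_end, len(record)))
        return len(self.slots) - 1              # slot number identifies record

    def get(self, slot: int) -> bytes:
        off, length = self.slots[slot]
        return bytes(self.data[off:off + length])

Because external references use the slot number rather than a byte offset, records can be moved within the page (e.g., when free space is compacted) without invalidating pointers to them.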
Organization of Records in Files
● Hashing – a hash function computed on
some attribute of each record; the result
specifies in which block of the file the
record should be placed (see the sketch after this list)
● Records of each relation may be stored in
a separate file. In a multitable clustering
file organization records of several
different relations can be stored in the
same file
− Motivation: store related records on the same
block to minimize I/O
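For the hashing organization, the placement rule can be as simple as the following sketch (real systems use carefully chosen hash functions; MD5 here is just a convenient deterministic stand-in):

import hashlib

def block_for_record(search_key: str, num_blocks: int) -> int:
    # Hash the chosen attribute; the result picks the block within the file
    digest = hashlib.md5(search_key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_blocks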
Sequential File Organization
● Suitable for applications that require
sequential processing of the entire
file
● The records in the file are ordered by
a search-key
Sequential File Organization
● Deletion – use pointer chains
● Insertion – locate the position where the record is to be inserted
− if there is free space, insert there
− if no free space, insert the record in an overflow block
− in either case, the pointer chain must be updated (see the sketch below)
● Need to reorganize the file from time to time
to restore sequential order
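A sketch of ordered insertion with a pointer chain, using an in-memory linked list as a stand-in for the on-disk record pointers (on disk, splicing in the new node corresponds to the "no free space, use an overflow block" case):

class Node:
    def __init__(self, key, nxt=None):
        self.key, self.next = key, nxt

def insert_sorted(head: Node, key) -> Node:
    # Walk the pointer chain to the insertion point and splice in the record
    node = Node(key)
    if head is None or key < head.key:
        node.next = head
        return node
    cur = head
    while cur.next is not None and cur.next.key < key:
        cur = cur.next
    node.next, cur.next = cur.next, node
    return head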
Multitable Clustering File
Organization
● Store several relations in one file using a multitable clustering file
organization
− Figure: the department relation, the instructor relation, and a
multitable clustering of department and instructor
Multitable Clustering File
Organization
● good for queries involving department ⋈
instructor, and for queries involving one
single department and its instructors
● bad for queries involving only department
● results in variable size records
● Can add pointer chains to link records of a
particular relation
Data Dictionary Storage
The data dictionary (also called system catalog) stores metadata;
that is, data about data, such as information about relations (names,
attribute names and types, integrity constraints), user and accounting
information, statistical data, physical file organization information,
and information about indices.
Relational Representation of
System Metadata
● Relational
representation on
disk
● Specialized data
structures
designed for
efficient access, in
memory
Storage Access
● A database file is partitioned into
fixed-length storage units called blocks.
Blocks are units of both storage allocation
and data transfer.
● Database system seeks to minimize the
number of block transfers between the disk
and memory. We can reduce the number of
disk accesses by keeping as many blocks
as possible in main memory.
● Buffer – portion of main memory available
to store copies of disk blocks.
● Buffer manager – subsystem responsible
for allocating buffer space in main memory.
Buffer Manager
● Programs call on the buffer manager
when they need a block from disk.
− If the block is already in the buffer, buffer
manager returns the address of the block in
main memory
− If the block is not in the buffer, the buffer
manager
● Allocates space in the buffer for the block
− Replacing (throwing out) some other block, if required,
to make space for the new block.
− Replaced block written back to disk only if it was
modified since the most recent time that it was written
to/fetched from the disk.
● Reads the block from the disk to the buffer, and
returns the address of the block in main memory
to requester.
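A compact sketch of this logic with LRU replacement (pin counts and concurrency are omitted; read_block/write_block stand for assumed disk-level primitives):

from collections import OrderedDict

class BufferManager:
    def __init__(self, capacity: int, disk):
        self.capacity = capacity
        self.disk = disk              # object with read_block/write_block
        self.pool = OrderedDict()     # block_id -> (data, dirty_flag)

    def get_block(self, block_id):
        if block_id in self.pool:               # hit: return buffered copy
            self.pool.move_to_end(block_id)     # mark as most recently used
            return self.pool[block_id][0]
        if len(self.pool) >= self.capacity:     # miss with a full buffer:
            victim, (data, dirty) = self.pool.popitem(last=False)  # evict LRU
            if dirty:                           # write back only if modified
                self.disk.write_block(victim, data)
        data = self.disk.read_block(block_id)
        self.pool[block_id] = (data, False)
        return data

    def mark_dirty(self, block_id):
        data, _ = self.pool[block_id]
        self.pool[block_id] = (data, True)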
Buffer-Replacement Policies
● Most operating systems replace the block
least recently used (LRU strategy)
● Idea behind LRU – use past pattern of block
references as a predictor of future
references
Buffer-Replacement Policies
● Queries have well-defined access patterns
(such as sequential scans), and a database
system can use the information in a user’s
query to predict future references
− LRU can be a bad strategy for certain access
patterns involving repeated scans of data
● For example: when computing the join of 2 relations
r and s by a nested-loop join
for each tuple tr of r do
for each tuple ts of s do
if the tuples tr and ts match …
− Mixed strategy with hints on replacement
strategy provided
by the query optimizer is preferable
Buffer-Replacement Policies
● Pinned block – memory block that is
not allowed to be written back to disk.
● Toss-immediate strategy – frees the
space occupied by a block as soon as
the final tuple of that block has been
processed
Buffer-Replacement Policies
● Most recently used (MRU) strategy –
system must pin the block currently
being processed. After the final tuple of
that block has been processed, the
block is unpinned, and it becomes the
most recently used block.
Buffer-Replacement Policies
● Buffer manager can use statistical
information regarding the probability that
a request will reference a particular
relation
− E.g., the data dictionary is frequently
accessed. Heuristic: keep data-dictionary
blocks in main memory buffer
● Buffer managers also support forced
output of blocks for the purpose of
recovery (more in Chapter 16)