
Solaris Volume Manager


Solaris Volume Manager

Finnbarr P. Murphy SCSA RHCSA RHCE (fpm@fpmurphy.com)

Logical Volume Manager


Why consider Logical Volume Manager?
Decrease the number of file systems
Avoid data loss due to disk failure
Balance I/O across disks for performance
Avoid file system checks at boot
Grow file systems online

Solaris Volume Manager


Solaris Volume Manager (SVM) is a disk and storage management solution suitable for enterprise-class deployment. It can be used to:
Pool storage elements into volumes and allocate them to applications.
Provide redundancy and failover capabilities which can help provide continuous data access in the event of a device failure.

Solaris Volume Manager


SVM is a free component of Solaris 9 and Solaris 10
It was previously known as Solstice DiskSuite
Provides mechanisms to configure physical slices of hard drives into logical volumes
Logical volumes can then be configured to provide mirroring and RAID5
Similar to Veritas Volume Manager
Not used with ZFS!

Why SVM?
Without SVM
Each disk slice has its own physical and logical device
A file system cannot span more than one disk slice
The maximum size of a file system is limited to the size of the disk, even with large file support

With SVM
Disk slices can be grouped across several disks to appear as one or more volumes (metadevices) to the operating system
/dev/md/dsk/d0 (block device) and /dev/md/rdsk/d0 (raw device)

RAID
Acronym for
Redundant Array of Independent Disks
formerly Redundant Array of Inexpensive Disks

Combine multiple disk drive components into a logical unit, where data is distributed across the drives in one of several ways called RAID levels
Concept introduced at the University of California at Berkeley in 1987 by David Patterson, Garth Gibson and Randy Katz

RAID
RAID 0 - Striping or Concatenation
RAID 1 - Mirroring
RAID 0+1 - Striping with Mirroring
RAID 1+0 - Mirroring with Striping
RAID 2 - Hamming code correction
RAID 3 - Striping with dedicated parity disk
RAID 4 - Independent reads and writes
RAID 5 - Striping with distributed parity
RAID 6 - RAID 5 with second parity calculation

RAID
RAID levels 2, 3, 4 and 6 are not available in SVM
These RAID levels are not commonly implemented in commercial applications.

RAID 0+1 and RAID 1+0 are not true RAID levels

They are abstractions composed of more than one RAID level.

Advantages of RAID
The foremost advantage of using RAID is that it increases the performance and/or reliability of a system.
Two main types of RAID:
Hardware RAID
Software RAID
SVM is software RAID

Concatenation (RAID 0)

[Diagram: separate file systems FS 1, FS 2 and FS 3 presented by the RAID management software as a single virtual file system]

Concatenation (RAID 0)

[Diagram: two 10 MB partitions concatenated into one ~20 MB SVM volume]

Concatenation (RAID 0)
Combines multiple stripes to create a large volume
No redundancy
Can contain slices of different sizes because they are merely joined together.
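As a quick illustration (the volume name d10 and the slice names are placeholder values, not taken from these slides), a two-slice concatenation can be built with metainit, where the leading 2 is the number of components and each 1 marks a single-slice component:
# metainit d10 2 1 c0t0d0s4 1 c0t1d0s4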

Striping (RAID 0)
[Diagram: file systems FS 1, FS 2 and FS 3 spread by the RAID management software across multiple disks in a stripe]

Striping (RAID 0)

[Diagram: interlace blocks alternated across two 10 MB slices to form one ~20 MB striped SVM volume]

Striping (RAID 0)
Used to increase read and write performance
by spreading data requests over multiple disks and controllers

Component slices of a stripe must all be the same size.
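A minimal sketch of creating a stripe (the volume name, slice names and the 64k interlace are illustrative values); 1 3 means one stripe built from three slices, and -i sets the interlace size used to spread data across them:
# metainit d20 1 3 c1t0d0s6 c2t0d0s6 c3t0d0s6 -i 64k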

Mirror (RAID 1)
[Diagram: file system blocks FS 1-FS 4 written by the RAID management software to two identical copies (a mirror)]

Mirror (RAID 1)

[Diagram: two 10 MB slices mirrored into one ~10 MB SVM volume]

Mirror (RAID 1)
Used to guard against disk failure (redundancy)

Any file system can be mirrored

Including root, swap and usr

Requires twice the disks for the same capacity

Most expensive option
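For the root file system in particular, the usual outline adds the metaroot command, which updates /etc/vfstab and /etc/system so the system can boot from the metadevice. A rough sketch with placeholder volume and slice names:
# metainit -f d11 1 1 c0t0d0s0
# metainit d12 1 1 c1t0d0s0
# metainit d10 -m d11
# metaroot d10
Then reboot and attach the second submirror:
# metattach d10 d12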

Striping with Distributed Parity (RAID 5)


[Diagram: data blocks FS 1-FS 12 and parity blocks P(1-3), P(4-6), P(7-9) and P(10-12) distributed by the RAID management software across four disks, with the parity rotated from disk to disk]

Striping with Distributed Parity (RAID 5)


[Diagram: three 10 MB slices; data interlaces and parity blocks spread across all three slices, giving an ~20 MB RAID 5 SVM volume]

Striping with Distributed Parity (RAID 5)


Minimum of three slices
Slices must be the same size

The pattern of writing data and parity results in both data and parity being spread across all the disks in the volume
Parity protects against a single disk failure.
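A minimal sketch of creating a RAID 5 volume (the volume and slice names are placeholders); the -r option tells metainit to build a RAID 5 metadevice from the listed slices:
# metainit d45 -r c1t0d0s2 c2t0d0s2 c3t0d0s2
# metastat d45     (watch the initialization progress)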

RAID 0+1

RAID 0+1
[Diagram: RAID 0+1 - file system blocks striped across a set of disks by the RAID management software, and the entire stripe mirrored to a second striped set]

RAID 1+0

RAID Comparison

FEATURE                     RAID 0 (CONCATENATION)   RAID 0 (STRIPE)   RAID 1     RAID 5
Redundant data              No                       No                Yes        Yes
Improve read performance    No                       Yes               Depends*   Yes
Improve write performance   No                       Yes               No         No

* Depends on the underlying device

RAID Comparison
FEATURE            RAID 1    RAID 5    NON-REDUNDANT
Write operations   Faster    Slower    Neutral
Random read        Slower    Faster    Neutral
Hardware cost      Highest   Higher    Lowest
Redundancy         Best      OK        Data loss

State Replicas
Are repositories of information on the state and configuration of each metadevice
Also known as Database State Replicas or State Databases
Store disk state, configuration, ownership and other information in special areas of a disk (slices/partitions)
Minimum of 3 required
One is designated as the master

System will not boot into multiuser unless a majority (half + 1, 51%) of the state replicas are available
System will panic if more than half of the state replicas are corrupt

State Replicas
Replicas Store:
Disk configuration information
State information

Planning for Replicas:

One disk: 3 replicas on one slice
Two to four disks: 2 replicas on each disk
Five or more disks: 1 replica on each disk
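For example, on a hypothetical two-disk system the guideline above can be satisfied with a single command (slice names are placeholders); -c 2 places two replicas on each listed slice:
# metadb -a -f -c 2 c0t0d0s7 c0t1d0s7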

Create State Replicas


Create 3 state replicas using the metadb command:
# metadb -a -f c0t0d0s3
# metadb -a c0t0d0s5
# metadb -a c0t0d0s6

* The -a and -f options are used together to create the initial state replica
* The -a option attaches a new database device and automatically edits the appropriate files
* You can create more than one state replica in a slice!

Create State Replicas


Create 2 state replicas on each of two slices using the metadb command:
# metadb -a -f -c 2 c0t0d0s4 c0t0d1s4

* The -a and -f options are used together to create the initial replicas
* The -a option attaches a new database device and automatically edits the appropriate files
* The -c 2 option specifies that 2 replicas are to be created on each specified slice

Delete State Replicas


Delete replica(s):
# metadb -d c0t0d1s4
* The -d option is used to delete all replicas in the specified slice
* Deleting the last remaining replicas also requires the -f option:
# metadb -d -f c0t0d0s3

State Replica Status


Display replica status:
# metadb
# metadb -i

* The -i option displays a description of the flags
* The last field of each replica listing is the path to the location of the replica
* An m flag indicates the master replica
* A u flag indicates the replica is up-to-date and active

State Replica Problems


SVM does not detect problems with replicas until the existing configuration changes and an update to the replicas is required
If insufficient replicas are available, boot to single user and:
Delete any corrupted replicas
Create sufficient new replicas to achieve quorum

Check replica status frequently!
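A rough recovery sketch, assuming a corrupt replica on c0t0d0s7 and free space on c0t1d0s7 (both slice names are placeholders):
# metadb -i                   (identify replicas flagged with errors)
# metadb -d -f c0t0d0s7       (delete the corrupted replicas)
# metadb -a -c 2 c0t1d0s7     (add new replicas to restore a majority)
# reboot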

SVM Volumes
A (logical) volume (metadevice) is a group of physical slices that appears to the operating system as a single device
A volume is used to increase storage capacity and increase data availability
SVM can support up to 8192 volumes
Volume names start with d followed by a number
The default configuration is 128 volumes (d0 to d127)
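Raising the default limit is a configuration change; as a rough sketch (details recalled from the SVM documentation rather than taken from these slides), the nmd value in /kernel/drv/md.conf is increased and a reconfiguration reboot performed:
# vi /kernel/drv/md.conf     (raise nmd, e.g. nmd=256)
# reboot -- -r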

SVM Volumes
You can create the following types of volumes
Concatenations
Stripes
Concatenated stripes
Mirrors
RAID5

Transactional volumes are no longer supported as of Solaris 10


Use UFS logging to achieve the same functionality

Creating a volume does not create a filesystem!


Unlike ZFS, you need to manually create the file system with newfs
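For example, a minimal sketch (the volume name d10 and mount point /export/data are placeholders, not from the slides):
# newfs /dev/md/rdsk/d10
# mount /dev/md/dsk/d10 /export/data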

RAID 0 Volumes
Create a concatenated volume:
# metainit -f d10 1 1 c0t0d0s3

Create a striped volume:
# metainit -f d10 1 3 c2t1d0s6 c2t2d0s6 c2t3d0s6

Monitor the state of a volume:
# metastat d10
# metastat -c d10

* The -c option means output in concise format

Soft Partitions
A soft partition is a means of dividing a disk or volume into as many partitions (extents) as needed
Overcomes the eight-slice limit
Can be noncontiguous (hard partitions must be contiguous)
This can cause I/O performance degradation

# metainit d10 -p c2t1d0s6 1g

* The -p option specifies that the metadevice created will be a soft partition
* A size for the soft partition (here 1g) must also be supplied
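A soft partition can later be grown by attaching additional space and then growing the UFS file system on it; a rough sketch (the extra 1g, the mount point /data and the volume d10 are placeholder values):
# metattach d10 1g
# growfs -M /data /dev/md/rdsk/d10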

Mirrors
Mirror:
Submirrors should be on different disks. Why?
Slices should be the same size. Why?

Types of Mirrors:
One-way
Two-way
Three-way

Mirrors
A mirror is a volume that consists of 2 or more submirrors
First create the submirrors:
# metainit d11 1 1 c0t0d0s6
# metainit d12 1 1 c0t0d1s6

Then create the mirror volume using one of the submirrors:
# metainit d1 -m d11

Then attach the second submirror:
# metattach d1 d12
* The -m option specifies that the volume created will be a mirror

Mirrors
To offline the d11 submirror of the d1 mirrored volume:
# metaoffline d1 d11

To online the d11 submirror of the d1 mirrored volume:
# metaonline d1 d11

To attach another submirror (d13) to the d1 mirrored volume:
# metattach d1 d13

To detach submirror d13 from the d1 mirrored volume:
# metadetach d1 d13

Create 2-Way Mirror


Create a two-way mirror for /home:
# metainit -f d51 1 1 c0t0d0s2    (/home)
# metainit d52 1 1 c1t0d0s2       (unmounted)
# metainit d50 -m d51
# umount /home                    (what if you can't umount?)
# vi /etc/vfstab                  (add /dev/md/dsk/d50)
# newfs /dev/md/dsk/d50
# mount /home                     (on d50)
# metattach d50 d52

Delete Mirror
Detach a submirror from the mirror:
# metadetach d50 d52

Delete the metadevices:
# metaclear -a


Unmirror
To unmirror a non-critical file system (/test) which is based on mirror d1 (submirrors d11 and d12):
# umount /test
# metadetach d1 d12
# metaclear -r d1
# metaclear d12

Then edit /etc/vfstab to replace the /test entry with the regular device instead of the metadevice
# mount /test
