
SLES 11 Storage Administration Guide


www.novell.com/documentation

Storage Administration Guide


SUSE Linux Enterprise Server 11 SP1

December 15, 2011

Legal Notices
Novell, Inc., makes no representations or warranties with respect to the contents or use of this documentation, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, Novell, Inc., reserves the right to revise this publication and to make changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes.

Further, Novell, Inc., makes no representations or warranties with respect to any software, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, Novell, Inc., reserves the right to make changes to any and all parts of Novell software, at any time, without any obligation to notify any person or entity of such changes.

Any products or technical information provided under this Agreement may be subject to U.S. export controls and the trade laws of other countries. You agree to comply with all export control regulations and to obtain any required licenses or classification to export, re-export, or import deliverables. You agree not to export or re-export to entities on the current U.S. export exclusion lists or to any embargoed or terrorist countries as specified in the U.S. export laws. You agree to not use deliverables for prohibited nuclear, missile, or chemical biological weaponry end uses. See the Novell International Trade Services Web page (http://www.novell.com/info/exports/) for more information on exporting Novell software. Novell assumes no responsibility for your failure to obtain any necessary export approvals.

Copyright 2009-2011 Novell, Inc. All rights reserved. No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without the express written consent of the publisher.

Novell, Inc.
1800 South Novell Place
Provo, UT 84606
U.S.A.
www.novell.com

Online Documentation: To access the latest online documentation for this and other Novell products, see the Novell Documentation Web page (http://www.novell.com/documentation).

Novell Trademarks
For Novell trademarks, see the Novell Trademark and Service Mark list (http://www.novell.com/company/legal/trademarks/tmlist.html).

Third-Party Materials
All third-party trademarks and copyrights are the property of their respective owners.

Contents

About This Guide

1 Overview of File Systems in Linux
  1.1 Terminology
  1.2 Major File Systems in Linux
    1.2.1 Ext2
    1.2.2 Ext3
    1.2.3 ReiserFS
    1.2.4 XFS
  1.3 Other Supported File Systems
  1.4 Large File Support in Linux
  1.5 Additional Information

2 What's New for Storage in SLES 11
  2.1 What's New in SLES 11 SP1
    2.1.1 Saving iSCSI Target Information
    2.1.2 Modifying Authentication Parameters in the iSCSI Initiator
    2.1.3 Allowing Persistent Reservations for MPIO Devices
    2.1.4 MDADM 3.0.2
    2.1.5 Boot Loader Support for MDRAID External Metadata
    2.1.6 YaST Install and Boot Support for MDRAID External Metadata
    2.1.7 Improved Shutdown for MDRAID Arrays that Contain the Root File System
    2.1.8 MD over iSCSI Devices
    2.1.9 MD-SGPIO
    2.1.10 Resizing LVM 2 Mirrors
    2.1.11 Updating Storage Drivers for Adapters on IBM Servers
  2.2 What's New in SLES 11
    2.2.1 EVMS2 Is Deprecated
    2.2.2 Ext3 as the Default File System
    2.2.3 JFS File System Is Deprecated
    2.2.4 OCFS2 File System Is in the High Availability Release
    2.2.5 /dev/disk/by-name Is Deprecated
    2.2.6 Device Name Persistence in the /dev/disk/by-id Directory
    2.2.7 Filters for Multipathed Devices
    2.2.8 User-Friendly Names for Multipathed Devices
    2.2.9 Advanced I/O Load-Balancing Options for Multipath
    2.2.10 Location Change for Multipath Tool Callouts
    2.2.11 Change from mpath to multipath for the mkinitrd -f Option
    2.2.12 Change from Multibus to Failover as the Default Setting for the MPIO Path Grouping Policy

3 Planning a Storage Solution
  3.1 Partitioning Devices
  3.2 Multipath Support
  3.3 Software RAID Support
  3.4 File System Snapshots
  3.5 Backup and Antivirus Support
    3.5.1 Open Source Backup
    3.5.2 Commercial Backup and Antivirus Support

4 LVM Configuration
  4.1 Understanding the Logical Volume Manager
  4.2 Creating LVM Partitions
  4.3 Creating Volume Groups
  4.4 Configuring Physical Volumes
  4.5 Configuring Logical Volumes
  4.6 Resizing a Volume Group
  4.7 Resizing a Logical Volume with YaST
  4.8 Resizing a Logical Volume with Commands
  4.9 Deleting a Volume Group
  4.10 Deleting an LVM Partition (Physical Volume)

5 Resizing File Systems
  5.1 Guidelines for Resizing
    5.1.1 File Systems that Support Resizing
    5.1.2 Increasing the Size of a File System
    5.1.3 Decreasing the Size of a File System
  5.2 Increasing the Size of an Ext2, Ext3, or Ext4 File System
  5.3 Increasing the Size of a Reiser File System
  5.4 Decreasing the Size of an Ext2, Ext3, or Ext4 File System
  5.5 Decreasing the Size of a Reiser File System

6 Using UUIDs to Mount Devices
  6.1 Naming Devices with udev
  6.2 Understanding UUIDs
    6.2.1 Using UUIDs to Assemble or Activate File System Devices
    6.2.2 Finding the UUID for a File System Device
  6.3 Using UUIDs in the Boot Loader and /etc/fstab File (x86)
  6.4 Using UUIDs in the Boot Loader and /etc/fstab File (IA64)
  6.5 Additional Information

7 Managing Multipath I/O for Devices
  7.1 Understanding Multipathing
    7.1.1 What Is Multipathing?
    7.1.2 Benefits of Multipathing
  7.2 Planning for Multipathing
    7.2.1 Guidelines for Multipathing
    7.2.2 Using By-ID Names for Multipathed Devices
    7.2.3 Using LVM2 on Multipath Devices
    7.2.4 Using mdadm with Multipath Devices
    7.2.5 Using --noflush with Multipath Devices
    7.2.6 SAN Timeout Settings When the Root Device Is Multipathed
    7.2.7 Partitioning Multipath Devices
    7.2.8 Supported Architectures for Multipath I/O
    7.2.9 Supported Storage Arrays for Multipathing
  7.3 Multipath Management Tools
    7.3.1 Device Mapper Multipath Module
    7.3.2 Multipath I/O Management Tools
    7.3.3 Using MDADM for Multipathed Devices
    7.3.4 The Linux multipath(8) Command
  7.4 Configuring the System for Multipathing
    7.4.1 Preparing SAN Devices for Multipathing
    7.4.2 Partitioning Multipathed Devices
    7.4.3 Configuring the Server for Multipathing
    7.4.4 Adding multipathd to the Boot Sequence
    7.4.5 Creating and Configuring the /etc/multipath.conf File
  7.5 Enabling and Starting Multipath I/O Services
  7.6 Configuring Path Failover Policies and Priorities
    7.6.1 Configuring the Path Failover Policies
    7.6.2 Configuring Failover Priorities
    7.6.3 Using a Script to Set Path Priorities
    7.6.4 Configuring ALUA (mpath_prio_alua)
    7.6.5 Reporting Target Path Groups
  7.7 Configuring Multipath I/O for the Root Device
    7.7.1 Enabling Multipath I/O at Install Time
    7.7.2 Enabling Multipath I/O for an Existing Root Device
    7.7.3 Disabling Multipath I/O on the Root Device
  7.8 Configuring Multipath I/O for an Existing Software RAID
  7.9 Scanning for New Devices without Rebooting
  7.10 Scanning for New Partitioned Devices without Rebooting
  7.11 Viewing Multipath I/O Status
  7.12 Managing I/O in Error Situations
  7.13 Resolving Stalled I/O
  7.14 Troubleshooting MPIO
  7.15 What's Next

8 Software RAID Configuration
  8.1 Understanding RAID Levels
    8.1.1 RAID 0
    8.1.2 RAID 1
    8.1.3 RAID 2 and RAID 3
    8.1.4 RAID 4
    8.1.5 RAID 5
    8.1.6 Nested RAID Levels
  8.2 Soft RAID Configuration with YaST
  8.3 Troubleshooting Software RAIDs
  8.4 For More Information

9 Configuring Software RAID for the Root Partition
  9.1 Prerequisites for the Software RAID
  9.2 Enabling iSCSI Initiator Support at Install Time
  9.3 Enabling Multipath I/O Support at Install Time
  9.4 Creating a Software RAID Device for the Root (/) Partition

10 Managing Software RAIDs 6 and 10 with mdadm
  10.1 Creating a RAID 6
    10.1.1 Understanding RAID 6
    10.1.2 Creating a RAID 6
  10.2 Creating Nested RAID 10 Devices with mdadm
    10.2.1 Understanding Nested RAID Devices
    10.2.2 Creating Nested RAID 10 (1+0) with mdadm
    10.2.3 Creating Nested RAID 10 (0+1) with mdadm
  10.3 Creating a Complex RAID 10 with mdadm
    10.3.1 Understanding the mdadm RAID10
    10.3.2 Creating a RAID 10 with mdadm
  10.4 Creating a Degraded RAID Array

11 Resizing Software RAID Arrays with mdadm
  11.1 Understanding the Resizing Process
    11.1.1 Guidelines for Resizing a Software RAID
    11.1.2 Overview of Tasks
  11.2 Increasing the Size of a Software RAID
    11.2.1 Increasing the Size of Component Partitions
    11.2.2 Increasing the Size of the RAID Array
    11.2.3 Increasing the Size of the File System
  11.3 Decreasing the Size of a Software RAID
    11.3.1 Decreasing the Size of the File System
    11.3.2 Decreasing the Size of Component Partitions
    11.3.3 Decreasing the Size of the RAID Array

12 iSNS for Linux
  12.1 How iSNS Works
  12.2 Installing iSNS Server for Linux
  12.3 Configuring iSNS Discovery Domains
    12.3.1 Creating iSNS Discovery Domains
    12.3.2 Creating iSNS Discovery Domain Sets
    12.3.3 Adding iSCSI Nodes to a Discovery Domain
    12.3.4 Adding Discovery Domains to a Discovery Domain Set
  12.4 Starting iSNS
  12.5 Stopping iSNS
  12.6 For More Information

13 Mass Storage over IP Networks: iSCSI
  13.1 Installing iSCSI
    13.1.1 Installing iSCSI Target Software
    13.1.2 Installing the iSCSI Initiator Software
  13.2 Setting Up an iSCSI Target
    13.2.1 Preparing the Storage Space
    13.2.2 Creating iSCSI Targets with YaST
    13.2.3 Configuring an iSCSI Target Manually
    13.2.4 Configuring Online Targets with ietadm
  13.3 Configuring iSCSI Initiator
    13.3.1 Using YaST for the iSCSI Initiator Configuration
    13.3.2 Setting Up the iSCSI Initiator Manually
    13.3.3 The iSCSI Client Databases
  13.4 Troubleshooting iSCSI
    13.4.1 Hotplug Doesn't Work for Mounting iSCSI Targets
    13.4.2 Data Packets Dropped for iSCSI Traffic
    13.4.3 Using iSCSI Volumes with LVM
  13.5 Additional Information

14 Volume Snapshots
  14.1 Understanding Volume Snapshots
  14.2 Creating Linux Snapshots with LVM
  14.3 Monitoring a Snapshot
  14.4 Deleting Linux Snapshots

15 Troubleshooting Storage Issues
  15.1 Is DM-MPIO Available for the Boot Partition?
  15.2 Issues for iSCSI
  15.3 Issues for Multipath I/O
  15.4 Issues for Software RAIDs

A Documentation Updates
  A.1 December 15, 2011
    A.1.1 Managing Multipath I/O for Devices
    A.1.2 Resizing File Systems
  A.2 September 8, 2011
    A.2.1 Managing Multipath I/O for Devices
  A.3 July 12, 2011
    A.3.1 Managing Multipath I/O for Devices
  A.4 June 14, 2011
    A.4.1 Managing Multipath I/O for Devices
    A.4.2 What's New for Storage in SLES 11
  A.5 May 5, 2011
  A.6 January 2011
    A.6.1 LVM Configuration
    A.6.2 Managing Multipath I/O for Devices
    A.6.3 Resizing File Systems
  A.7 September 16, 2010
    A.7.1 LVM Configuration
  A.8 June 21, 2010
    A.8.1 LVM Configuration
    A.8.2 Managing Multipath I/O
    A.8.3 Managing Software RAIDs 6 and 10 with mdadm
    A.8.4 Mass Storage over IP Networks: iSCSI
  A.9 May 2010 (SLES 11 SP1)
    A.9.1 Managing Multipath I/O for Devices
    A.9.2 Mass Storage over IP Networks: iSCSI
    A.9.3 Software RAID Configuration
    A.9.4 What's New
  A.10 February 23, 2010
    A.10.1 Configuring Software RAID for the Root Partition
    A.10.2 Managing Multipath I/O
  A.11 December 1, 2009
    A.11.1 Managing Multipath I/O for Devices
    A.11.2 Resizing File Systems
    A.11.3 What's New
  A.12 October 20, 2009
    A.12.1 LVM Configuration
    A.12.2 Managing Multipath I/O for Devices
    A.12.3 What's New
  A.13 August 3, 2009
    A.13.1 Managing Multipath I/O
  A.14 June 22, 2009
    A.14.1 Managing Multipath I/O
    A.14.2 Managing Software RAIDs 6 and 10 with mdadm
    A.14.3 Mass Storage over IP Networks: iSCSI
  A.15 May 21, 2009
    A.15.1 Managing Multipath I/O

About This Guide


This guide provides information about how to manage storage devices on a SUSE Linux Enterprise Server 11 Support Pack 1 (SP1) server.

Audience
This guide is intended for system administrators.

Feedback
We want to hear your comments and suggestions about this manual and the other documentation included with this product. Please use the User Comments feature at the bottom of each page of the online documentation, or go to www.novell.com/documentation/feedback.html and enter your comments there.

Documentation Updates
For the most recent version of the SUSE Linux Enterprise Server 11 SP1 Storage Administration Guide, visit the Novell Documentation Web site for SUSE Linux Enterprise Server 11 SP1 (http://www.novell.com/documentation/sles11).

Additional Documentation
For information about partitioning and managing devices, see Advanced Disk Setup (http://www.novell.com/documentation/sles11/book_sle_deployment/data/cha_advdisk.html) in the SUSE Linux Enterprise Server 11 SP1 Deployment Guide (http://www.novell.com/documentation/sles11/book_sle_deployment/data/pre_sle.html).


1 Overview of File Systems in Linux

SUSE Linux Enterprise Server ships with a number of different file systems from which to choose, including Ext3, Ext2, ReiserFS, and XFS. Each file system has its own advantages and disadvantages. Professional high-performance setups might require a highly available storage system. To meet the requirements of high-performance clustering scenarios, SUSE Linux Enterprise Server includes OCFS2 (Oracle Cluster File System 2) and the Distributed Replicated Block Device (DRBD) in the SLES High-Availability Storage Infrastructure (HASI) release. These advanced storage systems are not covered in this guide. For information, see the SUSE Linux Enterprise Server 11 SP1 High Availability Guide (http://www.novell.com/documentation/sle_ha/book_sleha/data/book_sleha.html).

  Section 1.1, "Terminology"
  Section 1.2, "Major File Systems in Linux"
  Section 1.3, "Other Supported File Systems"
  Section 1.4, "Large File Support in Linux"
  Section 1.5, "Additional Information"

1.1 Terminology

metadata
A data structure that is internal to the file system. It assures that all of the on-disk data is properly organized and accessible. Essentially, it is "data about the data." Almost every file system has its own structure of metadata, which is one reason that file systems show different performance characteristics. It is extremely important to maintain metadata intact, because otherwise all data on the file system could become inaccessible.

inode
A data structure on a file system that contains various information about a file, including size, number of links, pointers to the disk blocks where the file contents are actually stored, and date and time of creation, modification, and access.

journal
In the context of a file system, a journal is an on-disk structure containing a type of log in which the file system stores what it is about to change in the file system's metadata. Journaling greatly reduces the recovery time of a file system because it has no need for the lengthy search process that checks the entire file system at system startup. Instead, only the journal is replayed.


1.2 Major File Systems in Linux

SUSE Linux Enterprise Server offers a variety of file systems from which to choose. This section contains an overview of how these file systems work and which advantages they offer.

It is very important to remember that no file system best suits all kinds of applications. Each file system has its particular strengths and weaknesses, which must be taken into account. In addition, even the most sophisticated file system cannot replace a reasonable backup strategy.

The terms data integrity and data consistency, when used in this section, do not refer to the consistency of the user space data (the data your application writes to its files). Whether this data is consistent must be controlled by the application itself.

IMPORTANT: Unless stated otherwise in this section, all the steps required to set up or change partitions and file systems can be performed by using YaST.

  Section 1.2.1, "Ext2"
  Section 1.2.2, "Ext3"
  Section 1.2.3, "ReiserFS"
  Section 1.2.4, "XFS"

1.2.1 Ext2

The origins of Ext2 go back to the early days of Linux history. Its predecessor, the Extended File System, was implemented in April 1992 and integrated in Linux 0.96c. The Extended File System underwent a number of modifications and, as Ext2, became the most popular Linux file system for years. With the creation of journaling file systems and their short recovery times, Ext2 became less important.

A brief summary of Ext2's strengths might help understand why it was, and in some areas still is, the favorite Linux file system of many Linux users.

  "Solidity and Speed"
  "Easy Upgradability"

Solidity and Speed

Being quite an "old-timer," Ext2 underwent many improvements and was heavily tested. This might be the reason why people often refer to it as rock-solid. After a system outage when the file system could not be cleanly unmounted, e2fsck starts to analyze the file system data. Metadata is brought into a consistent state and pending files or data blocks are written to a designated directory (called lost+found). In contrast to journaling file systems, e2fsck analyzes the entire file system and not just the recently modified bits of metadata. This takes significantly longer than checking the log data of a journaling file system. Depending on file system size, this procedure can take half an hour or more. Therefore, it is not desirable to choose Ext2 for any server that needs high availability. However, because Ext2 does not maintain a journal and uses significantly less memory, it is sometimes faster than other file systems.


Easy Upgradability

Because Ext3 is based on the Ext2 code and shares its on-disk format as well as its metadata format, upgrades from Ext2 to Ext3 are very easy.

1.2.2 Ext3

Ext3 was designed by Stephen Tweedie. Unlike all other next-generation file systems, Ext3 does not follow a completely new design principle. It is based on Ext2. These two file systems are very closely related to each other. An Ext3 file system can be easily built on top of an Ext2 file system. The most important difference between Ext2 and Ext3 is that Ext3 supports journaling. In summary, Ext3 has three major advantages to offer:

  "Easy and Highly Reliable Upgrades from Ext2"
  "Reliability and Performance"
  "Converting an Ext2 File System into Ext3"

Easy and Highly Reliable Upgrades from Ext2

The code for Ext2 is the strong foundation on which Ext3 could become a highly acclaimed next-generation file system. Its reliability and solidity are elegantly combined in Ext3 with the advantages of a journaling file system. Unlike transitions to other journaling file systems, such as ReiserFS or XFS, which can be quite tedious (making backups of the entire file system and recreating it from scratch), a transition to Ext3 is a matter of minutes. It is also very safe, because recreating an entire file system from scratch might not work flawlessly. Considering the number of existing Ext2 systems that await an upgrade to a journaling file system, you can easily see why Ext3 might be of some importance to many system administrators. Downgrading from Ext3 to Ext2 is as easy as the upgrade. Just perform a clean unmount of the Ext3 file system and remount it as an Ext2 file system.

Reliability and Performance

Some other journaling file systems follow the "metadata-only" journaling approach. This means your metadata is always kept in a consistent state, but this cannot be automatically guaranteed for the file system data itself. Ext3 is designed to take care of both metadata and data. The degree of "care" can be customized. Enabling Ext3 in the data=journal mode offers maximum security (data integrity), but can slow down the system because both metadata and data are journaled. A relatively new approach is to use the data=ordered mode, which ensures both data and metadata integrity, but uses journaling only for metadata. The file system driver collects all data blocks that correspond to one metadata update. These data blocks are written to disk before the metadata is updated. As a result, consistency is achieved for metadata and data without sacrificing performance. A third option to use is data=writeback, which allows data to be written into the main file system after its metadata has been committed to the journal. This option is often considered the best in performance. It can, however, allow old data to reappear in files after crash and recovery while internal file system integrity is maintained. Ext3 uses the data=ordered option as the default.

Converting an Ext2 File System into Ext3

To convert an Ext2 file system to Ext3:

1 Create an Ext3 journal by running tune2fs -j as the root user.
  This creates an Ext3 journal with the default parameters.
  To specify how large the journal should be and on which device it should reside, run tune2fs -J instead together with the desired journal options size= and device=. More information about the tune2fs program is available in the tune2fs man page.

2 Edit the file /etc/fstab as the root user to change the file system type specified for the corresponding partition from ext2 to ext3, then save the changes.
  This ensures that the Ext3 file system is recognized as such. The change takes effect after the next reboot.

3 To boot a root file system that is set up as an Ext3 partition, include the modules ext3 and jbd in the initrd.
  3a Edit /etc/sysconfig/kernel as root, adding ext3 and jbd to the INITRD_MODULES variable, then save the changes.
  3b Run the mkinitrd command.
     This builds a new initrd and prepares it for use.

4 Reboot the system.
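The whole procedure can be summarized as the following command sketch, assuming a hypothetical partition /dev/sda2 that holds the root file system; skip steps 3a and 3b if the converted partition is not the root file system:

  tune2fs -j /dev/sda2        # 1.  add an Ext3 journal with the default parameters
  vi /etc/fstab               # 2.  change the partition's file system type from ext2 to ext3
  vi /etc/sysconfig/kernel    # 3a. add ext3 and jbd to the INITRD_MODULES variable
  mkinitrd                    # 3b. build a new initrd that includes the ext3 and jbd modules
  reboot                      # 4.  reboot so the change takes effect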

1.2.3 ReiserFS

Officially one of the key features of the 2.4 kernel release, ReiserFS has been available as a kernel patch for 2.2.x SUSE kernels since version 6.4. ReiserFS was designed by Hans Reiser and the Namesys development team. It has proven itself to be a powerful alternative to Ext2. Its key assets are better disk space utilization, better disk access performance, faster crash recovery, and reliability through data journaling.

  "Better Disk Space Utilization"
  "Better Disk Access Performance"
  "Fast Crash Recovery"
  "Reliability through Data Journaling"

Better Disk Space Utilization

In ReiserFS, all data is organized in a structure called a B*-balanced tree. The tree structure contributes to better disk space utilization because small files can be stored directly in the B* tree leaf nodes instead of being stored elsewhere and just maintaining a pointer to the actual disk location. In addition to that, storage is not allocated in chunks of 1 or 4 KB, but in portions of the exact size needed. Another benefit lies in the dynamic allocation of inodes. This keeps the file system more flexible than traditional file systems, like Ext2, where the inode density must be specified at file system creation time.

Better Disk Access Performance

For small files, file data and "stat_data" (inode) information are often stored next to each other. They can be read with a single disk I/O operation, meaning that only one access to disk is required to retrieve all the information needed.

Fast Crash Recovery

Using a journal to keep track of recent metadata changes makes a file system check a matter of seconds, even for huge file systems.


Reliability through Data Journaling

ReiserFS also supports data journaling and ordered data modes similar to the concepts outlined in "Ext3". The default mode is data=ordered, which ensures both data and metadata integrity, but uses journaling only for metadata.

1.2.4 XFS

Originally intended as the file system for their IRIX OS, SGI started XFS development in the early 1990s. The idea behind XFS was to create a high-performance 64-bit journaling file system to meet extreme computing challenges. XFS is very good at manipulating large files and performs well on high-end hardware. However, even XFS has a drawback. Like ReiserFS, XFS takes great care of metadata integrity, but less care of data integrity.

A quick review of XFS's key features explains why it might prove to be a strong competitor for other journaling file systems in high-end computing.

  "High Scalability through the Use of Allocation Groups"
  "High Performance through Efficient Management of Disk Space"
  "Preallocation to Avoid File System Fragmentation"

High Scalability through the Use of Allocation Groups

At the creation time of an XFS file system, the block device underlying the file system is divided into eight or more linear regions of equal size. Those are referred to as allocation groups. Each allocation group manages its own inodes and free disk space. Practically, allocation groups can be seen as file systems in a file system. Because allocation groups are rather independent of each other, more than one of them can be addressed by the kernel simultaneously. This feature is the key to XFS's great scalability. Naturally, the concept of independent allocation groups suits the needs of multiprocessor systems.

High Performance through Efficient Management of Disk Space

Free space and inodes are handled by B+ trees inside the allocation groups. The use of B+ trees greatly contributes to XFS's performance and scalability. XFS uses delayed allocation, which handles allocation by breaking the process into two pieces. A pending transaction is stored in RAM and the appropriate amount of space is reserved. XFS still does not decide where exactly (in file system blocks) the data should be stored. This decision is delayed until the last possible moment. Some short-lived temporary data might never make its way to disk, because it is obsolete by the time XFS decides where actually to save it. In this way, XFS increases write performance and reduces file system fragmentation. Because delayed allocation results in less frequent write events than in other file systems, it is likely that data loss after a crash during a write is more severe.

Preallocation to Avoid File System Fragmentation

Before writing the data to the file system, XFS reserves (preallocates) the free space needed for a file. Thus, file system fragmentation is greatly reduced. Performance is increased because the contents of a file are not distributed all over the file system.


1.3 Other Supported File Systems

Table 1-1 summarizes some other file systems supported by Linux. They are supported mainly to ensure compatibility and interchange of data with different kinds of media or foreign operating systems.

Table 1-1 File System Types in Linux

  cramfs    Compressed ROM file system: a compressed read-only file system for ROMs.
  hpfs      High Performance File System: the IBM OS/2 standard file system. Only supported in read-only mode.
  iso9660   Standard file system on CD-ROMs.
  minix     This file system originated from academic projects on operating systems and was the first file system used in Linux. Today, it is used as a file system for floppy disks.
  msdos     fat, the file system originally used by DOS, is today used by various operating systems.
  ncpfs     File system for mounting Novell volumes over networks.
  nfs       Network File System: data can be stored on any machine in a network and access might be granted via a network.
  smbfs     Server Message Block is used by products such as Windows to enable file access over a network.
  sysv      Used on SCO UNIX, Xenix, and Coherent (commercial UNIX systems for PCs).
  ufs       Used by BSD, SunOS, and NextStep. Only supported in read-only mode.
  umsdos    UNIX on MS-DOS: applied on top of a standard fat file system, achieves UNIX functionality (permissions, links, long filenames) by creating special files.
  vfat      Virtual FAT: extension of the fat file system (supports long filenames).
  ntfs      Windows NT file system; read-only.

1.4 Large File Support in Linux

Originally, Linux supported a maximum file size of 2 GB. This was enough before the explosion of multimedia and as long as no one tried to manipulate huge databases on Linux. Becoming more and more important for server computing, the kernel and C library were modified to support file sizes larger than 2 GB when using a new set of interfaces that applications must use. Today, almost all major file systems offer LFS support, allowing you to perform high-end computing. Table 1-2 offers an overview of the current limitations of Linux files and file systems.

Table 1-2 Maximum Sizes of File Systems (On-Disk Format)

  File System                               File Size (Bytes)    File System Size (Bytes)
  Ext2 or Ext3 (1 KB block size)            2^34 (16 GB)         2^41 (2 TB)
  Ext2 or Ext3 (2 KB block size)            2^38 (256 GB)        2^43 (8 TB)
  Ext2 or Ext3 (4 KB block size)            2^41 (2 TB)          2^44 - 4096 (16 TB - 4096 bytes)
  Ext2 or Ext3 (8 KB block size)            2^46 (64 TB)         2^45 (32 TB)
  (systems with 8 KB pages, like Alpha)
  ReiserFS v3                               2^46 (64 TB)         2^45 (32 TB)
  XFS                                       2^63 (8 EB)          2^63 (8 EB)
  NFSv2 (client side)                       2^31 (2 GB)          2^63 (8 EB)
  NFSv3 (client side)                       2^63 (8 EB)          2^63 (8 EB)

IMPORTANT: Table 1-2 describes the limitations regarding the on-disk format. The 2.6 Linux kernel imposes its own limits on the size of files and file systems handled by it. These are as follows:

  File Size: On 32-bit systems, files cannot exceed 2 TB (2^41 bytes).
  File System Size: File systems can be up to 2^73 bytes in size. However, this limit is still out of reach for the currently available hardware.

1.5 Additional Information

The File System Primer (http://wiki.novell.com/index.php/File_System_Primer) on the Novell Web site describes a variety of file systems for Linux. It discusses the file systems, why there are so many, and which ones are the best to use for which workloads and data.

Each of the file system projects described above maintains its own home page on which to find mailing list information, further documentation, and FAQs:

  E2fsprogs: Ext2/3/4 Filesystem Utilities (http://e2fsprogs.sourceforge.net/)
  Introducing Ext3 (http://www.ibm.com/developerworks/linux/library/l-fs7.html)
  Using ReiserFS with Linux (http://www.ibm.com/developerworks/aix/library/au-unix-reiserFS/)
  XFS: A High-Performance Journaling Filesystem (http://oss.sgi.com/projects/xfs/)
  OCFS2 Project (http://oss.oracle.com/projects/ocfs2/)

A comprehensive multipart tutorial about Linux file systems can be found at IBM developerWorks in the Advanced Filesystem Implementor's Guide (http://www-106.ibm.com/developerworks/library/l-fs.html). An in-depth comparison of file systems (not only Linux file systems) is available from the Wikipedia project in Comparison of File Systems (http://en.wikipedia.org/wiki/Comparison_of_file_systems#Comparison).


2 What's New for Storage in SLES 11

The features and behavior changes noted in this section were made for SUSE Linux Enterprise Server 11.

  Section 2.1, "What's New in SLES 11 SP1"
  Section 2.2, "What's New in SLES 11"

2.1 What's New in SLES 11 SP1

In addition to bug fixes, the features and behavior changes noted in this section were made for the SUSE Linux Enterprise Server 11 SP1 release.

  Section 2.1.1, "Saving iSCSI Target Information"
  Section 2.1.2, "Modifying Authentication Parameters in the iSCSI Initiator"
  Section 2.1.3, "Allowing Persistent Reservations for MPIO Devices"
  Section 2.1.4, "MDADM 3.0.2"
  Section 2.1.5, "Boot Loader Support for MDRAID External Metadata"
  Section 2.1.6, "YaST Install and Boot Support for MDRAID External Metadata"
  Section 2.1.7, "Improved Shutdown for MDRAID Arrays that Contain the Root File System"
  Section 2.1.8, "MD over iSCSI Devices"
  Section 2.1.9, "MD-SGPIO"
  Section 2.1.10, "Resizing LVM 2 Mirrors"
  Section 2.1.11, "Updating Storage Drivers for Adapters on IBM Servers"

2.1.1 Saving iSCSI Target Information

In the YaST > Network Services > iSCSI Target function, a Save option was added that allows you to export the iSCSI target information. This makes it easier to provide information to consumers of the resources.

2.1.2 Modifying Authentication Parameters in the iSCSI Initiator

In the YaST > Network Services > iSCSI Initiator function, you can modify the authentication parameters for connecting to a target device. Previously, you needed to delete the entry and re-create it in order to change the authentication information.


2.1.3 Allowing Persistent Reservations for MPIO Devices

A SCSI initiator can issue SCSI reservations for a shared storage device, which locks out SCSI initiators on other servers from accessing the device. These reservations persist across SCSI resets that might happen as part of the SCSI exception handling process.

The following are possible scenarios where SCSI reservations would be useful:

  In a simple SAN environment, persistent SCSI reservations help protect against administrator errors where an administrator attempts to add a LUN to one server while it is already in use by another server, which might result in data corruption. SAN zoning is typically used to prevent this type of error.

  In a high-availability environment with failover set up, persistent SCSI reservations help protect against errant servers connecting to SCSI devices that are reserved by other servers.

2.1.4 MDADM 3.0.2

Use the latest version of the Multiple Devices Administration (MDADM, mdadm) utility to take advantage of bug fixes and improvements.

2.1.5 Boot Loader Support for MDRAID External Metadata

Support was added to use the external metadata capabilities of the MDADM utility version 3.0 to install and run the operating system from RAID volumes defined by the Intel Matrix Storage Technology metadata format. This moves the functionality from the Device Mapper RAID (DMRAID) infrastructure to the Multiple Devices RAID (MDRAID) infrastructure, which offers the more mature RAID 5 implementation and offers a wider feature set of the MD kernel infrastructure. It allows a common RAID driver to be used across all metadata formats, including Intel, DDF (common RAID disk data format), and native MD metadata.

2.1.6 YaST Install and Boot Support for MDRAID External Metadata

The YaST installer tool added support for MDRAID External Metadata for RAID 0, 1, 10, 5, and 6. The installer can detect RAID arrays and whether the platform RAID capabilities are enabled. If RAID is enabled in the platform BIOS for Intel Matrix Storage Manager, it offers options for DMRAID, MDRAID (recommended), or none. The initrd was also modified to support assembling BIOS-based RAID arrays.

2.1.7 Improved Shutdown for MDRAID Arrays that Contain the Root File System

Shutdown scripts were modified to wait until all of the MDRAID arrays are marked clean. The operating system shutdown process now waits for a dirty bit to be cleared until all MDRAID volumes have finished write operations.

Changes were made to the startup script, shutdown script, and the initrd to consider whether the root (/) file system (the system volume that contains the operating system and application files) resides on a software RAID array. The metadata handler for the array is started early in the shutdown process to monitor the final root file system environment during the shutdown. The handler is excluded from the general killall events. The process also allows for writes to be quiesced and for the array's metadata dirty bit (which indicates whether an array needs to be resynchronized) to be cleared at the end of the shutdown.

2.1.8 MD over iSCSI Devices

The YaST installer now allows MD to be configured over iSCSI devices.

If RAID arrays are needed on boot, the iSCSI initiator software is loaded before boot.md so that the iSCSI targets are available to be auto-configured for the RAID.

For a new install, Libstorage creates an /etc/mdadm.conf file and adds the line AUTO -all. During an update, the line is not added. If /etc/mdadm.conf contains the line

AUTO -all

then no RAID arrays are auto-assembled unless they are explicitly listed in /etc/mdadm.conf.
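As a minimal sketch of this behavior, such an /etc/mdadm.conf might look as follows; the ARRAY line, its device name, and its UUID are hypothetical placeholders added for illustration:

  # Disable auto-assembly for all arrays that are not listed explicitly.
  AUTO -all
  # Explicitly listed arrays are still assembled (placeholder values):
  ARRAY /dev/md0 UUID=a1b2c3d4:e5f60718:293a4b5c:6d7e8f90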

2.1.9 MD-SGPIO

The MD-SGPIO utility is a standalone application that monitors RAID arrays via sysfs(2). Events trigger an LED change request that controls blinking for LED lights that are associated with each slot in an enclosure or a drive bay of a storage subsystem. It supports two types of LED systems:

  2-LED systems (Activity LED, Status LED)
  3-LED systems (Activity LED, Locate LED, Fail LED)

2.1.10 Resizing LVM 2 Mirrors

The lvresize, lvextend, and lvreduce commands that are used to resize logical volumes were modified to allow the resizing of LVM 2 mirrors. Previously, these commands reported errors if the logical volume was a mirror.
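Extending a mirrored logical volume now works the same way as for a linear volume. A minimal sketch; the volume group vg1 and the logical volume mirror_lv are placeholder names:

  # Grow a hypothetical mirrored LV by 10 GB:
  lvextend -L +10G /dev/vg1/mirror_lv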

2.1.11 Updating Storage Drivers for Adapters on IBM Servers

Update the following storage drivers to use the latest available versions to support storage adapters on IBM servers:

  Adaptec: aacraid, aic94xx
  Emulex: lpfc
  LSI: mptsas, megaraid_sas
    The mptsas driver now supports native EEH (Enhanced Error Handler) recovery, which is a key feature for all of the I/O devices for Power platform customers.
  qLogic: qla2xxx, qla3xxx, qla4xxx


2.2 What's New in SLES 11

The features and behavior changes noted in this section were made for the SUSE Linux Enterprise Server 11 release.

  Section 2.2.1, "EVMS2 Is Deprecated"
  Section 2.2.2, "Ext3 as the Default File System"
  Section 2.2.3, "JFS File System Is Deprecated"
  Section 2.2.4, "OCFS2 File System Is in the High Availability Release"
  Section 2.2.5, "/dev/disk/by-name Is Deprecated"
  Section 2.2.6, "Device Name Persistence in the /dev/disk/by-id Directory"
  Section 2.2.7, "Filters for Multipathed Devices"
  Section 2.2.8, "User-Friendly Names for Multipathed Devices"
  Section 2.2.9, "Advanced I/O Load-Balancing Options for Multipath"
  Section 2.2.10, "Location Change for Multipath Tool Callouts"
  Section 2.2.11, "Change from mpath to multipath for the mkinitrd -f Option"
  Section 2.2.12, "Change from Multibus to Failover as the Default Setting for the MPIO Path Grouping Policy"

2.2.1 EVMS2 Is Deprecated

The Enterprise Volume Management Systems (EVMS2) storage management solution is deprecated. All EVMS management modules have been removed from the SUSE Linux Enterprise Server 11 packages. Your EVMS-managed devices should be automatically recognized and managed by Linux Volume Manager 2 (LVM2) when you upgrade your system. For more information, see Evolution of Storage and Volume Management in SUSE Linux Enterprise (http://www.novell.com/linux/volumemanagement/strategy.html).

For information about managing storage with EVMS2 on SUSE Linux Enterprise Server 10, see the SUSE Linux Enterprise Server 10 SP3: Storage Administration Guide (http://www.novell.com/documentation/sles10/stor_admin/data/bookinfo.html).

2.2.2 Ext3 as the Default File System


The Ext3 file system has replaced ReiserFS as the default file system recommended by the YaST tools at installation time and when you create file systems. ReiserFS is still supported. For more information, see File System Support (http://www.novell.com/linux/techspecs.html?tab=2) on the SUSE Linux Enterprise 11 Tech Specs Web page.

2.2.3 JFS File System Is Deprecated

The JFS file system is no longer supported. The JFS utilities were removed from the distribution.

2.2.4 OCFS2 File System Is in the High Availability Release

The OCFS2 file system is fully supported as part of the SUSE Linux Enterprise High Availability Extension.


2.2.5 /dev/disk/by-name Is Deprecated

The /dev/disk/by-name path is deprecated in SUSE Linux Enterprise Server 11 packages.

2.2.6 Device Name Persistence in the /dev/disk/by-id Directory

In SUSE Linux Enterprise Server 11, the default multipath setup relies on udev to overwrite the existing symbolic links in the /dev/disk/by-id directory when multipathing is started. Before you start multipathing, the link points to the SCSI device by using its scsi-xxx name. When multipathing is running, the symbolic link points to the device by using its dm-uuid-xxx name. This ensures that the symbolic links in the /dev/disk/by-id path persistently point to the same device regardless of whether multipathing is started or not. The configuration files (such as lvm.conf and md.conf) do not need to be modified because they automatically point to the correct device.

See the following sections for more information about how this behavior change affects other features:

Section 2.2.7, Filters for Multipathed Devices, on page 23
Section 2.2.8, User-Friendly Names for Multipathed Devices, on page 23
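As an illustration of the persistence behavior described above, listing the links before and after multipathing is started might show the same by-id name re-pointed from the SCSI device node to the device mapper node; the WWID and targets below are hypothetical:

ls -l /dev/disk/by-id/
# before multipathing: scsi-36006048000284533... -> ../../sdb
# after multipathing:  scsi-36006048000284533... -> ../../dm-0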

2.2.7 Filters for Multipathed Devices


The deprecation of the /dev/disk/by-name directory (as described in Section 2.2.5, /dev/disk/by-name Is Deprecated, on page 23) affects how you set up filters for multipathed devices in the configuration files. If you used the /dev/disk/by-name device name path for the multipath device filters in the /etc/lvm/lvm.conf file, you need to modify the file to use the /dev/disk/by-id path.

Consider the following when setting up filters that use the by-id path:

The /dev/disk/by-id/scsi-* device names are persistent and created for exactly this purpose.
Do not use the /dev/disk/by-id/dm-* name in the filters. These are symbolic links to the Device-Mapper devices, and result in reporting duplicate PVs in response to a pvscan command. The names appear to change from LVM-pvuuid to dm-uuid and back to LVM-pvuuid.

For information about setting up filters, see Section 7.2.3, Using LVM2 on Multipath Devices, on page 52.

2.2.8 User-Friendly Names for Multipathed Devices


A change in how multipathed device names are handled in the /dev/disk/by-id directory (as described in Section 2.2.6, Device Name Persistence in the /dev/disk/by-id Directory, on page 23) affects your setup for user-friendly names because the two names for the device differ. You must modify the configuration files to scan only the device mapper names after multipathing is configured.

For example, you need to modify the lvm.conf file to scan using the multipathed device names by specifying the /dev/disk/by-id/dm-uuid-.*-mpath-.* path instead of /dev/disk/by-id.
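A sketch of the resulting lvm.conf filter (the same pattern that is shown in Section 7.2.3, Using LVM2 on Multipath Devices) is:

filter = [ "a|/dev/disk/by-id/dm-uuid-.*-mpath-.*|", "r|.*|" ]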


2.2.9 Advanced I/O Load-Balancing Options for Multipath


The following advanced I/O load-balancing options are available for Device Mapper Multipath, in addition to round-robin:

Least-pending
Length-load-balancing
Service-time

For information, see path_selector in Understanding Priority Groups and Attributes on page 72.

2.2.10 Location Change for Multipath Tool Callouts


The mpath_* prio_callouts for the Device Mapper Multipath tool have been moved to shared libraries in /lib/libmultipath/lib*. By using shared libraries, the callouts are loaded into memory on daemon startup. This helps avoid a system deadlock on an all-paths-down scenario where the programs need to be loaded from the disk, which might not be available at this point.

2.2.11 Change from mpath to multipath for the mkinitrd -f Option


The option for adding Device Mapper Multipath services to the initrd has changed from -f mpath to -f multipath.

To make a new initrd, the command is now:
mkinitrd -f multipath

2.2.12 Change from Multibus to Failover as the Default Setting for the MPIO Path Grouping Policy

The default setting for the path_grouping_policy in the /etc/multipath.conf file has changed from multibus to failover.

For information about configuring the path_grouping_policy, see Section 7.6, Configuring Path Failover Policies and Priorities, on page 71.


Planning a Storage Solution

Consider what your storage needs are and how you can effectively manage and divide your storage space to best meet your needs. Use the information in this section to help plan your storage deployment for file systems on your SUSE Linux Enterprise Server 11 server.

Section 3.1, Partitioning Devices, on page 25
Section 3.2, Multipath Support, on page 25
Section 3.3, Software RAID Support, on page 25
Section 3.4, File System Snapshots, on page 25
Section 3.5, Backup and Antivirus Support, on page 26

3.1 Partitioning Devices

For information about using the YaST Expert Partitioner, see Using the YaST Partitioner in the SUSE Linux Enterprise Server 11 Installation and Administration Guide.

3.2 Multipath Support

Linux supports using multiple I/O paths for fault-tolerant connections between the server and its storage devices. Linux multipath support is disabled by default. If you use a multipath solution that is provided by your storage subsystem vendor, you do not need to configure the Linux multipath separately.

3.3 Software RAID Support

Linux supports hardware and software RAID devices. If you use hardware RAID devices, software RAID devices are unnecessary. You can use both hardware and software RAID devices on the same server.

To maximize the performance benefits of software RAID devices, partitions used for the RAID should come from different physical devices. For software RAID 1 devices, the mirrored partitions cannot share any disks in common.

3.4 File System Snapshots

Linux supports file system snapshots.
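For example, on a volume managed by LVM, a snapshot can be created and mounted with the standard LVM tools; this is a minimal sketch with hypothetical volume and mount point names:

# Create a 1 GB snapshot of logical volume lv1 in volume group vg1
lvcreate -s -L 1G -n lv1-snap /dev/vg1/lv1
# Mount the snapshot read-only, for example for a consistent backup
mount -o ro /dev/vg1/lv1-snap /mnt/snap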


3.5 Backup and Antivirus Support

Section 3.5.1, Open Source Backup, on page 26
Section 3.5.2, Commercial Backup and Antivirus Support, on page 26

3.5.1 Open Source Backup

Open source tools for backing up data on Linux include tar, cpio, and rsync. See the man pages for these tools for more information.

PAX: POSIX File System Archiver. It supports cpio and tar, which are the two most common forms of standard archive (backup) files. See the man page for more information.
Amanda: The Advanced Maryland Automatic Network Disk Archiver. See www.amanda.org (http://www.amanda.org/).

3.5.2 Commercial Backup and Antivirus Support

Novell Open Enterprise Server (OES) 2 Support Pack 1 for Linux is a product that includes SUSE Linux Enterprise Server (SLES) 10 Support Pack 2. Antivirus and backup software vendors who support OES 2 SP1 also support SLES 10 SP2. You can visit the vendor Web sites to find out about their scheduled support of SLES 11.

For a current list of possible backup and antivirus software vendors, see Novell Open Enterprise Server Partner Support: Backup and Antivirus Support (http://www.novell.com/products/openenterpriseserver/partners_communities.html). This list is updated quarterly.


LVM Configuration

This section briefly describes the principles behind Logical Volume Manager (LVM) and its basic features that make it useful under many circumstances. The YaST LVM configuration can be reached from the YaST Expert Partitioner. This partitioning tool enables you to edit and delete existing partitions and create new ones that should be used with LVM.

WARNING: Using LVM might be associated with increased risk, such as data loss. Risks also include application crashes, power failures, and faulty commands. Save your data before implementing LVM or reconfiguring volumes. Never work without a backup.

Section 4.1, Understanding the Logical Volume Manager, on page 27
Section 4.2, Creating LVM Partitions, on page 29
Section 4.3, Creating Volume Groups, on page 30
Section 4.4, Configuring Physical Volumes, on page 32
Section 4.5, Configuring Logical Volumes, on page 33
Section 4.6, Resizing a Volume Group, on page 36
Section 4.7, Resizing a Logical Volume with YaST, on page 36
Section 4.8, Resizing a Logical Volume with Commands, on page 37
Section 4.9, Deleting a Volume Group, on page 38
Section 4.10, Deleting an LVM Partition (Physical Volume), on page 38

4.1 Understanding the Logical Volume Manager

LVM enables flexible distribution of hard disk space over several file systems. It was developed because the need to change the segmentation of hard disk space might arise only after the initial partitioning has already been done during installation. Because it is difficult to modify partitions on a running system, LVM provides a virtual pool (volume group or VG) of storage space from which logical volumes (LVs) can be created as needed. The operating system accesses these LVs instead of the physical partitions. Volume groups can span more than one disk, so that several disks or parts of them can constitute one single VG. In this way, LVM provides a kind of abstraction from the physical disk space that allows its segmentation to be changed in a much easier and safer way than through physical repartitioning.

Figure 4-1 compares physical partitioning (left) with LVM segmentation (right). On the left side, one single disk has been divided into three physical partitions (PART), each with a mount point (MP) assigned so that the operating system can access them. On the right side, two disks have been divided into two and three physical partitions each. Two LVM volume groups (VG 1 and VG 2) have been defined. VG 1 contains two partitions from DISK 1 and one from DISK 2. VG 2 contains the remaining two partitions from DISK 2.


Figure 4-1 Physical Partitioning versus LVM

In LVM, the physical disk partitions that are incorporated in a volume group are called physical volumes (PVs). Within the volume groups in Figure 4-1, four logical volumes (LV 1 through LV 4) have been defined, which can be used by the operating system via the associated mount points. The border between different logical volumes need not be aligned with any partition border. See the border between LV 1 and LV 2 in this example.

LVM features:

Several hard disks or partitions can be combined in a large logical volume.
Provided the configuration is suitable, an LV (such as /usr) can be enlarged when the free space is exhausted.
Using LVM, it is possible to add hard disks or LVs in a running system. However, this requires hot-swappable hardware that is capable of such actions.
It is possible to activate a striping mode that distributes the data stream of a logical volume over several physical volumes. If these physical volumes reside on different disks, this can improve the reading and writing performance just like RAID 0.
The snapshot feature enables consistent backups (especially for servers) in the running system.

With these features, using LVM already makes sense for heavily used home PCs or small servers. If you have a growing data stock, as in the case of databases, music archives, or user directories, LVM is especially useful. It allows file systems that are larger than the physical hard disk. Another advantage of LVM is that up to 256 LVs can be added. However, keep in mind that working with LVM is different from working with conventional partitions.

Starting from kernel version 2.6, LVM version 2 is available, which is downward-compatible with the previous LVM and enables the continued management of old volume groups. When creating new volume groups, decide whether to use the new format or the downward-compatible version. LVM 2 does not require any kernel patches. It makes use of the device mapper integrated in kernel 2.6. This kernel only supports LVM version 2. Therefore, when talking about LVM, this section always refers to LVM version 2.

You can manage new or existing LVM storage objects by using the YaST Partitioner. Instructions and further information about configuring LVM is available in the official LVM HOWTO (http://tldp.org/HOWTO/LVM-HOWTO/).
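As a command-line illustration of the concepts in this section, the following minimal sketch initializes two partitions as physical volumes, combines them into a volume group, and carves out one logical volume. The device and volume names are hypothetical; the YaST Partitioner performs the equivalent steps:

# Initialize two LVM partitions as physical volumes
pvcreate /dev/sdb1 /dev/sdc1
# Combine them into one volume group
vgcreate vg1 /dev/sdb1 /dev/sdc1
# Carve out a 10 GB logical volume and put a file system on it
lvcreate -L 10G -n lv1 vg1
mkfs.ext3 /dev/vg1/lv1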

4.2 Creating LVM Partitions

For each disk, partition the free space that you want to use for LVM as 0x8E Linux LVM. You can create one or multiple LVM partitions on a single device. It is not necessary for all of the partitions on a device to be LVM partitions.

You can use the Volume Group function to group one or more LVM partitions into a logical pool of space called a volume group, then carve out one or more logical volumes from the space in the volume group.

In the YaST Partitioner, only the free space on the disk is made available to you as you are creating LVM partitions. If you want to use the entire disk for a single LVM partition and other partitions already exist on the disk, you must first remove all of the existing partitions to free the space before you can use that space in an LVM partition.

WARNING: Deleting a partition destroys all of the data in the partition.
1 Log in as the root user, then open YaST.
2 In YaST, open the Partitioner.
3 (Optional) Remove one or more existing partitions to free that space and make it available for the LVM partition you want to create.
  For information, see Section 4.10, Deleting an LVM Partition (Physical Volume), on page 38.


4 On the Partitions page, click Add.
5 Under New Partition Type, select Primary Partition or Extended Partition, then click Next.
6 Specify the New Partition Size, then click Next.
  Maximum Size: Use all of the free available space on the disk.
  Custom Size: Specify a size up to the amount of free available space on the disk.
  Custom Region: Specify the start and end cylinder of the free available space on the disk.


7 Configure the partition format:
  1. Under Formatting Options, select Do not format.
  2. From the File System ID drop-down list, select 0x8E Linux LVM as the partition identifier.
  3. Under Mounting Options, select Do not mount partition.
8 Click Finish.
  The partitions are not actually created until you click Next and Finish to exit the partitioner.
9 Repeat Step 4 through Step 8 for each Linux LVM partition you want to add.
10 Click Next, verify that the new Linux LVM partitions are listed, then click Finish to exit the partitioner.
11 (Optional) Continue with the Volume Group configuration as described in Section 4.3, Creating Volume Groups, on page 30.

4.3 Creating Volume Groups

An LVM volume group organizes the Linux LVM partitions into a logical pool of space. You can carve out logical volumes from the available space in the group. The Linux LVM partitions in a group can be on the same or different disks. You can add LVM partitions from the same or different disks to expand the size of the group. Assign all partitions reserved for LVM to a volume group. Otherwise, the space on the partition remains unused.

1 Log in as the root user, then open YaST.
2 In YaST, open the Partitioner.
3 In the left panel, click Volume Management.
  A list of existing volume groups appears in the right panel.


4 At the lower left of the Volume Management page, click Add Volume Group.
5 Specify the Volume Group Name.
  If you are creating a volume group at install time, the name system is suggested for a volume group that will contain the SUSE Linux Enterprise Server system files.
6 Specify the Physical Extent Size.
  The Physical Extent Size defines the size of a physical block in the volume group. All the disk space in a volume group is handled in chunks of this size. Values can be from 1 KB to 16 GB in powers of 2. This value is normally set to 4 MB.
  In LVM1, a 4 MB physical extent allowed a maximum LV size of 256 GB because it supports only up to 65534 extents per LV. LVM2 does not restrict the number of physical extents. Having a large number of extents has no impact on I/O performance to the logical volume, but it slows down the LVM tools.
  IMPORTANT: Different physical extent sizes should not be mixed in a single VG. The extent size should not be modified after the initial setup.
7 In the Available Physical Volumes list, select the Linux LVM partitions that you want to make part of this volume group, then click Add to move them to the Selected Physical Volumes list.
8 Click Finish.
  The new group appears in the Volume Groups list.
9 On the Volume Management page, click Next, verify that the new volume group is listed, then click Finish.
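For reference, the equivalent operation on the command line uses vgcreate, where the -s option sets the physical extent size. The partition names here are hypothetical:

# Create volume group vg1 with an 8 MB physical extent size
vgcreate -s 8M vg1 /dev/sdb1 /dev/sdc1
# Verify the PE Size field in the output
vgdisplay vg1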


4.4 Configuring Physical Volumes

When the Linux LVM partitions are assigned to a volume group, the partitions are then referred to as physical volumes.

Figure 4-2 Physical Volumes in the Volume Group Named Home

To add more physical volumes to an existing volume group:

1 Log in as the root user, then open YaST.
2 In YaST, open the Partitioner.
3 In the left panel, select Volume Management and expand the list of groups.
4 Under Volume Management, select the volume group, then click the Overview tab.


5 At the bottom of the page, click Resize.
6 Select a physical volume (LVM partition) from the Available Physical Volumes list, then click Add to move it to the Selected Physical Volumes list.
7 Click Finish.
8 Click Next, verify that the changes are listed, then click Finish.

4.5 Configuring Logical Volumes

After a volume group has been filled with physical volumes, use the Logical Volumes dialog (see Figure 4-3) to define and manage the logical volumes that the operating system should use. This dialog lists all of the logical volumes in that volume group. You can use the Add, Edit, and Remove options to manage the logical volumes. Assign at least one logical volume to each volume group. You can create new logical volumes as needed until all free space in the volume group has been exhausted.

Figure 4-3 Logical Volume Management

It is possible to distribute the data stream in the logical volume among several physical volumes (striping). If these physical volumes reside on different hard disks, this generally results in a better reading and writing performance (like RAID 0). However, a striping LV with n stripes can only be created correctly if the hard disk space required by the LV can be distributed evenly to n physical volumes. For example, if only two physical volumes are available, a logical volume with three stripes is impossible.

1 Log in as the root user, then open YaST.
2 In YaST, open the Partitioner.
3 In the left panel, select Volume Management and expand it to see the list of volume groups.
4 Under Volume Management, select the volume group, then click the Logical Volumes tab.
5 In the lower left, click Add to open the Add Logical Volume dialog.
6 Specify the Name for the logical volume, then click Next.


7 Specify the size of the volume and whether to use multiple stripes.
  1. Specify the size of the logical volume, up to the maximum size available. The amount of free space in the current volume group is shown next to the Maximum Size option.
  2. Specify the number of stripes.
  WARNING: YaST has no chance at this point to verify the correctness of your entries concerning striping. Any mistake made here is apparent only later when the LVM is implemented on disk.
8 Specify the formatting options for the logical volume:
  1. Under Formatting Options, select Format partition, then select the format type from the File system drop-down list, such as Ext3.
  2. Under Mounting Options, select Mount partition, then select the mount point. The files stored on this logical volume can be found at this mount point on the installed system.
  3. Click Fstab Options to add special mounting options for the volume.
9 Click Finish.
10 Click Next, verify that the changes are listed, then click Finish.


4.6 Resizing a Volume Group

You can add and remove Linux LVM partitions from a volume group to expand or reduce its size.

WARNING: Removing a partition can result in data loss if the partition is in use by a logical volume.

1 Log in as the root user, then open YaST.
2 In YaST, open the Partitioner.
3 In the left panel, select Volume Management and expand it to see the list of volume groups.
4 Under Volume Management, select the volume group, then click the Overview tab.
5 At the bottom of the page, click Resize.
6 Do one of the following:
  Add: Expand the size of the volume group by moving one or more physical volumes (LVM partitions) from the Available Physical Volumes list to the Selected Physical Volumes list.
  Remove: Reduce the size of the volume group by moving one or more physical volumes (LVM partitions) from the Selected Physical Volumes list to the Available Physical Volumes list.
7 Click Finish.
8 Click Next, verify that the changes are listed, then click Finish.

4.7 Resizing a Logical Volume with YaST

1 Log in as the root user, then open YaST.
2 In YaST, open the Partitioner.
3 In the left panel, select Volume Management and expand it to see the list of volume groups.
4 Under Volume Management, select the volume group, then click the Logical Volumes tab.


5 At the bottom of the page, click Resize to open the Resize Logical Volume dialog.
6 Use the slider to expand or reduce the size of the logical volume.
  WARNING: Reducing the size of a logical volume that contains data can cause data corruption.
7 Click OK.
8 Click Next, verify that the change is listed, then click Finish.

4.8 Resizing a Logical Volume with Commands

The lvresize, lvextend, and lvreduce commands are used to resize logical volumes. See the man pages for each of these commands for syntax and options information.

You can also increase the size of a logical volume by using the YaST Partitioner. YaST uses parted(8) to grow the partition.

To extend an LV, there must be enough unallocated space available on the VG.

LVs can be extended or shrunk while they are being used, but this may not be true for a file system on them. Extending or shrinking the LV does not automatically modify the size of file systems in the volume. You must use a different command to grow the file system afterwards. For information about resizing file systems, see Chapter 5, Resizing File Systems, on page 39.

Make sure you use the right sequence:

If you extend an LV, you must extend the LV before you attempt to grow the file system.
If you shrink an LV, you must shrink the file system before you attempt to shrink the LV.

To extend the size of a logical volume:

1 Open a terminal console, then log in as the root user.
2 If the logical volume contains file systems that are hosted for a virtual machine (such as a Xen VM), shut down the VM.
3 Dismount the file systems on the logical volume.
4 At the terminal console prompt, enter the following command to grow the size of the logical volume:


lvextend -L +size /dev/vgname/lvname

For size, specify the amount of space you want to add to the logical volume, such as 10GB. Replace /dev/vgname/lvname with the Linux path to the logical volume, such as /dev/vg1/v1. For example:

lvextend -L +10GB /dev/vg1/v1

For example, to extend an LV with a (mounted and active) ReiserFS on it by 10 GB:

lvextend -L +10G /dev/vgname/lvname
resize_reiserfs -s +10GB -f /dev/vgname/lvname

For example, to shrink an LV with a ReiserFS on it by 5 GB (shrink the file system first, then the LV):

umount /mountpoint-of-LV
resize_reiserfs -s -5GB /dev/vgname/lvname
lvreduce -L -5GB /dev/vgname/lvname
mount /dev/vgname/lvname /mountpoint-of-LV

4.9 Deleting a Volume Group

WARNING: Deleting a volume group destroys all of the data in each of its member partitions.

1 Log in as the root user, then open YaST.
2 In YaST, open the Partitioner.
3 In the left panel, select Volume Management and expand the list of groups.
4 Under Volume Management, select the volume group, then click the Overview tab.
5 At the bottom of the page, click Delete, then click Yes to confirm the deletion.
6 Click Next, verify that the deleted volume group is listed (deletion is indicated by a red-colored font), then click Finish.

4.10 Deleting an LVM Partition (Physical Volume)

WARNING: Deleting a partition destroys all of the data in the partition.

1 Log in as the root user, then open YaST.
2 In YaST, open the Partitioner.
3 If the Linux LVM partition is in use as a member of a volume group, remove the partition from the volume group, or delete the volume group.
4 In the YaST Partitioner under Hard Disks, select the device (such as sdc).
5 On the Partitions page, select a partition that you want to remove, click Delete, then click Yes to confirm the deletion.
6 Click Next, verify that the deleted partition is listed (deletion is indicated by a red-colored font), then click Finish.


Resizing File Systems

When data needs grow for a volume, you might need to increase the amount of space allocated to its file system.

Section 5.1, Guidelines for Resizing, on page 39
Section 5.2, Increasing the Size of an Ext2, Ext3, or Ext4 File System, on page 40
Section 5.3, Increasing the Size of a Reiser File System, on page 41
Section 5.4, Decreasing the Size of an Ext2, Ext3, or Ext4 File System, on page 42
Section 5.5, Decreasing the Size of a Reiser File System, on page 42

5.1 Guidelines for Resizing

Resizing any partition or file system involves some risks that can potentially result in losing data.

WARNING: To avoid data loss, make sure to back up your data before you begin any resizing task.

Consider the following guidelines when planning to resize a file system.

Section 5.1.1, File Systems that Support Resizing, on page 39
Section 5.1.2, Increasing the Size of a File System, on page 40
Section 5.1.3, Decreasing the Size of a File System, on page 40

5.1.1 File Systems that Support Resizing

The file system must support resizing in order to take advantage of increases in available space for the volume. In SUSE Linux Enterprise Server 11, file system resizing utilities are available for the file systems Ext2, Ext3, Ext4, and ReiserFS. The utilities support increasing and decreasing the size as follows:

Table 5-1 File System Support for Resizing

File System   Utility           Increase Size (Grow)   Decrease Size (Shrink)
Ext2          resize2fs         Offline only           Offline only
Ext3          resize2fs         Online or offline      Offline only
Ext4          resize2fs         Offline only           Offline only
ReiserFS      resize_reiserfs   Online or offline      Offline only


5.1.2 Increasing the Size of a File System

You can grow a file system to the maximum space available on the device, or specify an exact size. Make sure to grow the size of the device or logical volume before you attempt to increase the size of the file system.

When specifying an exact size for the file system, make sure the new size satisfies the following conditions:

The new size must be greater than the size of the existing data; otherwise, data loss occurs.
The new size must be equal to or less than the current device size because the file system size cannot extend beyond the space available.

5.1.3 Decreasing the Size of a File System

When decreasing the size of the file system on a device, make sure the new size satisfies the following conditions:

The new size must be greater than the size of the existing data; otherwise, data loss occurs.
The new size must be equal to or less than the current device size because the file system size cannot extend beyond the space available.

If you plan to also decrease the size of the logical volume that holds the file system, make sure to decrease the size of the file system before you attempt to decrease the size of the device or logical volume.

5.2 Increasing the Size of an Ext2, Ext3, or Ext4 File System

The size of Ext2, Ext3, and Ext4 file systems can be increased by using the resize2fs command when the file system is unmounted. The size of an Ext3 file system can also be increased by using the resize2fs command when the file system is mounted.

1 Open a terminal console, then log in as the root user or equivalent.
2 If the file system is Ext2 or Ext4, you must unmount the file system. The Ext3 file system can be mounted or unmounted.
3 Increase the size of the file system using one of the following methods:
  To extend the file system size to the maximum available size of the device called /dev/sda1, enter

  resize2fs /dev/sda1

  If a size parameter is not specified, the size defaults to the size of the partition.
  To extend the file system to a specific size, enter

  resize2fs /dev/sda1 size

  The size parameter specifies the requested new size of the file system. If no units are specified, the unit of the size parameter is the block size of the file system. Optionally, the size parameter can be suffixed by one of the following unit designators: s for 512-byte sectors; K for kilobytes (1 kilobyte is 1024 bytes); M for megabytes; or G for gigabytes.

  Wait until the resizing is completed before continuing.
4 If the file system is not mounted, mount it now.


  For example, to mount an Ext2 file system for a device named /dev/sda1 at mount point /home, enter

  mount -t ext2 /dev/sda1 /home

5 Check the effect of the resize on the mounted file system by entering

  df -h

  The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system. The -h option prints sizes in human-readable format, such as 1K, 234M, or 2G.

5.3 Increasing the Size of a Reiser File System

A ReiserFS file system can be increased in size while mounted or unmounted.

1 Open a terminal console, then log in as the root user or equivalent.
2 Increase the size of the file system on the device called /dev/sda2, using one of the following methods:
  To extend the file system size to the maximum available size of the device, enter

  resize_reiserfs /dev/sda2

  When no size is specified, this increases the volume to the full size of the partition.
  To extend the file system to a specific size, enter

  resize_reiserfs -s size /dev/sda2

  Replace size with the desired size in bytes. You can also specify units on the value, such as 50000K (kilobytes), 250M (megabytes), or 2G (gigabytes). Alternatively, you can specify an increase to the current size by prefixing the value with a plus (+) sign. For example, the following command increases the size of the file system on /dev/sda2 by 500 MB:

  resize_reiserfs -s +500M /dev/sda2

  Wait until the resizing is completed before continuing.
3 If the file system is not mounted, mount it now.
  For example, to mount a ReiserFS file system for device /dev/sda2 at mount point /home, enter

  mount -t reiserfs /dev/sda2 /home

4 Check the effect of the resize on the mounted file system by entering

  df -h

  The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system. The -h option prints sizes in human-readable format, such as 1K, 234M, or 2G.


5.4 Decreasing the Size of an Ext2, Ext3, or Ext4 File System

You can shrink the size of the Ext2, Ext3, or Ext4 file systems when the volume is unmounted.

1 Open a terminal console, then log in as the root user or equivalent.
2 Unmount the file system.
3 Decrease the size of the file system on the device such as /dev/sda1 by entering

  resize2fs /dev/sda1 <size>

  Replace size with an integer value in kilobytes for the desired size. (A kilobyte is 1024 bytes.)
  Wait until the resizing is completed before continuing.
4 Mount the file system. For example, to mount an Ext2 file system for a device named /dev/sda1 at mount point /home, enter

  mount -t ext2 /dev/sda1 /home

5 Check the effect of the resize on the mounted file system by entering

  df -h

  The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system. The -h option prints sizes in human-readable format, such as 1K, 234M, or 2G.

5.5 Decreasing the Size of a Reiser File System

Reiser file systems can be reduced in size only if the volume is unmounted.

1 Open a terminal console, then log in as the root user or equivalent.
2 Unmount the device by entering

  umount /mnt/point

  If the partition you are attempting to decrease in size contains system files (such as the root (/) volume), unmounting is possible only when booting from a bootable CD or floppy.
3 Decrease the size of the file system on a device called /dev/sda2 by entering

  resize_reiserfs -s size /dev/sda2

  Replace size with the desired size in bytes. You can also specify units on the value, such as 50000K (kilobytes), 250M (megabytes), or 2G (gigabytes). Alternatively, you can specify a decrease to the current size by prefixing the value with a minus (-) sign. For example, the following command reduces the size of the file system on /dev/sda2 by 500 MB:

  resize_reiserfs -s -500M /dev/sda2

  Wait until the resizing is completed before continuing.
4 Mount the file system by entering

  mount -t reiserfs /dev/sda2 /mnt/point

5 Check the effect of the resize on the mounted file system by entering

  df -h

  The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system. The -h option prints sizes in human-readable format, such as 1K, 234M, or 2G.


Using UUIDs to Mount Devices

This section describes the optional use of UUIDs instead of device names to identify file system devices in the boot loader file and the /etc/fstab file.

Section 6.1, Naming Devices with udev, on page 45
Section 6.2, Understanding UUIDs, on page 45
Section 6.3, Using UUIDs in the Boot Loader and /etc/fstab File (x86), on page 46
Section 6.4, Using UUIDs in the Boot Loader and /etc/fstab File (IA64), on page 47
Section 6.5, Additional Information, on page 48

6.1 Naming Devices with udev

In the Linux 2.6 and later kernel, udev provides a userspace solution for the dynamic /dev directory, with persistent device naming. As part of the hotplug system, udev is executed if a device is added to or removed from the system.

A list of rules is used to match against specific device attributes. The udev rules infrastructure (defined in the /etc/udev/rules.d directory) provides stable names for all disk devices, regardless of their order of recognition or the connection used for the device. The udev tools examine every appropriate block device that the kernel creates to apply naming rules based on certain buses, drive types, or file systems. For information about how to define your own rules for udev, see Writing udev Rules (http://reactivated.net/writing_udev_rules.html).

Along with the dynamic kernel-provided device node name, udev maintains classes of persistent symbolic links pointing to the device in the /dev/disk directory, which is further categorized by the by-id, by-label, by-path, and by-uuid subdirectories.

NOTE: Other programs besides udev, such as LVM or md, might also generate UUIDs, but they are not listed in /dev/disk.

6.2 Understanding UUIDs

A UUID (Universally Unique Identifier) is a 128-bit number for a file system that is unique on both the local system and across other systems. It is randomly generated with system hardware information and time stamps as part of its seed. UUIDs are commonly used to uniquely tag devices.

Section 6.2.1, Using UUIDs to Assemble or Activate File System Devices, on page 46
Section 6.2.2, Finding the UUID for a File System Device, on page 46


6.2.1 Using UUIDs to Assemble or Activate File System Devices

The UUID is always unique to the partition and does not depend on the order in which it appears or where it is mounted. With certain SAN devices attached to the server, the system partitions are renamed and moved to be the last device. For example, if root (/) is assigned to /dev/sda1 during the install, it might be assigned to /dev/sdg1 after the SAN is connected. One way to avoid this problem is to use the UUID in the boot loader and /etc/fstab files for the boot device.

The device ID assigned by the manufacturer for a drive never changes, no matter where the device is mounted, so it can always be found at boot. The UUID is a property of the file system and can change if you reformat the drive. In a boot loader file, you typically specify the location of the device (such as /dev/sda1) to mount it at system boot. The boot loader can also mount devices by their UUIDs and administrator-specified volume labels. However, if you use a label and file location, you cannot change the label name when the partition is mounted.

You can use the UUID as criterion for assembling and activating software RAID devices. When a RAID is created, the md driver generates a UUID for the device and stores the value in the md superblock.

6.2.2 Finding the UUID for a File System Device

You can find the UUID for any block device in the /dev/disk/by-uuid directory. For example, a UUID looks like this:

e014e482-1c2d-4d09-84ec-61b3aefde77a
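Listing the directory shows the mapping from each UUID to its device node; the output resembles the following sketch (the target device name is hypothetical):

ls -l /dev/disk/by-uuid
# lrwxrwxrwx 1 root root 10 ... e014e482-1c2d-4d09-84ec-61b3aefde77a -> ../../sda1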

6.3 Using UUIDs in the Boot Loader and /etc/fstab File (x86)

After the install, you can optionally use the following procedure to configure the UUID for the system device in the boot loader and /etc/fstab files for your x86 system.

Before you begin, make a copy of the /boot/grub/menu.lst file and the /etc/fstab file.

1 Install the SUSE Linux Enterprise Server for x86 with no SAN devices connected.
2 After the install, boot the system.
3 Open a terminal console as the root user or equivalent.
4 Navigate to the /dev/disk/by-uuid directory to find the UUID for the device where you installed /boot, /root, and swap.
  4a At the terminal console prompt, enter

     cd /dev/disk/by-uuid

  4b List all partitions by entering

     ll

  4c Find the UUID, such as

     e014e482-1c2d-4d09-84ec-61b3aefde77a -> /dev/sda1

5 Edit the /boot/grub/menu.lst file, using the Boot Loader option in YaST2 or using a text editor.
  For example, change

  kernel /boot/vmlinuz root=/dev/sda1


  to

  kernel /boot/vmlinuz root=/dev/disk/by-uuid/e014e482-1c2d-4d09-84ec-61b3aefde77a

  IMPORTANT: If you make a mistake, you can boot the server without the SAN connected, and fix the error by using the backup copy of the /boot/grub/menu.lst file as a guide.

  If you use the Boot Loader option in YaST, there is a defect where it adds some duplicate lines to the boot loader file when you change a value. Use an editor to remove the following duplicate lines:

  color white/blue black/light-gray
  default 0
  timeout 8
  gfxmenu (sd0,1)/boot/message

  When you use YaST to change the way that the root (/) device is mounted (such as by UUID or by label), the boot loader configuration needs to be saved again to make the change effective for the boot loader.

6 As the root user or equivalent, do one of the following to place the UUID in the /etc/fstab file:
  Open YaST to System > Partitioner, select the device of interest, then modify Fstab Options.
  Edit the /etc/fstab file to modify the system device from the location to the UUID.
    For example, if the root (/) volume has a device path of /dev/sda1 and its UUID is e014e482-1c2d-4d09-84ec-61b3aefde77a, change the line entry from

    /dev/sda1  /  reiserfs  acl,user_xattr  1 1

    to

    UUID=e014e482-1c2d-4d09-84ec-61b3aefde77a  /  reiserfs  acl,user_xattr  1 1

    IMPORTANT: Do not leave stray characters or spaces in the file.

6.4 Using UUIDs in the Boot Loader and /etc/fstab File (IA64)

After the install, use the following procedure to configure the UUID for the system device in the boot loader and /etc/fstab files for your IA64 system. IA64 uses the EFI BIOS. Its file system configuration file is /boot/efi/SuSE/elilo.conf instead of /etc/fstab.

Before you begin, make a copy of the /boot/efi/SuSE/elilo.conf file.

1 Install the SUSE Linux Enterprise Server for IA64 with no SAN devices connected.
2 After the install, boot the system.
3 Open a terminal console as the root user or equivalent.
4 Navigate to the /dev/disk/by-uuid directory to find the UUID for the device where you installed /boot, /root, and swap.
  4a At the terminal console prompt, enter

     cd /dev/disk/by-uuid

  4b List all partitions by entering

     ll

  4c Find the UUID, such as

     e014e482-1c2d-4d09-84ec-61b3aefde77a -> /dev/sda1

5 Edit the boot loader file, using the Boot Loader option in YaST2.
  For example, change

  root=/dev/sda1

  to

  root=/dev/disk/by-uuid/e014e482-1c2d-4d09-84ec-61b3aefde77a

6 Edit the /boot/efi/SuSE/elilo.conf file to modify the system device from the location to the UUID.
  For example, change

  /dev/sda1  /  reiserfs  acl,user_xattr  1 1

  to

  UUID=e014e482-1c2d-4d09-84ec-61b3aefde77a  /  reiserfs  acl,user_xattr  1 1

  IMPORTANT: Do not leave stray characters or spaces in the file.

6.5 Additional Information

For more information about using udev(8) for managing devices, see Dynamic Kernel Device Management with udev (http://www.novell.com/documentation/sles11/book_sle_admin/data/cha_udev.html) in the SUSE Linux Enterprise Server 11 Administration Guide.

For more information about udev(8) commands, see its man page. Enter the following at a terminal console prompt:

man 8 udev


Managing Multipath I/O for Devices

This section describes how to manage failover and path load balancing for multiple paths between the servers and block storage devices.

Section 7.1, Understanding Multipathing, on page 49
Section 7.2, Planning for Multipathing, on page 50
Section 7.3, Multipath Management Tools, on page 57
Section 7.4, Configuring the System for Multipathing, on page 62
Section 7.5, Enabling and Starting Multipath I/O Services, on page 71
Section 7.6, Configuring Path Failover Policies and Priorities, on page 71
Section 7.7, Configuring Multipath I/O for the Root Device, on page 79
Section 7.8, Configuring Multipath I/O for an Existing Software RAID, on page 82
Section 7.9, Scanning for New Devices without Rebooting, on page 84
Section 7.10, Scanning for New Partitioned Devices without Rebooting, on page 86
Section 7.11, Viewing Multipath I/O Status, on page 87
Section 7.12, Managing I/O in Error Situations, on page 88
Section 7.13, Resolving Stalled I/O, on page 89
Section 7.14, Troubleshooting MPIO, on page 89
Section 7.15, What's Next, on page 89

7.1 Understanding Multipathing

Section 7.1.1, What Is Multipathing?, on page 49
Section 7.1.2, Benefits of Multipathing, on page 50

7.1.1 What Is Multipathing?

Multipathing is the ability of a server to communicate with the same physical or logical block storage device across multiple physical paths between the host bus adapters in the server and the storage controllers for the device, typically in Fibre Channel (FC) or iSCSI SAN environments. You can also achieve multiple connections with direct attached storage when multiple channels are available.


7.1.2 Benefits of Multipathing

Linux multipathing provides connection fault tolerance and can provide load balancing across the active connections. When multipathing is configured and running, it automatically isolates and identifies device connection failures, and reroutes I/O to alternate connections.

Typical connection problems involve faulty adapters, cables, or controllers. When you configure multipath I/O for a device, the multipath driver monitors the active connection between devices. When the multipath driver detects I/O errors for an active path, it fails over the traffic to the device's designated secondary path. When the preferred path becomes healthy again, control can be returned to the preferred path.

7.2 Planning for Multipathing

Section 7.2.1, Guidelines for Multipathing, on page 50
Section 7.2.2, Using By-ID Names for Multipathed Devices, on page 52
Section 7.2.3, Using LVM2 on Multipath Devices, on page 52
Section 7.2.4, Using mdadm with Multipath Devices, on page 53
Section 7.2.5, Using --noflush with Multipath Devices, on page 53
Section 7.2.6, SAN Timeout Settings When the Root Device Is Multipathed, on page 53
Section 7.2.7, Partitioning Multipath Devices, on page 54
Section 7.2.8, Supported Architectures for Multipath I/O, on page 54
Section 7.2.9, Supported Storage Arrays for Multipathing, on page 55

7.2.1 Guidelines for Multipathing

Use the guidelines in this section when planning your multipath I/O solution.

Prerequisites on page 50
Vendor-Provided Multipath Solutions on page 51
Disk Management Tasks on page 51
Software RAIDs on page 51
High-Availability Solutions on page 51
Volume Managers on page 51
Virtualization Environments on page 51

Prerequisites

Multipathing is managed at the device level.
The storage array you use for the multipathed device must support multipathing. For more information, see Section 7.2.9, Supported Storage Arrays for Multipathing, on page 55.
You need to configure multipathing only if multiple physical paths exist between host bus adapters in the server and host bus controllers for the block storage device. You configure multipathing for the logical device as seen by the server.


Vendor-Provided Multipath Solutions

For some storage arrays, the vendor provides its own multipathing software to manage multipathing for the array's physical and logical devices. In this case, you should follow the vendor's instructions for configuring multipathing for those devices.

Disk Management Tasks

Perform the following disk management tasks before you attempt to configure multipathing for a physical or logical device that has multiple paths:

Use third-party tools to carve physical disks into smaller logical disks.
Use third-party tools to partition physical or logical disks. If you change the partitioning in the running system, the Device Mapper Multipath (DM-MP) module does not automatically detect and reflect these changes. DM-MPIO must be re-initialized, which usually requires a reboot.
Use third-party SAN array management tools to create and configure hardware RAID devices.
Use third-party SAN array management tools to create logical devices such as LUNs. Logical device types that are supported for a given array depend on the array vendor.

Software RAIDs

The Linux software RAID management software runs on top of multipathing. For each device that has multiple I/O paths and that you plan to use in a software RAID, you must configure the device for multipathing before you attempt to create the software RAID device. Automatic discovery of multipathed devices is not available. The software RAID is not aware of the multipathing management running underneath.

High-Availability Solutions

High-availability solutions for clustering typically run on top of the multipathing server. For example, the Distributed Replicated Block Device (DRBD) high-availability solution for mirroring devices across a LAN runs on top of multipathing. For each device that has multiple I/O paths and that you plan to use in a DRBD solution, you must configure the device for multipathing before you configure DRBD.

Volume Managers

Volume managers such as LVM2 and EVMS run on top of multipathing. You must configure multipathing for a device before you use LVM2 or EVMS to create segment managers and file systems on it.

Virtualization Environments

When using multipathing in a virtualization environment, the multipathing is controlled in the host server environment. Configure multipathing for the device before you assign it to a virtual guest machine.


7.2.2 Using By-ID Names for Multipathed Devices

If you want to use the entire LUN directly (for example, if you are using the SAN features to partition your storage), you can use the /dev/disk/by-id/xxx names for mkfs, fstab, your application, and so on. If the user-friendly names option is enabled in the /etc/multipath.conf file, you can use the /dev/disk/by-id/dm-uuid-.*-mpath-.* device name because this name is aliased to the device ID. For information, see Configuring User-Friendly Names or Alias Names in /etc/multipath.conf on page 66.

7.2.3 Using LVM2 on Multipath Devices

By default, LVM2 does not recognize multipathed devices. To make LVM2 recognize the multipathed devices as possible physical volumes, you must modify /etc/lvm/lvm.conf. It is important to modify it so that it does not scan and use the physical paths, but only accesses the multipath I/O storage through the multipath I/O layer. If you are using user-friendly names, make sure to specify the path so that it scans only the device mapper names for the device (/dev/disk/by-id/dm-uuid-.*-mpath-.*) after multipathing is configured.

To modify /etc/lvm/lvm.conf for multipath use:

1 Open the /etc/lvm/lvm.conf file in a text editor.
  If /etc/lvm/lvm.conf does not exist, you can create one based on your current LVM configuration by entering the following at a terminal console prompt:

  lvm dumpconfig > /etc/lvm/lvm.conf

2 Change the filter and types entries in /etc/lvm/lvm.conf as follows:

  filter = [ "a|/dev/disk/by-id/.*|", "r|.*|" ]
  types = [ "device-mapper", 1 ]

  This allows LVM2 to scan only the by-id paths and reject everything else.
  If you are using user-friendly names, specify the path as follows so that only the device mapper names are scanned after multipathing is configured:

  filter = [ "a|/dev/disk/by-id/dm-uuid-.*-mpath-.*|", "r|.*|" ]

3 If you are also using LVM2 on non-multipathed devices, make the necessary adjustments in the filter and types entries to suit your setup. Otherwise, the other LVM devices are not visible with a pvscan after you modify the lvm.conf file for multipathing.
  You want only those devices that are configured with LVM to be included in the LVM cache, so make sure you are specific about which other non-multipathed devices are included by the filter.
  For example, if your local disk is /dev/sda and all SAN devices are /dev/sdb and above, specify the local and multipathing paths in the filter as follows:

  filter = [ "a|/dev/sda.*|", "a|/dev/disk/by-id/.*|", "r|.*|" ]
  types = [ "device-mapper", 253 ]

4 Save the file.
5 Add dm-multipath to /etc/sysconfig/kernel:INITRD_MODULES.
6 Make a new initrd to ensure that the Device Mapper Multipath services are loaded with the changed settings. Running mkinitrd is needed only if the root (/) device or any parts of it (such as /var, /etc, /log) are on the SAN and multipath is needed to boot.
  Enter the following at a terminal console prompt:
52

SLES 11 SP1: Storage Administration Guide

  mkinitrd -f multipath

7 Reboot the server to apply the changes.

7.2.4 Using mdadm with Multipath Devices

The mdadm tool requires that the devices be accessed by the ID rather than by the device node path. Therefore, the DEVICE entry in /etc/mdadm.conf should be set as follows:

DEVICE /dev/disk/by-id/*

If you are using user-friendly names, specify the path as follows so that only the device mapper names are scanned after multipathing is configured:

DEVICE /dev/disk/by-id/dm-uuid-.*-mpath-.*

7.2.5 Using --noflush with Multipath Devices

The --noflush option should always be used when running on multipath devices.

For example, in scripts where you perform a table reload, you use the --noflush option on resume to ensure that any outstanding I/O is not flushed, because you need the multipath topology information:

load
resume --noflush

7.2.6 SAN Timeout Settings When the Root Device Is Multipathed

A system with root (/) on a multipath device might stall when all paths have failed and are removed from the system because a dev_loss_tmo timeout is received from the storage subsystem (such as Fibre Channel storage arrays).

If the system device is configured with multiple paths and the multipath no_path_retry setting is active, you should modify the storage subsystem's dev_loss_tmo setting accordingly to ensure that no devices are removed during an all-paths-down scenario. We strongly recommend that you set the dev_loss_tmo value to be equal to or higher than the no_path_retry setting from multipath.

The recommended setting for the storage subsystem's dev_loss_tmo is:

<dev_loss_tmo> = <no_path_retry> * <polling_interval>

where the following definitions apply for the multipath values:

no_path_retry is the number of retries for multipath I/O until the path is considered to be lost, and queuing of I/O is stopped.
polling_interval is the time in seconds between path checks.

Each of these multipath values should be set from the /etc/multipath.conf configuration file. For information, see Section 7.4.5, Creating and Configuring the /etc/multipath.conf File, on page 64.
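As a worked example with hypothetical values, if /etc/multipath.conf sets no_path_retry to 5 and polling_interval to 10, the formula suggests setting dev_loss_tmo on the storage subsystem to at least 5 * 10 = 50 seconds:

defaults {
  polling_interval  10
  no_path_retry     5
}
# Storage subsystem setting: dev_loss_tmo >= 50 seconds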


7.2.7 Partitioning Multipath Devices

Behavior changes for how multipathed devices are handled might affect your configuration if you are upgrading.

SUSE Linux Enterprise Server 11 on page 54
SUSE Linux Enterprise Server 10 on page 54
SUSE Linux Enterprise Server 9 on page 54

SUSE Linux Enterprise Server 11

In SUSE Linux Enterprise Server 11, the default multipath setup relies on udev to overwrite the existing symbolic links in the /dev/disk/by-id directory when multipathing is started. Before you start multipathing, the link points to the SCSI device by using its scsi-xxx name. When multipathing is running, the symbolic link points to the device by using its dm-uuid-xxx name. This ensures that the symbolic links in the /dev/disk/by-id path persistently point to the same device regardless of whether multipath is started or not. The configuration files (such as lvm.conf and md.conf) do not need to be modified because they automatically point to the correct device.

SUSE Linux Enterprise Server 10

In SUSE Linux Enterprise Server 10, the kpartx software is used in the /etc/init.d/boot.multipath to add symlinks to the /dev/dm-* line in the multipath.conf configuration file for any newly created partitions without requiring a reboot. This triggers udevd to fill in the /dev/disk/by-* symlinks. The main benefit is that you can call kpartx with the new parameters without rebooting the server.

SUSE Linux Enterprise Server 9

In SUSE Linux Enterprise Server 9, it is not possible to partition multipath I/O devices themselves. If the underlying physical device is already partitioned, the multipath I/O device reflects those partitions and the layer provides /dev/disk/by-id/<name>p1 ... pN devices so you can access the partitions through the multipath I/O layer. As a consequence, the devices need to be partitioned prior to enabling multipath I/O. If you change the partitioning in the running system, DM-MPIO does not automatically detect and reflect these changes. The device must be re-initialized, which usually requires a reboot.

7.2.8 Supported Architectures for Multipath I/O

The multipathing drivers and tools support all seven of the supported processor architectures: IA32, AMD64/EM64T, IPF/IA64, pSeries (32-bit and 64-bit), and zSeries (31-bit and 64-bit).


7.2.9 Supported Storage Arrays for Multipathing

The multipathing drivers and tools support most storage arrays. The storage array that houses the multipathed device must support multipathing in order to use the multipathing drivers and tools. Some storage array vendors provide their own multipathing management tools. Consult the vendor's hardware documentation to determine what settings are required.

Storage Arrays That Are Automatically Detected for Multipathing on page 55
Tested Storage Arrays for Multipathing Support on page 56
Storage Arrays that Require Specific Hardware Handlers on page 56

Storage Arrays That Are Automatically Detected for Multipathing

The multipath-tools package automatically detects the following storage arrays:

3PARdata VV
Compaq HSV110
Compaq MSA1000
DDN SAN MultiDirector
DEC HSG80
EMC CLARiiON CX
EMC Symmetrix
FSC CentricStor
Hewlett-Packard (HP) A6189A
HP HSV110
HP HSV210
HP Open
Hitachi DF400
Hitachi DF500
Hitachi DF600
IBM 3542
IBM ProFibre 4000R
NetApp
SGI TP9100
SGI TP9300
SGI TP9400
SGI TP9500
STK OPENstorage DS280
Sun StorEdge 3510
Sun T4

In general, most other storage arrays should work. When storage arrays are automatically detected, the default settings for multipathing apply. If you want non-default settings, you must manually create and configure the /etc/multipath.conf file. For information, see Section 7.4.5, Creating and Configuring the /etc/multipath.conf File, on page 64.


Testing of the IBM zSeries device with multipathing has shown that the dev_loss_tmo parameter should be set to 90 seconds, and the fast_io_fail_tmo parameter should be set to 5 seconds. If you are using zSeries devices, you must manually create and configure the /etc/multipath.conf file to specify the values. For information, see Configuring Default Settings for zSeries in /etc/multipath.conf on page 70.

Hardware that is not automatically detected requires an appropriate entry for configuration in the DEVICES section of the /etc/multipath.conf file. In this case, you must manually create and configure the configuration file; a sketch of such an entry follows the list below. For information, see Section 7.4.5, Creating and Configuring the /etc/multipath.conf File, on page 64.

Consider the following caveats:

Not all of the storage arrays that are automatically detected have been tested on SUSE Linux Enterprise Server. For information, see Tested Storage Arrays for Multipathing Support on page 56.
Some storage arrays might require specific hardware handlers. A hardware handler is a kernel module that performs hardware-specific actions when switching path groups and dealing with I/O errors. For information, see Storage Arrays that Require Specific Hardware Handlers on page 56.
After you modify the /etc/multipath.conf file, you must run mkinitrd to re-create the INITRD on your system, then reboot in order for the changes to take effect.
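The following is a sketch of such a device entry; the vendor and product strings are placeholders that must match the values your array actually reports, and the attribute values shown are examples only, not vendor recommendations:

devices {
  device {
    vendor                "VENDOR"
    product               "MODEL"
    path_grouping_policy  failover
    no_path_retry         5
  }
}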

Tested Storage Arrays for Multipathing Support

The following storage arrays have been tested with SUSE Linux Enterprise Server:

EMC
Hitachi
Hewlett-Packard/Compaq
IBM
NetApp
SGI

Most other vendors' storage arrays should also work. Consult your vendor's documentation for guidance. For a list of the default storage arrays recognized by the multipath-tools package, see Storage Arrays That Are Automatically Detected for Multipathing on page 55.

Storage Arrays that Require Specific Hardware Handlers

Storage arrays that require special commands on failover from one path to the other, or that require special nonstandard error handling, might require more extensive support. Therefore, the Device Mapper Multipath service has hooks for hardware handlers. For example, one such handler for the EMC CLARiiON CX family of arrays is already provided.

IMPORTANT: Consult the hardware vendor's documentation to determine if its hardware handler must be installed for Device Mapper Multipath.

The multipath -t command shows an internal table of storage arrays that require special handling with specific hardware handlers. The displayed list is not an exhaustive list of supported storage arrays. It lists only those arrays that require special handling and that the multipath-tools developers had access to during the tool development.


IMPORTANT: Arrays with true active/active multipath support do not require special handling, so they are not listed for the multipath -t command.

A listing in the multipath -t table does not necessarily mean that SUSE Linux Enterprise Server was tested on that specific hardware. For a list of tested storage arrays, see Tested Storage Arrays for Multipathing Support on page 56.

7.3 Multipath Management Tools

The multipathing support in SUSE Linux Enterprise Server 10 and later is based on the Device Mapper Multipath module of the Linux 2.6 kernel and the multipath-tools userspace package. You can use the Multiple Devices Administration utility (MDADM, mdadm) to view the status of multipathed devices.

Section 7.3.1, Device Mapper Multipath Module, on page 57
Section 7.3.2, Multipath I/O Management Tools, on page 59
Section 7.3.3, Using MDADM for Multipathed Devices, on page 60
Section 7.3.4, The Linux multipath(8) Command, on page 60

7.3.1 Device Mapper Multipath Module

The Device Mapper Multipath (DM-MP) module provides the multipathing capability for Linux. DM-MPIO is the preferred solution for multipathing on SUSE Linux Enterprise Server 11. It is the only multipathing option shipped with the product that is completely supported by Novell and SUSE.

DM-MPIO features automatic configuration of the multipathing subsystem for a large variety of setups. Configurations of up to 8 paths to each device are supported. Configurations are supported for active/passive (one path active, others passive) or active/active (all paths active with round-robin load balancing).

The DM-MPIO framework is extensible in two ways:

Using specific hardware handlers. For information, see Storage Arrays that Require Specific Hardware Handlers on page 56.
Using load-balancing algorithms that are more sophisticated than the round-robin algorithm.

The user-space component of DM-MPIO takes care of automatic path discovery and grouping, as well as automated path retesting, so that a previously failed path is automatically reinstated when it becomes healthy again. This minimizes the need for administrator attention in a production environment.

DM-MPIO protects against failures in the paths to the device, and not failures in the device itself. If one of the active paths is lost (for example, a network adapter breaks or a fiber-optic cable is removed), I/O is redirected to the remaining paths. If the configuration is active/passive, then the path fails over to one of the passive paths. If you are using the round-robin load-balancing configuration, the traffic is balanced across the remaining healthy paths. If all active paths fail, inactive secondary paths must be waked up, so failover occurs with a delay of approximately 30 seconds.


If a disk array has more than one storage processor, make sure that the SAN switch has a connection to the storage processor that owns the LUNs you want to access. On most disk arrays, all LUNs belong to both storage processors, so both connections are active.

NOTE: On some disk arrays, the storage array manages the traffic through storage processors so that it presents only one storage processor at a time. One processor is active and the other one is passive until there is a failure. If you are connected to the wrong storage processor (the one with the passive path), you might not see the expected LUNs, or you might see the LUNs but get errors when you try to access them.


Table 7-1 Multipath I/O Features of Storage Arrays

Active/passive controllers: One controller is active and serves all LUNs. The second controller acts as a standby. The second controller also presents the LUNs to the multipath component so that the operating system knows about redundant paths. If the primary controller fails, the second controller takes over, and it serves all LUNs.
  In some arrays, the LUNs can be assigned to different controllers. A given LUN is assigned to one controller to be its active controller. One controller does the disk I/O for any given LUN at a time, and the second controller is the standby for that LUN. The second controller also presents the paths, but disk I/O is not possible. Servers that use that LUN are connected to the LUN's assigned controller. If the primary controller for a set of LUNs fails, the second controller takes over, and it serves all LUNs.

Active/active controllers: Both controllers share the load for all LUNs, and can process disk I/O for any given LUN. If one controller fails, the second controller automatically handles all traffic.

Load balancing: The Device Mapper Multipath driver automatically load balances traffic across all active paths.

Controller failover: When the active controller fails over to the passive, or standby, controller, the Device Mapper Multipath driver automatically activates the paths between the host and the standby, making them the primary paths.

Boot/Root device support: Multipathing is supported for the root (/) device in SUSE Linux Enterprise Server 10 and later. The host server must be connected to the currently active controller and storage processor for the boot device. Multipathing is supported for the /boot device in SUSE Linux Enterprise Server 11 and later.
Device Mapper Multipath detects every path for a multipathed device as a separate SCSI device. The SCSI device names take the form /dev/sdN, where N is an autogenerated letter for the device, beginning with a and issued sequentially as the devices are created, such as /dev/sda, /dev/sdb, and so on. If the number of devices exceeds 26, the letters are duplicated so that the next device after /dev/sdz will be named /dev/sdaa, /dev/sdab, and so on.

If multiple paths are not automatically detected, you can configure them manually in the /etc/multipath.conf file. The multipath.conf file does not exist until you create and configure it. For information, see Section 7.4.5, "Creating and Configuring the /etc/multipath.conf File," on page 64.


7.3.2 Multipath I/O Management Tools


The multipath-tools user-space package takes care of automatic path discovery and grouping. It automatically tests the path periodically, so that a previously failed path is automatically reinstated when it becomes healthy again. This minimizes the need for administrator attention in a production environment.
Table 7-2 Tools in the multipath-tools Package

multipath: Scans the system for multipathed devices and assembles them.

multipathd: Waits for maps events, then executes multipath.

devmap-name: Provides a meaningful device name to udev for device maps (devmaps).

kpartx: Maps linear devmaps to partitions on the multipathed device, which makes it possible to create multipath monitoring for partitions on the device.
The file list for a package can vary for different server architectures. For a list of files included in the multipath-tools package, go to the SUSE Linux Enterprise Server Technical Specifications > Package Descriptions Web page (http://www.novell.com/products/server/techspecs.html?tab=1), find your architecture and select Packages Sorted by Name, then search on multipath-tools to find the package list for that architecture.

You can also determine the file list for an RPM file by querying the package itself, using the rpm -ql or rpm -qpl command options.

To query an installed package, enter
rpm -ql <package_name>

To query a package that is not installed, enter
rpm -qpl <URL_or_path_to_package>

To check that the multipath-tools package is installed, do the following:

1 Enter the following at a terminal console prompt:

rpm -q multipath-tools

If it is installed, the response repeats the package name and provides the version information, such as:

multipath-tools-04.7-34.23

If it is not installed, the response reads:

package multipath-tools is not installed


7.3.3 Using MDADM for Multipathed Devices


Udev is the default device handler, and devices are automatically known to the system by the Worldwide ID instead of by the device node name. This resolves problems in previous releases of MDADM and LVM where the configuration files (mdadm.conf and lvm.conf) did not properly recognize multipathed devices.

Just as for LVM2, MDADM requires that the devices be accessed by the ID rather than by the device node path. Therefore, the DEVICE entry in /etc/mdadm.conf should be set as follows:
DEVICE /dev/disk/by-id/*

If you are using user-friendly names, specify the path as follows so that only the device mapper names are scanned after multipathing is configured:
DEVICE /dev/disk/by-id/dm-uuid-.*-mpath-.*
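For example, a minimal /etc/mdadm.conf sketch that combines this DEVICE entry with an array definition might look like the following (the ARRAY line and its UUID are illustrative; use the values for your own array):

DEVICE /dev/disk/by-id/dm-uuid-.*-mpath-.*
ARRAY /dev/md0 UUID=dfa8c769:e40a93f1:40716952:1f4f5f92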

To verify that MDADM is installed:

1 Ensure that the mdadm package is installed by entering the following at a terminal console prompt:

rpm -q mdadm

If it is installed, the response repeats the package name and provides the version information. For example:

mdadm-2.6-0.11

If it is not installed, the response reads:

package mdadm is not installed

For information about modifying the /etc/lvm/lvm.conf file, see Section 7.2.3, "Using LVM2 on Multipath Devices," on page 52.

7.3.4 The Linux multipath(8) Command


Use the Linux multipath(8) command to configure and manage multipathed devices.

General syntax for the multipath(8) command:
multipath [-v verbosity] [-d] [-h|-l|-ll|-f|-F] [-p failover|multibus|group_by_serial|group_by_prio|group_by_node_name]

General Examples
multipath
  Configures all multipath devices.

multipath devicename
  Configures a specific multipath device.
  Replace devicename with the device node name such as /dev/sdb (as shown by udev in the $DEVNAME variable), or in the major:minor format.

multipath -f
  Selectively suppresses a multipath map, and its device-mapped partitions.


multipath -d
  Dry run. Displays potential multipath devices, but does not create any devices and does not update device maps.

multipath -v2 -d
  Displays multipath map information for potential multipath devices in a dry run. The -v2 option shows only local disks. This verbosity level prints the created or updated multipath names only for use to feed other tools like kpartx.
  There is no output if the devices already exist and there are no changes. Use multipath -ll to see the status of configured multipath devices.

multipath -v2 devicename
  Configures a specific potential multipath device and displays multipath map information for it. This verbosity level prints only the created or updated multipath names for use to feed other tools like kpartx.
  There is no output if the device already exists and there are no changes. Use multipath -ll to see the status of configured multipath devices.
  Replace devicename with the device node name such as /dev/sdb (as shown by udev in the $DEVNAME variable), or in the major:minor format.

multipath -v3
  Configures potential multipath devices and displays multipath map information for them. This verbosity level prints all detected paths, multipaths, and device maps. Both wwid and devnode blacklisted devices are displayed.

multipath -v3 devicename
  Configures a specific potential multipath device and displays information for it. The -v3 option shows the full path list. This verbosity level prints all detected paths, multipaths, and device maps. Both wwid and devnode blacklisted devices are displayed.
  Replace devicename with the device node name such as /dev/sdb (as shown by udev in the $DEVNAME variable), or in the major:minor format.

multipath -ll
  Displays the status of all multipath devices.

multipath -ll devicename
  Displays the status of a specified multipath device.
  Replace devicename with the device node name such as /dev/sdb (as shown by udev in the $DEVNAME variable), or in the major:minor format.

multipath -F
  Flushes all unused multipath device maps. This unresolves the multiple paths; it does not delete the devices.

multipath -F devicename
  Flushes unused multipath device maps for a specified multipath device. This unresolves the multiple paths; it does not delete the device.
  Replace devicename with the device node name such as /dev/sdb (as shown by udev in the $DEVNAME variable), or in the major:minor format.


multipath -p [failover|multibus|group_by_serial|group_by_prio|group_by_node_name]
  Sets the group policy by specifying one of the group policy options that are described in Table 7-3:

Table 7-3 Group Policy Options for the multipath -p Command

failover: One path per priority group. You can use only one path at a time.

multibus: All paths in one priority group.

group_by_serial: One priority group per detected SCSI serial number (the controller node worldwide number).

group_by_prio: One priority group per path priority value. Paths with the same priority are in the same priority group. Priorities are determined by callout programs specified as a global, per-controller, or per-multipath option in the /etc/multipath.conf configuration file.

group_by_node_name: One priority group per target node name. Target node names are fetched in the /sys/class/fc_transport/target*/node_name location.

7.4 Configuring the System for Multipathing


 Section 7.4.1, "Preparing SAN Devices for Multipathing," on page 62
 Section 7.4.2, "Partitioning Multipathed Devices," on page 63
 Section 7.4.3, "Configuring the Server for Multipathing," on page 63
 Section 7.4.4, "Adding multipathd to the Boot Sequence," on page 63
 Section 7.4.5, "Creating and Configuring the /etc/multipath.conf File," on page 64

7.4.1 Preparing SAN Devices for Multipathing


Before configuring multipath I/O for your SAN devices, prepare the SAN devices, as necessary, by doing the following:

 Configure and zone the SAN with the vendor's tools.
 Configure permissions for host LUNs on the storage arrays with the vendor's tools.
 Install the Linux HBA driver module. Upon module installation, the driver automatically scans the HBA to discover any SAN devices that have permissions for the host. It presents them to the host for further configuration.

NOTE: Ensure that the HBA driver you are using does not have native multipathing enabled. See the vendor's specific instructions for more details.

 After the driver module is loaded, discover the device nodes assigned to specific array LUNs or partitions.
 If the SAN device will be used as the root device on the server, modify the timeout settings for the device as described in Section 7.2.6, "SAN Timeout Settings When the Root Device Is Multipathed," on page 53.


If the LUNs are not seen by the HBA driver, lsscsi can be used to check whether the SCSI devices are seen correctly by the operating system. When the LUNs are not seen by the HBA driver, check the zoning setup of the SAN. In particular, check whether LUN masking is active and whether the LUNs are correctly assigned to the server.

If the LUNs are seen by the HBA driver, but there are no corresponding block devices, additional kernel parameters are needed to change the SCSI device scanning behavior, such as to indicate that LUNs are not numbered consecutively. For information, see TID 3955167: Troubleshooting SCSI (LUN) Scanning Issues in the Novell Support Knowledgebase (http://support.novell.com/).
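For example, a quick check with lsscsi might look like the following (the output shown is illustrative; the vendor and device names are borrowed from the -v3 example earlier in this section):

lsscsi
[0:0:0:2]    disk    XIOtech  Magnitude 3D   3.00  /dev/sdc
[1:0:0:2]    disk    XIOtech  Magnitude 3D   3.00  /dev/sdf

If a LUN that is zoned to the server does not appear in this list, recheck the zoning and LUN masking on the SAN.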

7.4.2 Partitioning Multipathed Devices


Partitioning devices that have multiple paths is not recommended, but it is supported. You can use the kpartx tool to create partitions on multipathed devices without rebooting. You can also partition the device before you attempt to configure multipathing by using the Partitioner function in YaST2, or by using a third-party partitioning tool.
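For example, after writing a partition table to the multipathed device, a minimal sketch of mapping its partitions with kpartx (the device name mpath0 is illustrative):

kpartx -a /dev/mapper/mpath0
ls /dev/mapper/mpath0*

The kpartx -a option adds the partition device maps; the exact partition suffix (such as p1 or -part1) depends on your kpartx and udev configuration.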

7.4.3 Configuring the Server for Multipathing


The system must be manually configured to automatically load the device drivers for the controllers to which the multipath I/O devices are connected within the initrd. You need to add the necessary driver module to the variable INITRD_MODULES in the file /etc/sysconfig/kernel.

For example, if your system contains a RAID controller accessed by the cciss driver and multipathed devices connected to a QLogic controller accessed by the driver qla2xxx, this entry would look like:
INITRD_MODULES="cciss"

Because the QLogic driver is not automatically loaded on startup, add it here:

INITRD_MODULES="cciss qla2xxx"

After changing /etc/sysconfig/kernel, you must re-create the initrd on your system with the mkinitrd command, then reboot in order for the changes to take effect.

When you are using LILO as a boot manager, reinstall it with the /sbin/lilo command. No further action is required if you are using GRUB.
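A minimal sketch of the sequence (assuming GRUB as the boot manager):

# re-create the initrd after editing INITRD_MODULES in /etc/sysconfig/kernel
mkinitrd
# with LILO only, also reinstall the boot loader:
# /sbin/lilo
shutdown -r now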

7.4.4 Adding multipathd to the Boot Sequence


Use either of the methods in this section to add multipath I/O services (multipathd) to the boot sequence.

 "Using YaST to Add multipathd" on page 63
 "Using the Command Line to Add multipathd" on page 64

Using YaST to Add multipathd


1 In YaST, click System > System Services (Runlevel) > Simple Mode.
2 Select multipathd, then click Enable.
3 Click OK to acknowledge the service startup message.


4 Click Finish, then click Yes.

The changes do not take effect until the server is restarted.

Using the Command Line to Add multipathd


1 Open a terminal console, then log in as the root user or equivalent.
2 At the terminal console prompt, enter

insserv multipathd

7.4.5 Creating and Configuring the /etc/multipath.conf File


The /etc/multipath.conf file does not exist unless you create it. The /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic file contains a sample /etc/multipath.conf file that you can use as a guide for multipath settings. See /usr/share/doc/packages/multipath-tools/multipath.conf.annotated for a template with extensive comments for each of the attributes and their options.

 "Creating the multipath.conf File" on page 64
 "Verifying the Setup in the etc/multipath.conf File" on page 64
 "Configuring User-Friendly Names or Alias Names in /etc/multipath.conf" on page 66
 "Blacklisting Non-Multipathed Devices in /etc/multipath.conf" on page 69
 "Configuring Default Multipath Behavior in /etc/multipath.conf" on page 70
 "Configuring Default Settings for zSeries in /etc/multipath.conf" on page 70
 "Applying the /etc/multipath.conf File Changes" on page 70

Creating the multipath.conf File


If the /etc/multipath.conf file does not exist, copy the example to create the file:

1 In a terminal console, log in as the root user.
2 Enter the following command (all on one line, of course) to copy the template:

cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf

3 Use the /usr/share/doc/packages/multipath-tools/multipath.conf.annotated file as a reference to determine how to configure multipathing for your system.
4 Make sure there is an appropriate device entry for your SAN. Most vendors provide documentation on the proper setup of the device section.

The /etc/multipath.conf file requires a different device section for different SANs. If you are using a storage subsystem that is automatically detected (see "Tested Storage Arrays for Multipathing Support" on page 56), the default entry for that device can be used; no further configuration of the /etc/multipath.conf file is required.

5 Save the file.

Verifying the Setup in the etc/multipath.conf File


After setting up the configuration, you can perform a dry run by entering

multipath -v2 -d

This command scans the devices, then displays what the setup would look like. The output is similar to the following:

26353900f02796769
[size=127 GB][features="0"][hwhandler="1 emc"]
\_ round-robin 0 [first]
  \_ 1:0:1:2 sdav 66:240 [ready ]
  \_ 0:0:1:2 sdr  65:16  [ready ]
\_ round-robin 0
  \_ 1:0:0:2 sdag 66:0   [ready ]
  \_ 0:0:0:2 sdc  8:32   [ready ]

Paths are grouped into priority groups. Only one priority group is in active use at a time. To model an active/active configuration, all paths end up in the same group. To model an active/passive configuration, the paths that should not be active in parallel are placed in several distinct priority groups. This normally happens automatically on device discovery.

The output shows the order, the scheduling policy used to balance I/O within the group, and the paths for each priority group. For each path, its physical address (host:bus:target:lun), device node name, major:minor number, and state are shown.

By using a verbosity level of -v3 in the dry run, you can see all detected paths, multipaths, and device maps. Both wwid and device node blacklisted devices are displayed.

multipath -v3 -d

The following is an example of -v3 output on a 64-bit SLES server with two QLogic HBAs connected to a Xiotech Magnitude 3000 SAN. Some multiple entries have been omitted to shorten the example.

dm-22: device node name blacklisted
< content omitted >
loop7: device node name blacklisted
< content omitted >
md0: device node name blacklisted
< content omitted >
dm-0: device node name blacklisted
sdf: not found in pathvec
sdf: mask = 0x1f
sdf: dev_t = 8:80
sdf: size = 105005056
sdf: subsystem = scsi
sdf: vendor = XIOtech
sdf: product = Magnitude 3D
sdf: rev = 3.00
sdf: h:b:t:l = 1:0:0:2
sdf: tgt_node_name = 0x202100d0b2028da
sdf: serial = 000028DA0014
sdf: getuid = /lib/udev/scsi_id -g -u -s /block/%n (config file default)
sdf: uid = 200d0b2da28001400 (callout)
sdf: prio = const (config file default)
sdf: const prio = 1
< content omitted >
ram15: device node name blacklisted
< content omitted >
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod/rev
200d0b2da28001400 1:0:0:2 sdf 8:80 1 [undef][undef] XIOtech,Magnitude 3D


200d0b2da28005400 1:0:0:1 sde 8:64 1 [undef][undef] XIOtech,Magnitude 3D
200d0b2da28004d00 1:0:0:0 sdd 8:48 1 [undef][undef] XIOtech,Magnitude 3D
200d0b2da28001400 0:0:0:2 sdc 8:32 1 [undef][undef] XIOtech,Magnitude 3D
200d0b2da28005400 0:0:0:1 sdb 8:16 1 [undef][undef] XIOtech,Magnitude 3D
200d0b2da28004d00 0:0:0:0 sda 8:0  1 [undef][undef] XIOtech,Magnitude 3D
params = 0 0 2 1 round-robin 0 1 1 8:80 1000 round-robin 0 1 1 8:32 1000
status = 2 0 0 0 2 1 A 0 1 0 8:80 A 0 E 0 1 0 8:32 A 0
sdf: mask = 0x4
sdf: path checker = directio (config file default)
directio: starting new request
directio: async io getevents returns 1 (errno=Success)
directio: io finished 4096/0
sdf: state = 2
< content omitted >

Configuring User-Friendly Names or Alias Names in /etc/multipath.conf


A multipath device can be identified by its WWID, by a user-friendly name, or by an alias that you assign for it. Table 7-4 describes the types of device names that can be used for a device in the /etc/multipath.conf file.
Table 7-4 Comparison of Multipath Device Name Types

WWID (default): The WWID (Worldwide Identifier) is an identifier for the multipath device that is guaranteed to be globally unique and unchanging. The default name used in multipathing is the ID of the logical unit as found in the /dev/disk/by-id directory. Because device node names in the form of /dev/sdn and /dev/dm-n can change on reboot, referring to multipath devices by their ID is preferred.

User-friendly: The Device Mapper Multipath device names in the /dev/mapper directory also reference the ID of the logical unit. These multipath device names are user-friendly names in the form of /dev/mapper/mpath<n>, such as /dev/mapper/mpath0. The names are unique and persistent because they use the /var/lib/multipath/bindings file to track the association between the UUID and user-friendly names.

Alias: An alias name is a globally unique name that the administrator provides for a multipath device. Alias names override the WWID and the user-friendly /dev/mapper/mpathN names.

The global multipath user_friendly_names option in the /etc/multipath.conf file is used to enable or disable the use of user-friendly names for multipath devices. If it is set to no (the default), multipath uses the WWID as the name of the device. If it is set to yes, multipath uses the /var/lib/multipath/bindings file to assign a persistent and unique name to the device in the form of mpath<n>. The bindings_file option in the /etc/multipath.conf file can be used to specify an alternate location for the bindings file.

The global multipath alias option in the /etc/multipath.conf file is used to explicitly assign a name to the device. If an alias name is set up for a multipath device, the alias is used instead of the WWID or the user-friendly name.


Using the user_friendly_names option can be problematic in the following situations:

Root Device Is Using Multipath: If the system root device is using multipath and you use the user_friendly_names option, the user-friendly settings in the /var/lib/multipath/bindings file are included in the initrd. If you later change the storage setup, such as by adding or removing devices, there is a mismatch between the bindings setting inside the initrd and the bindings settings in /var/lib/multipath/bindings.

WARNING: A bindings mismatch between initrd and /var/lib/multipath/bindings can lead to a wrong assignment of mount points to devices, which can result in file system corruption and data loss.

To avoid this problem, we recommend that you use the default WWID settings for the system root device. You can also use the alias option to override the user_friendly_names option for the system root device in the /etc/multipath.conf file.

For example:

multipaths {
  multipath {
    wwid 36006048000028350131253594d303030
    alias mpatha
  }
  multipath {
    wwid 36006048000028350131253594d303041
    alias mpathb
  }
  multipath {
    wwid 36006048000028350131253594d303145
    alias mpathc
  }
  multipath {
    wwid 36006048000028350131253594d303334
    alias mpathd
  }
}

IMPORTANT: We recommend that you do not use aliases for the system root device, because the ability to seamlessly switch off multipathing via the kernel command line is lost because the device name differs.

Mounting /var from Another Partition: The default location of the user_friendly_names configuration file is /var/lib/multipath/bindings. If the /var data is not located on the system root device but mounted from another partition, the bindings file is not available when setting up multipathing.

Make sure that the /var/lib/multipath/bindings file is available on the system root device and multipath can find it. For example, this can be done as follows:

1. Move the /var/lib/multipath/bindings file to /etc/multipath/bindings.
2. Set the bindings_file option in the defaults section of /etc/multipath.conf to this new location. For example:

defaults {
  user_friendly_names yes
  bindings_file "/etc/multipath/bindings"
}


Multipath Is in the initrd: Even if the system root device is not on multipath, it is possible for multipath to be included in the initrd. For example, this can happen if the system root device is on LVM. If you use the user_friendly_names option and multipath is in the initrd, you should boot with the parameter multipath=off to avoid problems.

This disables multipath only in the initrd during system boots. After the system boots, the boot.multipath and multipathd boot scripts are able to activate multipathing.

For an example of multipath.conf settings, see the /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic file.

To enable user-friendly names or to specify aliases:


1 In a terminal console, log in as the root user.
2 Open the /etc/multipath.conf file in a text editor.
3 (Optional) Modify the location of the /var/lib/multipath/bindings file.

The alternate path must be available on the system root device where multipath can find it.

3a Move the /var/lib/multipath/bindings file to /etc/multipath/bindings.
3b Set the bindings_file option in the defaults section of /etc/multipath.conf to this new location. For example:

defaults {
  user_friendly_names yes
  bindings_file "/etc/multipath/bindings"
}

4 (Optional) Enable user-friendly names:

4a Uncomment the defaults section and its ending bracket.
4b Uncomment the user_friendly_names option, then change its value from No to Yes.

For example:

## Use user friendly names, instead of using WWIDs as names.
defaults {
  user_friendly_names yes
}

5 (Optional) Specify your own names for devices by using the alias option in the multipath section.

For example:


## Use alias names, instead of using WWIDs as names.
multipaths {
  multipath {
    wwid 36006048000028350131253594d303030
    alias mpatha
  }
  multipath {
    wwid 36006048000028350131253594d303041
    alias mpathb
  }
  multipath {
    wwid 36006048000028350131253594d303145
    alias mpathc
  }
  multipath {
    wwid 36006048000028350131253594d303334
    alias mpathd
  }
}

6 Save your changes, then close the file.

Blacklisting Non-Multipathed Devices in /etc/multipath.conf


The /etc/multipath.conf file should contain a blacklist section where all non-multipathed devices are listed. For example, local IDE hard drives and floppy drives are not normally multipathed. If you have single-path devices that multipath is trying to manage and you want multipath to ignore them, put them in the blacklist section to resolve the problem.

NOTE: The keyword devnode_blacklist has been deprecated and replaced with the keyword blacklist.

For example, to blacklist local devices and all arrays from the cciss driver from being managed by multipath, the blacklist section looks like this:

blacklist {
  wwid 26353900f02796769
  devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|sda)[0-9]*"
  devnode "^hd[a-z][0-9]*"
  devnode "^cciss!c[0-9]d[0-9].*"
}

You can also blacklist only the partitions from a driver instead of the entire array. For example, using the following regular expression blacklists only partitions from the cciss driver and not the entire array:


^cciss!c[0-9]d[0-9]*[p[0-9]*]

After you modify the /etc/multipath.conf file, you must run mkinitrd to re-create the initrd on your system, then reboot in order for the changes to take effect.

After you do this, the local devices should no longer be listed in the multipath maps when you issue the multipath -ll command.


Configuring Default Multipath Behavior in /etc/multipath.conf


The /etc/multipath.conf file should contain a defaults section where you can specify default behaviors. If the field is not otherwise specified in a device section, the default setting is applied for that SAN configuration.

The following defaults section specifies a simple failover policy:

defaults {
  multipath_tool "/sbin/multipath -v0"
  udev_dir /dev
  polling_interval 10
  default_selector "round-robin 0"
  default_path_grouping_policy failover
  default_getuid "/lib/udev/scsi_id -g -u -s /block/%n"
  default_prio_callout "/bin/true"
  default_features "0"
  rr_min_io 100
  failback immediate
}

Configuring Default Settings for zSeries in /etc/multipath.conf


Testing of the IBM zSeries device with multipathing has shown that the dev_loss_tmo parameter should be set to 90 seconds, and the fast_io_fail_tmo parameter should be set to 5 seconds. If you are using zSeries devices, modify the /etc/multipath.conf file to specify the values as follows:

defaults {
  dev_loss_tmo 90
  fast_io_fail_tmo 5
}

The dev_loss_tmo parameter sets the number of seconds to wait before marking a multipath link as bad. When the path fails, any current I/O on that failed path fails. The default value varies according to the device driver being used. The valid range of values is 0 to 600 seconds. To use the driver's internal timeouts, set the value to zero (0) or to any value greater than 600.

The fast_io_fail_tmo parameter sets the length of time to wait before failing I/O when a link problem is detected. I/O that reaches the driver fails. If I/O is in a blocked queue, the I/O does not fail until the dev_loss_tmo time elapses and the queue is unblocked.

Applying the /etc/multipath.conf File Changes


Changes to the /etc/multipath.conf file cannot take effect when multipathd is running. After you make changes, save and close the file, then do the following to apply the changes (a consolidated command sketch follows the steps):

1 Stop the multipathd service.
2 Clear old multipath bindings by entering

/sbin/multipath -F

3 Create new multipath bindings by entering

/sbin/multipath -v2 -l

4 Start the multipathd service.
5 Run mkinitrd to re-create the initrd on your system, then reboot in order for the changes to take effect.
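Taken together, the steps above look like the following at a terminal console (run as the root user):

/etc/init.d/multipathd stop
/sbin/multipath -F
/sbin/multipath -v2 -l
/etc/init.d/multipathd start
mkinitrd
shutdown -r now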


7.5 Enabling and Starting Multipath I/O Services


To start multipath services and enable them to start at reboot:

1 Open a terminal console, then log in as the root user or equivalent.
2 At the terminal console prompt, enter

chkconfig multipathd on
chkconfig boot.multipath on

If the boot.multipath service does not start automatically on system boot, do the following to start them manually:

1 Open a terminal console, then log in as the root user or equivalent.
2 Enter

/etc/init.d/boot.multipath start
/etc/init.d/multipathd start

7.6 Configuring Path Failover Policies and Priorities


In a Linux host, when there are multiple paths to a storage controller, each path appears as a separate block device, and results in multiple block devices for a single LUN. The Device Mapper Multipath service detects multiple paths with the same LUN ID, and creates a new multipath device with that ID. For example, a host with two HBAs attached to a storage controller with two ports via a single unzoned Fibre Channel switch sees four block devices: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. The Device Mapper Multipath service creates a single block device, /dev/mpath/mpath1, that reroutes I/O through those four underlying block devices.

This section describes how to specify policies for failover and configure priorities for the paths.

 Section 7.6.1, "Configuring the Path Failover Policies," on page 71
 Section 7.6.2, "Configuring Failover Priorities," on page 72
 Section 7.6.3, "Using a Script to Set Path Priorities," on page 77
 Section 7.6.4, "Configuring ALUA (mpath_prio_alua)," on page 78
 Section 7.6.5, "Reporting Target Path Groups," on page 79

7.6.1 Configuring the Path Failover Policies


Use the multipath command with the -p option to set the path failover policy:

multipath devicename -p policy

Replace policy with one of the following policy options:


Table 7-5 Group Policy Options for the multipath -p Command

failover: One path per priority group.

multibus: All paths in one priority group.

group_by_serial: One priority group per detected serial number.

group_by_prio: One priority group per path priority value. Priorities are determined by callout programs specified as a global, per-controller, or per-multipath option in the /etc/multipath.conf configuration file.

group_by_node_name: One priority group per target node name. Target node names are fetched in the /sys/class/fc_transport/target*/node_name location.
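For example, to group all paths for a device into a single priority group for load balancing, following the syntax shown above (the device node name is illustrative):

multipath /dev/sdb -p multibus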

7.6.2 Configuring Failover Priorities


You must manually enter the failover priorities for the device in the /etc/multipath.conf file. Examples for all settings and options can be found in the /usr/share/doc/packages/multipath-tools/multipath.conf.annotated file.

 "Understanding Priority Groups and Attributes" on page 72
 "Configuring for Round-Robin Load Balancing" on page 77
 "Configuring for Single Path Failover" on page 77
 "Grouping I/O Paths for Round-Robin Load Balancing" on page 77

Understanding Priority Groups and Attributes


A priority group is a collection of paths that go to the same physical LUN. By default, I/O is distributed in a round-robin fashion across all paths in the group. The multipath command automatically creates priority groups for each LUN in the SAN based on the path_grouping_policy setting for that SAN. The multipath command multiplies the number of paths in a group by the group's priority to determine which group is the primary. The group with the highest calculated value is the primary. When all paths in the primary group are failed, the priority group with the next highest value becomes active.

A path priority is an integer value assigned to a path. The higher the value, the higher the priority is. An external program is used to assign priorities for each path. For a given device, the paths with the same priorities belong to the same priority group.
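For example, if one group contains four paths with priority 50 (4 x 50 = 200) and another group contains two paths with priority 60 (2 x 60 = 120), the four-path group has the higher calculated value and becomes the primary group.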
Table 7-6 Multipath Attributes

user_friendly_names
  Description: Specifies whether to use IDs or to use the /var/lib/multipath/bindings file to assign a persistent and unique alias to the multipath devices in the form of /dev/mapper/mpathN.
  Values:
  yes: Autogenerate user-friendly names as aliases for the multipath devices instead of the actual ID.
  no: Default. Use the WWIDs shown in the /dev/disk/by-id/ location.

blacklist
  Description: Specifies the list of device names to ignore as non-multipathed devices, such as cciss, fd, hd, md, dm, sr, scd, st, ram, raw, loop.
  Values: For an example, see "Blacklisting Non-Multipathed Devices in /etc/multipath.conf" on page 69.

blacklist_exceptions
  Description: Specifies the list of device names to treat as multipath devices even if they are included in the blacklist.
  Values: For an example, see the /usr/share/doc/packages/multipath-tools/multipath.conf.annotated file.

failback
  Description: Specifies whether to monitor the failed path recovery, and indicates the timing for group failback after failed paths return to service. When the failed path recovers, the path is added back into the multipath-enabled path list based on this setting. Multipath evaluates the priority groups, and changes the active priority group when the priority of the primary path exceeds the secondary group.
  Values:
  immediate: When a path recovers, enable the path immediately.
  n (> 0): When the path recovers, wait n seconds before enabling the path. Specify an integer value greater than 0.
  manual: (Default) The failed path is not monitored for recovery. The administrator runs the multipath command to update enabled paths and priority groups.
  We recommend a failback setting of manual for multipath in cluster environments in order to prevent multipath failover ping-pong. For example: failback "manual"
  IMPORTANT: Ensure that you verify the failback setting with your storage system vendor. Different storage systems can require different settings.

getuid
  Description: The default program and arguments to call to obtain a unique path identifier. Should be specified with an absolute path.
  Values: /lib/udev/scsi_id -g -u -s is the default location and arguments. Example: getuid "/lib/udev/scsi_id -g -u -d /dev/%n"

no_path_retry
  Description: Specifies the behaviors to use on path failure.
  Values:
  n (> 0): Specifies the number of retries until multipath stops the queuing and fails the path. Specify an integer value greater than 0 if you want queuing.
  fail: Specifies immediate failure (no queuing).
  queue: Never stop queuing (queue forever until the path comes alive).
  We recommend a retry setting of fail or 0 in the /etc/multipath.conf file when working in a cluster. This causes the resources to fail over when the connection is lost to storage. Otherwise, the messages queue and the resource failover cannot occur.
  IMPORTANT: Ensure that you verify the retry settings with your storage system vendor. Different storage systems can require different settings.

path_grouping_policy
  Description: Specifies the path grouping policy for a multipath device hosted by a given controller.
  Values:
  failover: (Default) One path is assigned per priority group so that only one path at a time is used.
  multibus: All valid paths are in one priority group. Traffic is load-balanced across all active paths in the group.
  group_by_prio: One priority group exists for each path priority value. Paths with the same priority are in the same priority group. Priorities are assigned by an external program.
  group_by_serial: Paths are grouped by the SCSI target serial number (controller node WWN).
  group_by_node_name: One priority group is assigned per target node name. Target node names are fetched in /sys/class/fc_transport/target*/node_name.

path_checker
  Description: Determines the state of the path.
  Values:
  directio: (Default in multipath-tools version 0.4.8 and later) Reads the first sector that has direct I/O. This is useful for DASD devices. Logs failure messages in /var/log/messages.
  readsector0: (Default in multipath-tools version 0.4.7 and earlier) Reads the first sector of the device. Logs failure messages in /var/log/messages.
  tur: Issues a SCSI test unit ready command to the device. This is the preferred setting if the LUN supports it. On failure, the command does not fill up /var/log/messages with messages.
  Some SAN vendors provide custom path_checker options:
  emc_clariion: Queries the EMC Clariion EVPD page 0xC0 to determine the path state.
  hp_sw: Checks the path state (Up, Down, or Ghost) for HP storage arrays with Active/Standby firmware.
  rdac: Checks the path state for the LSI/Engenio RDAC storage controller.

path_selector
  Description: Specifies the path-selector algorithm to use for load balancing.
  Values:
  round-robin 0: (Default) The load-balancing algorithm used to balance traffic across all active paths in a priority group.
  Beginning in SUSE Linux Enterprise Server 11, the following additional I/O balancing options are available:
  least-pending 0: Provides a least-pending-I/O dynamic load balancing policy for bio-based device mapper multipath. This load balancing policy considers the number of unserviced requests pending on a path and selects the path with the least count of pending service requests. This policy is especially useful when the SAN environment has heterogeneous components. For example, when there is one 8 GB HBA and one 2 GB HBA connected to the same server, the 8 GB HBA could be utilized better with this algorithm.
  queue-length 0: A dynamic load balancer that balances the number of in-flight I/O on paths similar to the least-pending option.
  service-time 0: A service-time oriented load balancer that balances I/O on paths according to the latency.

pg_timeout
  Description: Specifies path group timeout handling.
  Values: NONE (internal default)

prio_callout
  Description: Specifies the program and arguments to use to determine the layout of the multipath map. When queried by the multipath command, the specified mpath_prio_* callout program returns the priority for a given path in relation to the entire multipath layout. When it is used with the path_grouping_policy of group_by_prio, all paths with the same priority are grouped into one multipath group. The group with the highest aggregate priority becomes the active group. When all paths in a group fail, the group with the next highest aggregate priority becomes active. Additionally, a failover command (as determined by the hardware handler) might be sent to the target.
  The mpath_prio_* program can also be a custom script created by a vendor or administrator for a specified setup.
  A %n in the command line expands to the device name in the /dev directory. A %b expands to the device number in major:minor format in the /dev directory. A %d expands to the device ID in the /dev/disk/by-id directory.
  If devices are hot-pluggable, use the %d flag instead of %n. This addresses the short time that elapses between the time when devices are available and when udev creates the device nodes.
  Multipath prio_callouts are located in shared libraries in /lib/libmultipath/lib*. By using shared libraries, the callouts are loaded into memory on daemon startup.
  Values: If no prio_callout attribute is used, all paths are equal. This is the default.
  /bin/true: Use this value when the group_by_priority is not being used.
  The prioritizer programs generate path priorities when queried by the multipath command. The program names must begin with mpath_prio_ and are named by the device type or balancing method used. Current prioritizer programs include the following:
  mpath_prio_alua %n: Generates path priorities based on the SCSI-3 ALUA settings.
  mpath_prio_balance_units: Generates the same priority for all paths.
  mpath_prio_emc %n: Generates the path priority for EMC arrays.
  mpath_prio_hds_modular %b: Generates the path priority for Hitachi HDS Modular storage arrays.
  mpath_prio_hp_sw %n: Generates the path priority for Compaq/HP controller in active/standby mode.
  mpath_prio_netapp %n: Generates the path priority for NetApp arrays.
  mpath_prio_random %n: Generates a random priority for each path.
  mpath_prio_rdac %n: Generates the path priority for LSI/Engenio RDAC controller.
  mpath_prio_tpc %n: You can optionally use a script created by a vendor or administrator that gets the priorities from a file where you specify priorities to use for each path.
  mpath_prio_spec.sh %n: Provides the path of a user-created script that generates the priorities for multipathing based on information contained in a second data file. (This path and filename are provided as an example. Specify the location of your script instead.) The script can be created by a vendor or administrator. The script's target file identifies each path for all multipathed devices and specifies a priority for each path. For an example, see Section 7.6.3, "Using a Script to Set Path Priorities," on page 77.

rr_min_io
  Description: Specifies the number of I/O transactions to route to a path before switching to the next path in the same path group, as determined by the specified algorithm in the path_selector setting.
  Values:
  n (> 0): Specify an integer value greater than 0.
  1000: Default.

rr_weight
  Description: Specifies the weighting method to use for paths.
  Values:
  uniform: Default. All paths have the same round-robin weights.
  priorities: Each path's weight is determined by the path's priority times the rr_min_io setting.

Configuring for Round-Robin Load Balancing


All paths are active. I/O is configured for some number of seconds or some number of I/O transactions before moving to the next open path in the sequence.
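A minimal /etc/multipath.conf sketch for this behavior, using the attributes described in Table 7-6 (values are illustrative; depending on your multipath-tools version, the defaults section may instead use the default_-prefixed attribute names shown in the earlier failover example):

defaults {
  path_grouping_policy multibus
  path_selector "round-robin 0"
  rr_min_io 100
}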

Configuring for Single Path Failover


A single path with the highest priority (lowest value setting) is active for traffic. Other paths are available for failover, but are not used unless failover occurs.

Grouping I/O Paths for Round-Robin Load Balancing


Multiple paths with the same priority fall into the active group. When all paths in that group fail, the device fails over to the next highest priority group. All paths in the group share the traffic load in a round-robin load balancing fashion.

7.6.3 Using a Script to Set Path Priorities


You can create a script that interacts with Device Mapper Multipath (DM-MPIO) to provide priorities for paths to the LUN when set as a resource for the prio_callout setting.

First, set up a text file that lists information about each device and the priority values you want to assign to each path. For example, name the file /usr/local/etc/primary-paths. Enter one line for each path in the following format:
host_wwpn target_wwpn scsi_id priority_value

Return a priority value for each path on the device. Make sure that the variable FILE_PRIMARY_PATHS resolves to a real file with appropriate data (host wwpn, target wwpn, scsi_id and priority value) for each device.

The contents of the primary-paths file for a single LUN with eight paths each might look like this:


0x10000000c95ebeb4 0x200200a0b8122c6e 2:0:0:0 sdb 3600a0b8000122c6d00000000453174fc 50
0x10000000c95ebeb4 0x200200a0b8122c6e 2:0:0:1 sdc 3600a0b80000fd6320000000045317563 2
0x10000000c95ebeb4 0x200200a0b8122c6e 2:0:0:2 sdd 3600a0b8000122c6d0000000345317524 50


0x10000000c95ebeb4 0x200200a0b8122c6e 2:0:0:3 sde 3600a0b80000fd6320000000245317593 2
0x10000000c95ebeb4 0x200300a0b8122c6e 2:0:1:0 sdi 3600a0b8000122c6d00000000453174fc 5
0x10000000c95ebeb4 0x200300a0b8122c6e 2:0:1:1 sdj 3600a0b80000fd6320000000045317563 51
0x10000000c95ebeb4 0x200300a0b8122c6e 2:0:1:2 sdk 3600a0b8000122c6d0000000345317524 5
0x10000000c95ebeb4 0x200300a0b8122c6e 2:0:1:3 sdl 3600a0b80000fd6320000000245317593 51

To continue the example mentioned in Table 7-6 on page 72, create a script named /usr/local/sbin/path_prio.sh. You can use any path and filename. The script does the following (a minimal sketch of such a script is shown below):

 On query from multipath, greps the device and its path from the /usr/local/etc/primary-paths file.
 Returns to multipath the priority value in the last column for that entry in the file.
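A minimal sketch of such a script, assuming that multipath passes the device name as the first argument and that the data file uses the format shown above:

#!/bin/sh
# /usr/local/sbin/path_prio.sh (example name from this section)
# Look up the queried device in the primary-paths file and
# print the priority value from the last column of its entry.
FILE_PRIMARY_PATHS=/usr/local/etc/primary-paths
grep " $1 " "$FILE_PRIMARY_PATHS" | awk '{ print $NF }'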

7.6.4 Configuring ALUA (mpath_prio_alua)


The mpath_prio_alua(8) command is used as a priority callout for the Linux multipath(8) command. It returns a number that is used by DM-MPIO to group SCSI devices with the same priority together. This path priority tool is based on ALUA (Asymmetric Logical Unit Access).

 "Syntax" on page 78
 "Prerequisite" on page 78
 "Options" on page 78
 "Return Values" on page 79

Syntax
mpath_prio_alua [-d directory] [-h] [-v] [-V] device [device...]

Prerequisite
SCSI devices.

Options
-d directory
  Specifies the Linux directory path where the listed device node names can be found. The default directory is /dev. When you use this option, specify the device node name only (such as sda) for the device or devices you want to manage.

-h
  Displays help for this command, then exits.

-v
  Turns on verbose output to display status in human-readable format. Output includes information about which port group the specified device is in and its current state.


-V
  Displays the version number of this tool, then exits.

device [device...]
  Specifies the SCSI device (or multiple devices) that you want to manage. The device must be a SCSI device that supports the Report Target Port Groups (sg_rtpg(8)) command. Use one of the following formats for the device node name:

   The full Linux directory path, such as /dev/sda. Do not use with the -d option.
   The device node name only, such as sda. Specify the directory path by using the -d option.
   The major and minor number of the device separated by a colon (:) with no spaces, such as 8:0. This creates a temporary device node in the /dev directory with a name in the format of tmpdev-<major>:<minor>-<pid>. For example, /dev/tmpdev-8:0-<pid>.
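For example, to display verbose status, including the port group and state, for a device specified by its node name (the device name is illustrative):

mpath_prio_alua -d /dev -v sda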

Return Values
On success, returns a value of 0 and the priority value for the group. Table 7-7 shows the priority values returned by the mpath_prio_alua command.
Table 7-7 ALUA Priorities for Device Mapper Multipath

50: The device is in the active, optimized group.

10: The device is in an active but non-optimized group.

1: The device is in the standby group.

0: All other groups.

Values are widely spaced because of the way the multipath command handles them. It multiplies the number of paths in a group with the priority value for the group, then selects the group with the highest result. For example, if a non-optimized path group has six paths (6 x 10 = 60) and the optimized path group has a single path (1 x 50 = 50), the non-optimized group has the highest score, so multipath chooses the non-optimized group. Traffic to the device uses all six paths in the group in a round-robin fashion.

On failure, returns a value of 1 to 5 indicating the cause for the command's failure. For information, see the man page for mpath_prio_alua.

7.6.5 Reporting Target Path Groups


Use the SCSI Report Target Port Groups (sg_rtpg(8)) command. For information, see the man page for sg_rtpg(8).

7.7 Configuring Multipath I/O for the Root Device


IMPORTANT: In the SUSE Linux Enterprise Server 10 SP1 initial release and earlier, the root partition (/) on multipath is supported only if the /boot partition is on a separate, non-multipathed partition. Otherwise, no boot loader is written.


DM-MPIO is now available and supported for /boot and /root in SUSE Linux Enterprise Server 11. In addition, the YaST partitioner in the YaST2 installer supports enabling multipath during the install.

 Section 7.7.1, "Enabling Multipath I/O at Install Time," on page 80
 Section 7.7.2, "Enabling Multipath I/O for an Existing Root Device," on page 82
 Section 7.7.3, "Disabling Multipath I/O on the Root Device," on page 82

7.7.1 Enabling Multipath I/O at Install Time


The multipath software must be running at install time if you want to install the operating system on a multipath device. The multipathd daemon is not automatically active during the system installation. You can start it by using the Configure Multipath option in the YaST partitioner.

 "Enabling Multipath I/O at Install Time on an Active/Active Multipath Storage LUN" on page 80
 "Enabling Multipath I/O at Install Time on an Active/Passive Multipath Storage LUN" on page 80

Enabling Multipath I/O at Install Time on an Active/Active Multipath Storage LUN


1 During the install on the YaST2 Installation Settings page, click on Partitioning to open the YaST partitioner.
2 Select Custom Partitioning (for experts).
3 Select the Hard Disks main icon, click the Configure button, then select Configure Multipath.
4 Start multipath.

YaST2 starts to rescan the disks and shows available multipath devices (such as /dev/mapper/3600a0b80000f4593000012ae4ab0ae65). This is the device that should be used for all further processing.

5 Click Next to continue with the installation.

Enabling Multipath I/O at Install Time on an Active/Passive Multipath Storage LUN


The multipathd daemon is not automatically active during the system installation. You can start it by using the Configure Multipath option in the YaST partitioner.

To enable multipath I/O at install time for an active/passive multipath storage LUN:

1 During the install on the YaST2 Installation Settings page, click on Partitioning to open the YaST partitioner.
2 Select Custom Partitioning (for experts).
3 Select the Hard Disks main icon, click the Configure button, then select Configure Multipath.
4 Start multipath.

YaST2 starts to rescan the disks and shows available multipath devices (such as /dev/mapper/3600a0b80000f4593000012ae4ab0ae65). This is the device that should be used for all further processing. Write down the device path and UUID; you need it later.

5 Click Next to continue with the installation.


6 After all settings are done and the installation is finished, YaST2 starts to write the boot loader information, and displays a countdown to restart the system. Stop the counter by clicking the Stop button and press Ctrl+Alt+F5 to access a console.
7 Use the console to determine if a passive path was entered in the /boot/grub/device.map file for the hd0 entry.

This is necessary because the installation does not distinguish between active and passive paths.

7a Mount the root device to /mnt by entering

mount /dev/mapper/UUID_part2 /mnt

For example, enter

mount /dev/mapper/3600a0b80000f4593000012ae4ab0ae65_part2 /mnt

7b Mount the boot device to /mnt/boot by entering

mount /dev/mapper/UUID_part1 /mnt/boot

For example, enter

mount /dev/mapper/3600a0b80000f4593000012ae4ab0ae65_part1 /mnt/boot

7c Open the /mnt/boot/grub/device.map file by entering

less /mnt/boot/grub/device.map

7d In the /mnt/boot/grub/device.map file, determine if the hd0 entry points to a passive path, then do one of the following:

 Active path: No action is needed; skip Step 8 and continue with Step 9.
 Passive path: The configuration must be changed and the boot loader must be reinstalled. Continue with Step 8.


8 If the hd0 entry points to a passive path, change the configuration and reinstall the boot loader:

8a At the console, enter the following commands at the console prompt:

mount -o bind /dev /mnt/dev
mount -o bind /sys /mnt/sys
mount -o bind /proc /mnt/proc
chroot /mnt

8b At the console, run multipath -ll, then check the output to find the active path.

Passive paths are flagged as ghost.

8c In the /mnt/boot/grub/device.map file, change the hd0 entry to an active path, save the changes, and close the file.
8d In case the selection was to boot from MBR, /etc/grub.conf should look like the following:

setup --stage2=/boot/grub/stage2 (hd0) (hd0,0)
quit

8e Reinstall the boot loader by entering

grub < /etc/grub.conf


8f Enter the following commands:

exit
umount /mnt/*
umount /mnt

9 Return to the YaST graphical environment by pressing Ctrl+Alt+F7.
10 Click OK to continue with the installation reboot.

7.7.2 Enabling Multipath I/O for an Existing Root Device


1 Install Linux with only a single path active, preferably one where the by-id symlinks are listed in the partitioner.
2 Mount the devices by using the /dev/disk/by-id path used during the install.
3 After installation, add dm-multipath to /etc/sysconfig/kernel:INITRD_MODULES.
4 For System Z, before running mkinitrd, edit the /etc/zipl.conf file to change the by-path information in zipl.conf with the same by-id information that was used in the /etc/fstab.
5 Re-run /sbin/mkinitrd to update the initrd image.
6 For System Z, after running mkinitrd, run zipl.
7 Reboot the server.

7.7.3 Disabling Multipath I/O on the Root Device


1 Add multipath=off to the kernel command line.

This affects only the root device. All other devices are not affected.
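For example, with GRUB, append the parameter to the kernel line in /boot/grub/menu.lst (the kernel version and root device shown here are illustrative):

kernel /boot/vmlinuz-2.6.32.12-0.7-default root=/dev/sda2 multipath=off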

7.8 Configuring Multipath I/O for an Existing Software RAID


Ideally, you should configure multipathing for devices before you use them as components of a software RAID device. If you add multipathing after creating any software RAID devices, the DM-MPIO service might be starting after the RAID service on reboot, which makes multipathing appear not to be available for RAIDs. You can use the procedure in this section to get multipathing running for a previously existing software RAID.

For example, you might need to configure multipathing for devices in a software RAID under the following circumstances:

 If you create a new software RAID as part of the Partitioning settings during a new install or upgrade.
 If you did not configure the devices for multipathing before using them in the software RAID as a member device or spare.
 If you grow your system by adding new HBA adapters to the server or expanding the storage subsystem in your SAN.


NOTE: The following instructions assume the software RAID device is /dev/mapper/mpath0, which is its device name as recognized by the kernel. Make sure to modify the instructions for the device name of your software RAID.


1 Open a terminal console, then log in as the root user or equivalent.

Except where otherwise directed, use this console to enter the commands in the following steps.

2 If any software RAID devices are currently mounted or running, enter the following commands for each device to dismount the device and stop it.

umount /dev/mapper/mpath0
mdadm --misc --stop /dev/mapper/mpath0

3 Stop the boot.md service by entering

/etc/init.d/boot.md stop

4 Start the boot.multipath and multipathd services by entering the following commands:

/etc/init.d/boot.multipath start
/etc/init.d/multipathd start

5 After the multipathing services are started, verify that the software RAID's component devices are listed in the /dev/disk/by-id directory. Do one of the following:

 Devices Are Listed: The device names should now have symbolic links to their Device Mapper Multipath device names, such as /dev/dm-1.
 Devices Are Not Listed: Force the multipath service to recognize them by flushing and rediscovering the devices. To do this, enter the following commands:

multipath -F
multipath -v0

The devices should now be listed in /dev/disk/by-id, and have symbolic links to their Device Mapper Multipath device names. For example:

lrwxrwxrwx 1 root root 10 Jun 15 09:36 scsi-mpath1 -> ../../dm-1

6 Restart the boot.md service and the RAID device by entering

/etc/init.d/boot.md start

7 Check the status of the software RAID by entering

mdadm --detail /dev/mapper/mpath0

The RAID's component devices should match their Device Mapper Multipath device names that are listed as the symbolic links of devices in the /dev/disk/by-id directory.

8 Make a new initrd to ensure that the Device Mapper Multipath services are loaded before the RAID services on reboot. Running mkinitrd is needed only if the root (/) device or any parts of it (such as /var, /etc, /log) are on the SAN and multipath is needed to boot.

Enter

mkinitrd -f multipath

9 Reboot the server to apply these post-install configuration settings.


10 Verify that the software RAID array comes up properly on top of the multipathed devices by checking the RAID status. Enter

mdadm --detail /dev/mapper/mpath0

For example:

Number  Major  Minor  RaidDevice  State
0       253    0      0           active sync  /dev/dm-0
1       253    1      1           active sync  /dev/dm-1
2       253    2      2           active sync  /dev/dm-2

7.9 Scanning for New Devices without Rebooting


If your system has already been configured for multipathing and you later need to add more storage to the SAN, you can use the rescan-scsi-bus.sh script to scan for the new devices. By default, this script scans all HBAs with typical LUN ranges.

Syntax
rescan-scsi-bus.sh [options] [host [host ...]]

You can specify hosts on the command line (deprecated), or use the --hosts=LIST option (recommended).

Options
For most storage subsystems, the script can be run successfully without options. However, some special cases might need to use one or more of the following parameters for the rescan-scsi-bus.sh script:

-l : Activates scanning for LUNs 0-7. [Default: 0]

-L NUM : Activates scanning for LUNs 0 to NUM. [Default: 0]

-w : Scans for target device IDs 0 to 15. [Default: 0 to 7]

-c : Enables scanning of channels 0 or 1. [Default: 0]

-r, --remove : Enables removing of devices. [Default: Disabled]

-i, --issueLip : Issues a Fibre Channel LIP reset. [Default: Disabled]

--forcerescan : Rescans existing devices.

--forceremove : Removes and re-adds every device. WARNING: Use with caution, this option is dangerous.

--nooptscan : Don't stop looking for LUNs if 0 is not found.

--color : Use colored prefixes OLD/NEW/DEL.

--hosts=LIST : Scans only hosts in LIST, where LIST is a comma-separated list of single values and ranges (--hosts=A[-B][,C[-D]]). No spaces are allowed.

--channels=LIST : Scans only channels in LIST, where LIST is a comma-separated list of single values and ranges (--channels=A[-B][,C[-D]]). No spaces are allowed.

--ids=LIST : Scans only target IDs in LIST, where LIST is a comma-separated list of single values and ranges (--ids=A[-B][,C[-D]]). No spaces are allowed.

--luns=LIST : Scans only LUNs in LIST, where LIST is a comma-separated list of single values and ranges (--luns=A[-B][,C[-D]]). No spaces are allowed.
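For example, to scan hosts 0 through 2 on channels 0 and 1 for LUNs 0 through 7 (the ranges are illustrative), combining the options listed above:

rescan-scsi-bus.sh --hosts=0-2 --channels=0,1 --luns=0-7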

Procedure
Use the following procedure to scan the devices and make them available to multipathing without rebooting the system.

1 On the storage subsystem, use the vendor's tools to allocate the device and update its access control settings to allow the Linux system access to the new storage. Refer to the vendor's documentation for details.
2 Scan all targets for a host to make its new device known to the middle layer of the Linux kernel's SCSI subsystem. At a terminal console prompt, enter

rescan-scsi-bus.sh [options]

3 Check for scanning progress in the system log (the /var/log/messages file). At a terminal console prompt, enter

tail -30 /var/log/messages

This command displays the last 30 lines of the log. For example:

# tail -30 /var/log/messages
. . .
Feb 14 01:03 kernel: SCSI device sde: 81920000
Feb 14 01:03 kernel: SCSI device sdf: 81920000
Feb 14 01:03 multipathd: sde: path checker registered
Feb 14 01:03 multipathd: sdf: path checker registered
Feb 14 01:03 multipathd: mpath4: event checker started
Feb 14 01:03 multipathd: mpath5: event checker started
Feb 14 01:03 multipathd: mpath4: remaining active paths: 1
Feb 14 01:03 multipathd: mpath5: remaining active paths: 1

4 Repeat Step 2 through Step 3 to add paths through other HBA adapters on the Linux system that are connected to the new device.
5 Run the multipath command to recognize the devices for DM-MPIO configuration. At a terminal console prompt, enter

multipath

You can now configure the new device for multipathing.


7.10 Scanning for New Partitioned Devices without Rebooting


Use the example in this section to detect a newly added multipathed LUN without rebooting.

1 Open a terminal console, then log in as the root user.
2 Scan all targets for a host to make its new device known to the middle layer of the Linux kernel's SCSI subsystem. At a terminal console prompt, enter

rescan-scsi-bus.sh [options]

For syntax and options information for the rescan-scsi-bus.sh script, see Section 7.9, "Scanning for New Devices without Rebooting," on page 84.


3 Verify that the device is seen (the link has a new time stamp) by entering

ls -lrt /dev/dm-*

4 Verify that the new WWN of the device appears in the log by entering

tail -33 /var/log/messages

5 Use a text editor to add a new alias definition for the device in the /etc/multipath.conf file, such as oradata3.
6 Create a partition table for the device by entering

fdisk /dev/dm-8

7 Trigger udev by entering

echo 'add' > /sys/block/dm-8/uevent

This generates the device-mapper devices for the partitions on dm-8.

8 Create a file system and label for the new partition by entering

mke2fs -j /dev/dm-9
tune2fs -L oradata3 /dev/dm-9

9 Restart DM-MPIO to let it read the aliases by entering

/etc/init.d/multipathd restart

10 Verify that the device is recognized by multipathd by entering

multipath -ll

11 Use a text editor to add a mount entry in the /etc/fstab file.

At this point, the alias you created in Step 5 is not yet in the /dev/disk/by-label directory. Add the mount entry for the /dev/dm-9 path, then change the entry before the next time you reboot to

LABEL=oradata3

12 Create a directory to use as the mount point, then mount the device by entering

mkdir /oradata3
mount /oradata3


7.11 Viewing Multipath I/O Status


QueryingthemultipathI/Ostatusoutputsthecurrentstatusofthemultipathmaps. Themultipath -loptiondisplaysthecurrentpathstatusasofthelasttimethatthepathchecker wasrun.Itdoesnotrunthepathchecker. Themultipath -lloptionrunsthepathchecker,updatesthepathinformation,thendisplaysthe currentstatusinformation.Thisoptionalwaysthedisplaysthelatestinformationaboutthepath status.
1 At a terminal console prompt, enter

multipath -ll

This displays information for each multipathed device. For example:

3600601607cf30e00184589a37a31d911
[size=127 GB][features="0"][hwhandler="1 emc"]
\_ round-robin 0 [active][first]
 \_ 1:0:1:2 sdav 66:240  [ready ][active]
 \_ 0:0:1:2 sdr  65:16   [ready ][active]
\_ round-robin 0 [enabled]
 \_ 1:0:0:2 sdag 66:0    [ready ][active]
 \_ 0:0:0:2 sdc  8:32    [ready ][active]

For each device, it shows the device's ID, size, features, and hardware handlers.

Paths to the device are automatically grouped into priority groups on device discovery. Only one priority group is active at a time. For an active/active configuration, all paths are in the same group. For an active/passive configuration, the passive paths are placed in separate priority groups.

The following information is displayed for each group:

Scheduling policy used to balance I/O within the group, such as round-robin
Whether the group is active, disabled, or enabled
Whether the group is the first (highest priority) group
Paths contained within the group

The following information is displayed for each path:

The physical address as host:bus:target:lun, such as 1:0:1:2
Device node name, such as sda
Major:minor numbers
Status of the device


7.12 Managing I/O in Error Situations
You might need to configure multipathing to queue I/O if all paths fail concurrently by enabling queue_if_no_path. Otherwise, I/O fails immediately if all paths are gone. In certain scenarios, where the driver, the HBA, or the fabric experiences spurious errors, DM-MPIO should be configured to queue all I/O where those errors lead to a loss of all paths, and never propagate errors upward.

When you use multipathed devices in a cluster, you might choose to disable queue_if_no_path. This automatically fails the path instead of queuing the I/O, and escalates the I/O error to cause a failover of the cluster resources.

Because enabling queue_if_no_path leads to I/O being queued indefinitely unless a path is reinstated, make sure that multipathd is running and works for your scenario. Otherwise, I/O might be stalled indefinitely on the affected multipathed device until reboot or until you manually return to failover instead of queuing.

To test the scenario:
1 In a terminal console, log in as the root user.

2 Activate queuing instead of failover for the device I/O by entering:

dmsetup message device_ID 0 queue_if_no_path

Replace the device_ID with the ID for your device. The 0 value represents the sector and is used when sector information is not needed. For example, enter:

dmsetup message 3600601607cf30e00184589a37a31d911 0 queue_if_no_path

3 Return to failover for the device I/O by entering:

dmsetup message device_ID 0 fail_if_no_path

This command immediately causes all queued I/O to fail. Replace the device_ID with the ID for your device. For example, enter:

dmsetup message 3600601607cf30e00184589a37a31d911 0 fail_if_no_path

To set up queuing I/O for scenarios where all paths fail:

1 In a terminal console, log in as the root user.

2 Open the /etc/multipath.conf file in a text editor.

3 Uncomment the defaults section and its ending bracket, then add the default_features setting, as follows:

defaults {
  default_features "1 queue_if_no_path"
}

4 After you modify the /etc/multipath.conf file, you must run mkinitrd to re-create the initrd on your system, then reboot in order for the changes to take effect.

5 When you are ready to return to failover for the device I/O, enter:

dmsetup message mapname 0 fail_if_no_path


Replace the mapname with the mapped alias name or the device ID for the device. The 0 value represents the sector and is used when sector information is not needed. This command immediately causes all queued I/O to fail and propagates the error to the calling application.

7.13 Resolving Stalled I/O
If all paths fail concurrently and I/O is queued and stalled, do the following:

1 Enter the following command at a terminal console prompt:

dmsetup message mapname 0 fail_if_no_path

Replace mapname with the correct device ID or mapped alias name for the device. The 0 value represents the sector and is used when sector information is not needed. This command immediately causes all queued I/O to fail and propagates the error to the calling application.

2 Reactivate queueing by entering the following command at a terminal console prompt:

dmsetup message mapname 0 queue_if_no_path

7.14 Troubleshooting MPIO

For information about troubleshooting multipath I/O issues on SUSE Linux Enterprise Server, see the following Technical Information Documents (TIDs) in the Novell Support Knowledgebase:

Troubleshooting SLES Multipathing (MPIO) Problems (TID 3231766) (http://www.novell.com/support/search.do?cmd=displayKC&docType=kc&externalId=3231766&sliceId=SAL_Public)

DM MPIO Device Blacklisting Not Honored in multipath.conf (TID 3029706) (http://www.novell.com/support/search.do?cmd=displayKC&docType=kc&externalId=3029706&sliceId=SAL_Public&dialogID=57872426&stateId=0%200%2057878058)

Troubleshooting SCSI (LUN) Scanning Issues (TID 3955167) (http://www.novell.com/support/search.do?cmd=displayKC&docType=kc&externalId=3955167&sliceId=SAL_Public&dialogID=57868704&stateId=0%200%2057878206)

7.15 What's Next

If you want to use software RAIDs, create and configure them before you create file systems on the devices. For information, see the following:

Chapter 8, "Software RAID Configuration," on page 91
Chapter 10, "Managing Software RAIDs 6 and 10 with mdadm," on page 101


8 Software RAID Configuration

The purpose of RAID (redundant array of independent disks) is to combine several hard disk partitions into one large virtual hard disk to optimize performance, data security, or both. Most RAID controllers use the SCSI protocol because it can address a larger number of hard disks in a more effective way than the IDE protocol and is more suitable for parallel processing of commands. There are some RAID controllers that support IDE or SATA hard disks. Software RAID provides the advantages of RAID systems without the additional cost of hardware RAID controllers. However, this requires some CPU time and has memory requirements that make it unsuitable for real high performance computers.

IMPORTANT: Software RAID is not supported underneath clustered file systems such as OCFS2, because RAID does not support concurrent activation. If you want RAID for OCFS2, you need the RAID to be handled by the storage subsystem.

SUSE Linux Enterprise offers the option of combining several hard disks into one soft RAID system. RAID implies several strategies for combining several hard disks in a RAID system, each with different goals, advantages, and characteristics. These variations are commonly known as RAID levels.

Section 8.1, "Understanding RAID Levels," on page 91
Section 8.2, "Soft RAID Configuration with YaST," on page 93
Section 8.3, "Troubleshooting Software RAIDs," on page 94
Section 8.4, "For More Information," on page 94

8.1 Understanding RAID Levels

This section describes common RAID levels 0, 1, 2, 3, 4, 5, and nested RAID levels.

Section 8.1.1, "RAID 0," on page 92
Section 8.1.2, "RAID 1," on page 92
Section 8.1.3, "RAID 2 and RAID 3," on page 92
Section 8.1.4, "RAID 4," on page 92
Section 8.1.5, "RAID 5," on page 92
Section 8.1.6, "Nested RAID Levels," on page 92


8.1.1 RAID 0

This level improves the performance of your data access by spreading out blocks of each file across multiple disk drives. Actually, this is not really a RAID, because it does not provide data backup, but the name RAID 0 for this type of system has become the norm. With RAID 0, two or more hard disks are pooled together. The performance is very good, but the RAID system is destroyed and your data lost if even one hard disk fails.

8.1.2 RAID 1

This level provides adequate security for your data, because the data is copied to another hard disk 1:1. This is known as hard disk mirroring. If a disk is destroyed, a copy of its contents is available on another mirrored disk. All disks except one could be damaged without endangering your data. However, if damage is not detected, damaged data might be mirrored to the correct disk and the data is corrupted that way. The writing performance suffers a little in the copying process compared to when using single disk access (10 to 20 percent slower), but read access is significantly faster in comparison to any one of the normal physical hard disks, because the data is duplicated so it can be scanned in parallel. RAID 1 generally provides nearly twice the read transaction rate of single disks and almost the same write transaction rate as single disks.

8.1.3 RAID 2 and RAID 3

These are not typical RAID implementations. Level 2 stripes data at the bit level rather than the block level. Level 3 provides byte-level striping with a dedicated parity disk and cannot service simultaneous multiple requests. Both levels are rarely used.

8.1.4 RAID 4

Level 4 provides block-level striping just like Level 0, combined with a dedicated parity disk. If a data disk fails, the parity data is used to create a replacement disk. However, the parity disk might create a bottleneck for write access. Nevertheless, Level 4 is sometimes used.

8.1.5 RAID 5

RAID 5 is an optimized compromise between Level 0 and Level 1 in terms of performance and redundancy. The hard disk space equals the number of disks used minus one; for example, four 1 TB disks yield 3 TB of usable space. The data is distributed over the hard disks as with RAID 0. Parity blocks, created on one of the partitions, are there for security reasons. They are linked to each other with XOR, enabling the contents to be reconstructed by the corresponding parity block in case of system failure. With RAID 5, no more than one hard disk can fail at the same time. If one hard disk fails, it must be replaced as soon as possible to avoid the risk of losing data.

8.1.6 Nested RAID Levels

Several other RAID levels have been developed, such as RAIDn, RAID 10, RAID 0+1, RAID 30, and RAID 50. Some of them are proprietary implementations created by hardware vendors. These levels are not very widespread, and are not explained here.


8.2 Soft RAID Configuration with YaST

The YaST soft RAID configuration can be reached from the YaST Expert Partitioner. This partitioning tool enables you to edit and delete existing partitions and create new ones that should be used with soft RAID.

You can create RAID partitions by first clicking Create > Do not format, then selecting 0xFD Linux RAID as the partition identifier. For RAID 0 and RAID 1, at least two partitions are needed; for RAID 1, usually exactly two and no more. If RAID 5 is used, at least three partitions are required. It is recommended to use only partitions of the same size because each segment can contribute only the same amount of space as the smallest sized partition. The RAID partitions should be stored on different hard disks to decrease the risk of losing data if one is defective (RAID 1 and 5) and to optimize the performance of RAID 0. After creating all the partitions to use with RAID, click RAID > Create RAID to start the RAID configuration.

In the next dialog, choose among RAID levels 0, 1, and 5, then click Next. The following dialog (see Figure 8-1) lists all partitions with either the Linux RAID or Linux native type. No swap or DOS partitions are shown. If a partition is already assigned to a RAID volume, the name of the RAID device (for example, /dev/md0) is shown in the list. Unassigned partitions are indicated with "--".
Figure 8-1 RAID Partitions

To add a previously unassigned partition to the selected RAID volume, first select the partition, then click Add. At this point, the name of the RAID device is displayed next to the selected partition. Assign all partitions reserved for RAID. Otherwise, the space on the partition remains unused. After assigning all partitions, click Next to proceed to the settings dialog where you can fine-tune the performance (see Figure 8-2).


Figure 8-2 File System Settings

As with conventional partitioning, set the file system to use as well as encryption and the mount point for the RAID volume. After completing the configuration with Finish, see the /dev/md0 device and others indicated with RAID in the Expert Partitioner.

8.3 Troubleshooting Software RAIDs

Check the /proc/mdstat file to find out whether a RAID partition has been damaged. In the event of a system failure, shut down your Linux system and replace the defective hard disk with a new one partitioned the same way. Then restart your system and enter the command mdadm /dev/mdX --add /dev/sdX. Replace X with your particular device identifiers. This integrates the hard disk automatically into the RAID system and fully reconstructs it.

Although you can access all data during the rebuild, you might encounter some performance issues until the RAID has been fully rebuilt.
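For example, if the array is /dev/md0 and the replacement partition is /dev/sdb1 (hypothetical names; substitute your own), the commands might look like this:

mdadm /dev/md0 --add /dev/sdb1

# watch the rebuild progress
cat /proc/mdstat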

8.4 For More Information

Configuration instructions and more details for soft RAID can be found in the HOWTOs at:

Linux RAID wiki (https://raid.wiki.kernel.org/index.php/Linux_Raid)
The Software RAID HOWTO in the /usr/share/doc/packages/mdadm/Software-RAID.HOWTO.html file

Linux RAID mailing lists are also available, such as linux-raid (http://marc.theaimsgroup.com/?l=linux-raid).


9 Configuring Software RAID for the Root Partition

In SUSE Linux Enterprise Server 11, the Device Mapper RAID tool has been integrated into the YaST Partitioner. You can use the partitioner at install time to create a software RAID for the system device that contains your root (/) partition.

Section 9.1, "Prerequisites for the Software RAID," on page 95
Section 9.2, "Enabling iSCSI Initiator Support at Install Time," on page 96
Section 9.3, "Enabling Multipath I/O Support at Install Time," on page 96
Section 9.4, "Creating a Software RAID Device for the Root (/) Partition," on page 96

9.1 Prerequisites for the Software RAID

Make sure your configuration meets the following requirements:

You need two or more hard drives, depending on the type of software RAID you plan to create.

RAID 0 (Striping): RAID 0 requires two or more devices. RAID 0 offers no fault tolerance benefits, and it is not recommended for the system device.
RAID 1 (Mirroring): RAID 1 requires two devices.
RAID 5 (Redundant Striping): RAID 5 requires three or more devices.

The hard drives should be similarly sized. The RAID assumes the size of the smallest drive.

The block storage devices can be any combination of local (in or directly attached to the machine), Fibre Channel storage subsystems, or iSCSI storage subsystems.

If you are using hardware RAID devices, do not attempt to run software RAIDs on top of them.

If you are using iSCSI target devices, enable the iSCSI initiator support before you create the RAID device.

If your storage subsystem provides multiple I/O paths between the server and its directly attached local devices, Fibre Channel devices, or iSCSI devices that you want to use in the software RAID, you must enable the multipath support before you create the RAID device.


9.2 Enabling iSCSI Initiator Support at Install Time

If there are iSCSI target devices that you want to use for the root (/) partition, you must enable the iSCSI Initiator software to make those devices available to you before you create the software RAID device.

1 Proceed with the YaST install of SUSE Linux Enterprise 11 until you reach the Installation Settings page.

2 Click Partitioning to open the Preparing Hard Disk page, click Custom Partitioning (for experts), then click Next.

3 On the Expert Partitioner page, expand Hard Disks in the System View panel to view the default proposal.

4 On the Hard Disks page, select Configure > Configure iSCSI, then click Continue when prompted to continue with initializing the iSCSI initiator configuration.

9.3 Enabling Multipath I/O Support at Install Time

If there are multiple I/O paths to the devices you want to use to create a software RAID device for the root (/) partition, you must enable multipath support before you create the software RAID device.

1 Proceed with the YaST install of SUSE Linux Enterprise 11 until you reach the Installation Settings page.

2 Click Partitioning to open the Preparing Hard Disk page, click Custom Partitioning (for experts), then click Next.

3 On the Expert Partitioner page, expand Hard Disks in the System View panel to view the default proposal.

4 On the Hard Disks page, select Configure > Configure Multipath, then click Yes when prompted to activate multipath.

This rescans the devices and resolves the multiple paths so that each device is listed only once in the list of hard disks.

9.4 Creating a Software RAID Device for the Root (/) Partition

1 Proceed with the YaST install of SUSE Linux Enterprise 11 until you reach the Installation Settings page.

2 Click Partitioning to open the Preparing Hard Disk page, click Custom Partitioning (for experts), then click Next.

3 On the Expert Partitioner page, expand Hard Disks in the System View panel to view the default proposal, select the proposed partitions, then click Delete.

4 Create a swap partition.

4a On the Expert Partitioner page under Hard Disks, select the device you want to use for the swap partition, then click Add on the Hard Disk Partitions tab.

4b Under New Partition Type, select Primary Partition, then click Next.

4c Under New Partition Size, specify the size to use, then click Next.

4d Under Format Options, select Format partition, then select Swap from the drop-down list.


4e Under Mount Options, select Mount partition, then select swap from the drop-down list.

4f Click Finish.

5 Set up the 0xFD Linux RAID format for each of the devices you want to use for the software RAID.

5a On the Expert Partitioner page under Hard Disks, select the device you want to use in the RAID, then click Add on the Hard Disk Partitions tab.

5b Under New Partition Type, select Primary Partition, then click Next.

5c Under New Partition Size, specify to use the maximum size, then click Next.

5d Under Format Options, select Do not format partition, then select 0xFD Linux RAID from the drop-down list.

5e Under Mount Options, select Do not mount partition.

5f Click Finish.

5g Repeat Step 5a to Step 5f for each device that you plan to use in the software RAID.

6 Create the RAID device.

6a In the System View panel, select RAID, then click Add RAID on the RAID page.

The devices that you prepared in Step 5 are listed in Available Devices.

6b Under RAID Type, select RAID 0 (Striping), RAID 1 (Mirroring), or RAID 5 (Redundant Striping). For example, select RAID 1 (Mirroring).

6c In the Available Devices panel, select the devices you want to use for the RAID, then click Add to move the devices to the Selected Devices panel. Specify two or more devices for a RAID 1, two devices for a RAID 0, or at least three devices for a RAID 5. To continue the example, two devices are selected for RAID 1.


6d Click Next.

6e Under RAID Options, select the chunk size from the drop-down list.

The default chunk size for a RAID 1 (Mirroring) is 4 KB. The default chunk size for a RAID 0 (Striping) is 32 KB. Available chunk sizes are 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, 1 MB, 2 MB, or 4 MB.

6f Under Formatting Options, select Format partition, then select the file system type (such as Ext3) from the File system drop-down list.

6g Under Mounting Options, select Mount partition, then select / from the Mount Point drop-down list.

6h Click Finish.


The software RAID device is managed by Device Mapper, and creates a device under the /dev/md0 path.

7 On the Expert Partitioner page, click Accept.

The new proposal appears under Partitioning on the Installation Settings page.

8 Continue with the install.

Whenever you reboot your server, Device Mapper is started at boot time so that the software RAID is automatically recognized, and the operating system on the root (/) partition can be started.


10 Managing Software RAIDs 6 and 10 with mdadm

This section describes how to create software RAID 6 and 10 devices, using the Multiple Devices Administration (mdadm(8)) tool. You can also use mdadm to create RAIDs 0, 1, 4, and 5. The mdadm tool provides the functionality of legacy programs mdtools and raidtools.

Section 10.1, "Creating a RAID 6," on page 101
Section 10.2, "Creating Nested RAID 10 Devices with mdadm," on page 102
Section 10.3, "Creating a Complex RAID 10 with mdadm," on page 105
Section 10.4, "Creating a Degraded RAID Array," on page 108

10.1 Creating a RAID 6

Section 10.1.1, "Understanding RAID 6," on page 101
Section 10.1.2, "Creating a RAID 6," on page 102

10.1.1 Understanding RAID 6

RAID 6 is essentially an extension of RAID 5 that allows for additional fault tolerance by using a second independent distributed parity scheme (dual parity). Even if two of the hard disk drives fail during the data recovery process, the system continues to be operational, with no data loss.

RAID 6 provides for extremely high data fault tolerance by sustaining multiple simultaneous drive failures. It handles the loss of any two devices without data loss. Accordingly, it requires N+2 drives to store N drives' worth of data. It requires a minimum of 4 devices.

The performance for RAID 6 is slightly lower but comparable to RAID 5 in normal mode and single disk failure mode. It is very slow in dual disk failure mode.
Table 10-1 Comparison of RAID 5 and RAID 6

Feature             RAID 5                               RAID 6
Number of devices   N+1, minimum of 3                    N+2, minimum of 4
Parity              Distributed, single                  Distributed, dual
Performance         Medium impact on write and rebuild   More impact on sequential write than RAID 5
Fault-tolerance     Failure of one component device      Failure of two component devices


10.1.2 Creating a RAID 6

The procedure in this section creates a RAID 6 device /dev/md0 with four devices: /dev/sda1, /dev/sdb1, /dev/sdc1, and /dev/sdd1. Make sure to modify the procedure to use your actual device nodes.

1 Open a terminal console, then log in as the root user or equivalent.

2 Create a RAID 6 device. At the command prompt, enter

mdadm --create /dev/md0 --run --level=raid6 --chunk=128 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

The default chunk size is 64 KB.

3 Create a file system on the RAID 6 device /dev/md0, such as a Reiser file system (reiserfs). For example, at the command prompt, enter

mkfs.reiserfs /dev/md0

Modify the command if you want to use a different file system.

4 Edit the /etc/mdadm.conf file to add entries for the component devices and the RAID device /dev/md0. (A sketch of these entries follows this procedure.)

5 Edit the /etc/fstab file to add an entry for the RAID 6 device /dev/md0.

6 Reboot the server.

The RAID 6 device is mounted to /local.

7 (Optional) Add a hot spare to service the RAID array. For example, at the command prompt enter:

mdadm /dev/md0 -a /dev/sde1
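The entries for Steps 4 and 5 might look like the following. This is a sketch, not the only valid form: identifying the array by UUID is also common, and the fstab mount options are assumptions.

# /etc/mdadm.conf
DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1

# /etc/fstab
/dev/md0  /local  reiserfs  defaults  1 2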

10.2 Creating Nested RAID 10 Devices with mdadm

Section 10.2.1, "Understanding Nested RAID Devices," on page 102
Section 10.2.2, "Creating Nested RAID 10 (1+0) with mdadm," on page 103
Section 10.2.3, "Creating Nested RAID 10 (0+1) with mdadm," on page 104

10.2.1 Understanding Nested RAID Devices

A nested RAID device consists of a RAID array that uses another RAID array as its basic element, instead of using physical disks. The goal of this configuration is to improve the performance and fault tolerance of the RAID.

Linux supports nesting of RAID 1 (mirroring) and RAID 0 (striping) arrays. Generally, this combination is referred to as RAID 10. To distinguish the order of the nesting, this document uses the following terminology:

RAID 1+0: RAID 1 (mirror) arrays are built first, then combined to form a RAID 0 (stripe) array.
RAID 0+1: RAID 0 (stripe) arrays are built first, then combined to form a RAID 1 (mirror) array.

The following table describes the advantages and disadvantages of RAID 10 nesting as 1+0 versus 0+1. It assumes that the storage objects you use reside on different disks, each with a dedicated I/O capability.


Table 10-2 Nested RAID Levels

RAID Level: 10 (1+0)
Description: RAID 0 (stripe) built with RAID 1 (mirror) arrays
Performance and Fault Tolerance: RAID 1+0 provides high levels of I/O performance, data redundancy, and disk fault tolerance. Because each member device in the RAID 0 is mirrored individually, multiple disk failures can be tolerated and data remains available as long as the disks that fail are in different mirrors. You can optionally configure a spare for each underlying mirrored array, or configure a spare to serve a spare group that serves all mirrors.

RAID Level: 10 (0+1)
Description: RAID 1 (mirror) built with RAID 0 (stripe) arrays
Performance and Fault Tolerance: RAID 0+1 provides high levels of I/O performance and data redundancy, but slightly less fault tolerance than a 1+0. If multiple disks fail on one side of the mirror, then the other mirror is available. However, if disks are lost concurrently on both sides of the mirror, all data is lost. This solution offers less disk fault tolerance than a 1+0 solution, but if you need to perform maintenance or maintain the mirror on a different site, you can take an entire side of the mirror offline and still have a fully functional storage device. Also, if you lose the connection between the two sites, either site operates independently of the other. That is not true if you stripe the mirrored segments, because the mirrors are managed at a lower level. If a device fails, the mirror on that side fails because RAID 1 is not fault-tolerant. Create a new RAID 0 to replace the failed side, then resynchronize the mirrors.

10.2.2 Creating Nested RAID 10 (1+0) with mdadm

A nested RAID 1+0 is built by creating two or more RAID 1 (mirror) devices, then using them as component devices in a RAID 0.

IMPORTANT: If you need to manage multiple connections to the devices, you must configure multipath I/O before configuring the RAID devices. For information, see Chapter 7, "Managing Multipath I/O for Devices," on page 49.

The procedure in this section uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices.

Table 10-3 Scenario for Creating a RAID 10 (1+0) by Nesting

Raw Devices            RAID 1 (mirror)   RAID 1+0 (striped mirrors)
/dev/sdb1, /dev/sdc1   /dev/md0          /dev/md2
/dev/sdd1, /dev/sde1   /dev/md1
1 Open a terminal console, then log in as the root user or equivalent.

2 Create two software RAID 1 devices, using two different devices for each RAID 1 device. At the command prompt, enter these two commands:

mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1


mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1

3 Create the nested RAID 1+0 device. At the command prompt, enter the following command using the software RAID 1 devices you created in Step 2:

mdadm --create /dev/md2 --run --level=0 --chunk=64 --raid-devices=2 /dev/md0 /dev/md1

The default chunk size is 64 KB.

4 Create a file system on the RAID 1+0 device /dev/md2, such as a Reiser file system (reiserfs). For example, at the command prompt, enter

mkfs.reiserfs /dev/md2

Modify the command if you want to use a different file system.

5 Edit the /etc/mdadm.conf file to add entries for the component devices and the RAID device /dev/md2.

6 Edit the /etc/fstab file to add an entry for the RAID 1+0 device /dev/md2.

7 Reboot the server.

The RAID 1+0 device is mounted to /local.
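After the reboot, you can verify that the nested device assembled correctly. Both of the following commands are used elsewhere in this guide:

cat /proc/mdstat
mdadm --detail /dev/md2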

10.2.3 Creating Nested RAID 10 (0+1) with mdadm

A nested RAID 0+1 is built by creating two to four RAID 0 (striping) devices, then mirroring them as component devices in a RAID 1.

IMPORTANT: If you need to manage multiple connections to the devices, you must configure multipath I/O before configuring the RAID devices. For information, see Chapter 7, "Managing Multipath I/O for Devices," on page 49.

In this configuration, spare devices cannot be specified for the underlying RAID 0 devices because RAID 0 cannot tolerate a device loss. If a device fails on one side of the mirror, you must create a replacement RAID 0 device, then add it into the mirror (see the sketch after the following procedure).

The procedure in this section uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices.

Table 10-4 Scenario for Creating a RAID 10 (0+1) by Nesting

Raw Devices            RAID 0 (stripe)   RAID 0+1 (mirrored stripes)
/dev/sdb1, /dev/sdc1   /dev/md0          /dev/md2
/dev/sdd1, /dev/sde1   /dev/md1

1 Open a terminal console, then log in as the root user or equivalent.

2 Create two software RAID 0 devices, using two different devices for each RAID 0 device. At the command prompt, enter these two commands:

mdadm --create /dev/md0 --run --level=0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdc1


mdadm --create /dev/md1 --run --level=0 --chunk=64 --raid-devices=2 /dev/sdd1 /dev/sde1

The default chunk size is 64 KB.

3 Create the nested RAID 0+1 device. At the command prompt, enter the following command using the software RAID 0 devices you created in Step 2:

mdadm --create /dev/md2 --run --level=1 --raid-devices=2 /dev/md0 /dev/md1

4 Create a file system on the RAID 0+1 device /dev/md2, such as a Reiser file system (reiserfs). For example, at the command prompt, enter

mkfs.reiserfs /dev/md2

Modify the command if you want to use a different file system.

5 Edit the /etc/mdadm.conf file to add entries for the component devices and the RAID device /dev/md2.

6 Edit the /etc/fstab file to add an entry for the RAID 0+1 device /dev/md2.

7 Reboot the server.

The RAID 0+1 device is mounted to /local.
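If one stripe side later fails, the sequence sketched below re-creates it and adds it back into the mirror, as noted before the procedure. It assumes /dev/md0 was the failed side and that its underlying disks have been replaced; adjust the names to your setup:

# re-create the failed RAID 0 side
mdadm --create /dev/md0 --run --level=0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdc1

# add it back into the RAID 1 mirror, then watch it resynchronize
mdadm /dev/md2 -a /dev/md0
cat /proc/mdstat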

10.3 Creating a Complex RAID 10 with mdadm

Section 10.3.1, "Understanding the mdadm RAID10," on page 105
Section 10.3.2, "Creating a RAID 10 with mdadm," on page 107

10.3.1 Understanding the mdadm RAID10

In mdadm, the RAID10 level creates a single complex software RAID that combines features of both RAID 0 (striping) and RAID 1 (mirroring). Multiple copies of all data blocks are arranged on multiple drives following a striping discipline. Component devices should be the same size.

"Comparing the Complex RAID10 and Nested RAID 10 (1+0)" on page 105
"Number of Replicas in the mdadm RAID10" on page 106
"Number of Devices in the mdadm RAID10" on page 106
"Near Layout" on page 106
"Far Layout" on page 107

Comparing the Complex RAID10 and Nested RAID 10 (1+0)

The complex RAID 10 is similar in purpose to a nested RAID 10 (1+0), but differs in the following ways:


Table 10-5 Complex vs. Nested RAID 10

Feature: Number of devices
mdadm RAID10 Option: Allows an even or odd number of component devices
Nested RAID 10 (1+0): Requires an even number of component devices

Feature: Component devices
mdadm RAID10 Option: Managed as a single RAID device
Nested RAID 10 (1+0): Managed as a nested RAID device

Feature: Striping
mdadm RAID10 Option: Striping occurs in the near or far layout on component devices. The far layout provides sequential read throughput that scales by number of drives, rather than number of RAID 1 pairs.
Nested RAID 10 (1+0): Striping occurs consecutively across component devices

Feature: Multiple copies of data
mdadm RAID10 Option: Two or more copies, up to the number of devices in the array
Nested RAID 10 (1+0): Copies on each mirrored segment

Feature: Hot spare devices
mdadm RAID10 Option: A single spare can service all component devices
Nested RAID 10 (1+0): Configure a spare for each underlying mirrored array, or configure a spare to serve a spare group that serves all mirrors.

Number of Replicas in the mdadm RAID10

When configuring an mdadm RAID10 array, you must specify the number of replicas of each data block that are required. The default number of replicas is 2, but the value can be 2 to the number of devices in the array.

Number of Devices in the mdadm RAID10

You must use at least as many component devices as the number of replicas you specify. However, the number of component devices in a RAID10 array does not need to be a multiple of the number of replicas of each data block. The effective storage size is the number of devices divided by the number of replicas.

For example, if you specify 2 replicas for an array created with 5 component devices, a copy of each block is stored on two different devices. The effective storage size for one copy of all data is 5/2, or 2.5 times the size of a component device.

Near Layout

With the near layout, copies of a block of data are striped near each other on different component devices. That is, multiple copies of one data block are at similar offsets in different devices. Near is the default layout for RAID10. For example, if you use an odd number of component devices and two copies of data, some copies are perhaps one chunk further into the device.

The near layout for the mdadm RAID10 yields read and write performance similar to RAID 0 over half the number of drives.

Near layout with an even number of disks and two replicas:


sda1 sdb1 sdc1 sde1
  0    0    1    1
  2    2    3    3
  4    4    5    5
  6    6    7    7
  8    8    9    9

Near layout with an odd number of disks and two replicas:

sda1 sdb1 sdc1 sde1 sdf1
  0    0    1    1    2
  2    3    3    4    4
  5    5    6    6    7
  7    8    8    9    9
 10   10   11   11   12

Far Layout

The far layout stripes data over the early part of all drives, then stripes a second copy of the data over the later part of all drives, making sure that all copies of a block are on different drives. The second set of values starts halfway through the component drives.

With a far layout, the read performance of the mdadm RAID10 is similar to a RAID 0 over the full number of drives, but write performance is substantially slower than a RAID 0 because there is more seeking of the drive heads. It is best used for read-intensive operations such as for read-only file servers.

The speed of the RAID10 for writing is similar to other mirrored RAID types, like RAID 1 and RAID10 using the near layout, as the elevator of the file system schedules the writes in a more optimal way than raw writing. Using RAID10 in the far layout is well suited for mirrored writing applications. (A sketch of creating a far-layout array follows the layout diagrams below.)

Far layout with an even number of disks and two replicas:

sda1 sdb1 sdc1 sde1
  0    1    2    3
  4    5    6    7
  . . .
  3    0    1    2
  7    4    5    6

Far layout with an odd number of disks and two replicas:

sda1 sdb1 sdc1 sde1 sdf1
  0    1    2    3    4
  5    6    7    8    9
  . . .
  4    0    1    2    3
  9    5    6    7    8
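mdadm selects the layout with its --layout option: n2 for near with two replicas (the default), f2 for far with two replicas. A hedged example that would create the far layout shown above on four hypothetical devices:

mdadm --create /dev/md3 --run --level=10 --layout=f2 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sde1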

10.3.2 Creating a RAID 10 with mdadm

The RAID10 option for mdadm creates a RAID 10 device without nesting. For information about RAID10, see Section 10.3, "Creating a Complex RAID 10 with mdadm," on page 105.

The procedure in this section uses the device names shown in the following table. Make sure to modify the device names with the names of your own devices.


Table 10-6 Scenario for Creating a RAID 10 Using the mdadm RAID10 Option

Raw Devices                                  RAID10 (near or far striping scheme)
/dev/sdf1, /dev/sdg1, /dev/sdh1, /dev/sdi1   /dev/md3

1 In YaST, create a 0xFD Linux RAID partition on the devices you want to use in the RAID, such as /dev/sdf1, /dev/sdg1, /dev/sdh1, and /dev/sdi1.

2 Open a terminal console, then log in as the root user or equivalent.

3 Create a RAID 10 device. At the command prompt, enter (all on the same line):

mdadm --create /dev/md3 --run --level=10 --chunk=4 --raid-devices=4 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1

4 Create a Reiser file system on the RAID 10 device /dev/md3. At the command prompt, enter

mkfs.reiserfs /dev/md3

5 Edit the /etc/mdadm.conf file to add entries for the component devices and the RAID device /dev/md3. For example (a fuller sketch follows this procedure):

DEVICE /dev/md3

6 Edit the /etc/fstab file to add an entry for the RAID 10 device /dev/md3.

7 Reboot the server.

The RAID 10 device is mounted to /raid10.
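The Step 5 and Step 6 entries might look like the following. This is a sketch: identifying the array by UUID instead is also common, and the fstab options are assumptions.

# /etc/mdadm.conf
DEVICE /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
ARRAY /dev/md3 devices=/dev/sdf1,/dev/sdg1,/dev/sdh1,/dev/sdi1

# /etc/fstab
/dev/md3  /raid10  reiserfs  defaults  1 2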

10.4 Creating a Degraded RAID Array

A degraded array is one in which some devices are missing. Degraded arrays are supported only for RAID 1, RAID 4, RAID 5, and RAID 6. These RAID types are designed to withstand some missing devices as part of their fault-tolerance features. Typically, degraded arrays occur when a device fails. It is possible to create a degraded array on purpose.

RAID Type   Allowable Number of Slots Missing
RAID 1      All but one device
RAID 4      One slot
RAID 5      One slot
RAID 6      One or two slots

To create a degraded array in which some devices are missing, simply give the word missing in place of a device name. This causes mdadm to leave the corresponding slot in the array empty.


When creating a RAID 5 array, mdadm automatically creates a degraded array with an extra spare drive. This is because building the spare into a degraded array is generally faster than resynchronizing the parity on a non-degraded, but not clean, array. You can override this feature with the --force option (a sketch follows the procedure below).

Creating a degraded array might be useful if you want to create a RAID, but one of the devices you want to use already has data on it. In that case, you create a degraded array with other devices, copy data from the in-use device to the RAID that is running in degraded mode, add the device into the RAID, then wait while the RAID is rebuilt so that the data is now across all devices. An example of this process is given in the following procedure:

1 To create a degraded RAID 1 device /dev/md0, using one single drive /dev/sda1, enter the following at the command prompt:

mdadm --create /dev/md0 -l 1 -n 2 /dev/sda1 missing

The device should be the same size or larger than the device you plan to add to it.

2 If the device you want to add to the mirror contains data that you want to move to the RAID array, copy it now to the RAID array while it is running in degraded mode.

3 Add a device to the mirror. For example, to add /dev/sdb1 to the RAID, enter the following at the command prompt:

mdadm /dev/md0 -a /dev/sdb1

You can add only one device at a time. You must wait for the kernel to build the mirror and bring it fully online before you add another mirror.

4 Monitor the build progress by entering the following at the command prompt:

cat /proc/mdstat

To see the rebuild progress while being refreshed every second, enter

watch -n 1 cat /proc/mdstat
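As noted before the procedure, you can suppress the automatic spare behavior when creating a RAID 5. A hedged example with hypothetical device names:

# build the RAID 5 non-degraded immediately, instead of starting it
# degraded with an extra spare
mdadm --create /dev/md0 --force -l 5 -n 3 /dev/sda1 /dev/sdb1 /dev/sdc1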


11 Resizing Software RAID Arrays with mdadm

This section describes how to increase or reduce the size of a software RAID 1, 4, 5, or 6 device with the Multiple Device Administration (mdadm(8)) tool.

WARNING: Before starting any of the tasks described in this section, make sure that you have a valid backup of all of the data.

Section 11.1, "Understanding the Resizing Process," on page 111
Section 11.2, "Increasing the Size of a Software RAID," on page 112
Section 11.3, "Decreasing the Size of a Software RAID," on page 116

11.1 Understanding the Resizing Process

Resizing an existing software RAID device involves increasing or decreasing the space contributed by each component partition.

Section 11.1.1, "Guidelines for Resizing a Software RAID," on page 111
Section 11.1.2, "Overview of Tasks," on page 112

11.1.1 Guidelines for Resizing a Software RAID

The mdadm(8) tool supports resizing only for software RAID levels 1, 4, 5, and 6. These RAID levels provide disk fault tolerance so that one component partition can be removed at a time for resizing. In principle, it is possible to perform a hot resize for RAID partitions, but you must take extra care for your data when doing so.

The file system that resides on the RAID must also be able to be resized in order to take advantage of the changes in available space on the device. In SUSE Linux Enterprise Server 11, file system resizing utilities are available for the file systems Ext2, Ext3, and ReiserFS. The utilities support increasing and decreasing the size as follows:
Table 11-1 File System Support for Resizing

File System    Utility           Increase Size            Decrease Size
Ext2 or Ext3   resize2fs         Yes, offline only        Yes, offline only
ReiserFS       resize_reiserfs   Yes, online or offline   Yes, offline only

Resizing any partition or file system involves some risks that can potentially result in losing data.


WARNING: To avoid data loss, make sure to back up your data before you begin any resizing task.

11.1.2 Overview of Tasks

Resizing the RAID involves the following tasks. The order in which these tasks are performed depends on whether you are increasing or decreasing its size.
Table 11-2 Tasks Involved in Resizing a RAID

Task: Resize each of the component partitions.
Description: Increase or decrease the active size of each component partition. You remove only one component partition at a time, modify its size, then return it to the RAID.
Order If Increasing Size: 1
Order If Decreasing Size: 2

Task: Resize the software RAID itself.
Description: The RAID does not automatically know about the increases or decreases you make to the underlying component partitions. You must inform it about the new size.
Order If Increasing Size: 2
Order If Decreasing Size: 3

Task: Resize the file system.
Description: You must resize the file system that resides on the RAID. This is possible only for file systems that provide tools for resizing, such as Ext2, Ext3, and ReiserFS.
Order If Increasing Size: 3
Order If Decreasing Size: 1

11.2 Increasing the Size of a Software RAID

Before you begin, review the guidelines in Section 11.1, "Understanding the Resizing Process," on page 111.

Section 11.2.1, "Increasing the Size of Component Partitions," on page 112
Section 11.2.2, "Increasing the Size of the RAID Array," on page 114
Section 11.2.3, "Increasing the Size of the File System," on page 114

11.2.1 Increasing the Size of Component Partitions

Apply the procedure in this section to increase the size of a RAID 1, 4, 5, or 6. For each component partition in the RAID, remove the partition from the RAID, modify its size, return it to the RAID, then wait until the RAID stabilizes to continue. While a partition is removed, the RAID operates in degraded mode and has no or reduced disk fault tolerance. Even for RAIDs that can tolerate multiple concurrent disk failures, do not remove more than one component partition at a time.

WARNING: If a RAID does not have disk fault tolerance, or it is simply not consistent, data loss results if you remove any of its partitions. Be very careful when removing partitions, and make sure that you have a backup of your data available.


The procedure in this section uses the device names shown in the following table. Make sure to modify the names to use the names of your own devices.

Table 11-3 Scenario for Increasing the Size of Component Partitions

RAID Device   Component Partitions
/dev/md0      /dev/sda1, /dev/sdb1, /dev/sdc1
To increase the size of the component partitions for the RAID:

1 Open a terminal console, then log in as the root user or equivalent.

2 Make sure that the RAID array is consistent and synchronized by entering

cat /proc/mdstat

If your RAID array is still synchronizing according to the output of this command, you must wait until synchronization is complete before continuing.

3 Remove one of the component partitions from the RAID array. For example, to remove /dev/sda1, enter

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

In order to succeed, both the fail and remove actions must be done.

4 Increase the size of the partition that you removed in Step 3 by doing one of the following:

Increase the size of the partition, using a disk partitioner such as fdisk(8), cfdisk(8), or parted(8). This option is the usual choice.

Replace the disk on which the partition resides with a higher-capacity device. This option is possible only if no other file systems on the original disk are accessed by the system. When the replacement device is added back into the RAID, it takes much longer to synchronize the data because all of the data that was on the original device must be rebuilt.

5 Re-add the partition to the RAID array. For example, to add /dev/sda1, enter

mdadm -a /dev/md0 /dev/sda1

Wait until the RAID is synchronized and consistent before continuing with the next partition.

6 Repeat Step 2 through Step 5 for each of the remaining component devices in the array. Make sure to modify the commands for the correct component partition.

7 If you get a message that tells you that the kernel could not re-read the partition table for the RAID, you must reboot the computer after all partitions have been resized to force an update of the partition table.

8 Continue with Section 11.2.2, "Increasing the Size of the RAID Array," on page 114.


11.2.2 Increasing the Size of the RAID Array

After you have resized each of the component partitions in the RAID (see Section 11.2.1, "Increasing the Size of Component Partitions," on page 112), the RAID array configuration continues to use the original array size until you force it to be aware of the newly available space. You can specify a size for the RAID or use the maximum available space.

The procedure in this section uses the device name /dev/md0 for the RAID device. Make sure to modify the name to use the name of your own device.

1 Open a terminal console, then log in as the root user or equivalent.

2 Check the size of the array and the device size known to the array by entering

mdadm -D /dev/md0 | grep -e "Array Size" -e "Device Size"

3 Do one of the following:

Increase the size of the array to the maximum available size by entering

mdadm --grow /dev/md0 -z max

Increase the size of the array to a specified value by entering

mdadm --grow /dev/md0 -z size

Replace size with an integer value in kilobytes (a kilobyte is 1024 bytes) for the desired size.

4 Recheck the size of your array and the device size known to the array by entering

mdadm -D /dev/md0 | grep -e "Array Size" -e "Device Size"

5 Do one of the following:

If your array was successfully resized, continue with Section 11.2.3, "Increasing the Size of the File System," on page 114.

If your array was not resized as you expected, you must reboot, then try this procedure again.

11.2.3 Increasing the Size of the File System

After you increase the size of the array (see Section 11.2.2, "Increasing the Size of the RAID Array," on page 114), you are ready to resize the file system.

You can increase the size of the file system to the maximum space available or specify an exact size. When specifying an exact size for the file system, make sure the new size satisfies the following conditions:

The new size must be greater than the size of the existing data; otherwise, data loss occurs.

The new size must be equal to or less than the current RAID size because the file system size cannot extend beyond the space available.


Ext2 or Ext3

Ext2 and Ext3 file systems can be resized when mounted or unmounted with the resize2fs command.

1 Open a terminal console, then log in as the root user or equivalent.

2 Increase the size of the file system using one of the following methods:

To extend the file system size to the maximum available size of the software RAID device called /dev/md0, enter

resize2fs /dev/md0

If a size parameter is not specified, the size defaults to the size of the partition.

To extend the file system to a specific size, enter

resize2fs /dev/md0 size

The size parameter specifies the requested new size of the file system. If no units are specified, the unit of the size parameter is the block size of the file system. Optionally, the size parameter can be suffixed by one of the following unit designators: s for 512-byte sectors; K for kilobytes (1 kilobyte is 1024 bytes); M for megabytes; or G for gigabytes.

Wait until the resizing is completed before continuing.

3 If the file system is not mounted, mount it now.

For example, to mount an Ext2 file system for a RAID named /dev/md0 at mount point /raid, enter

mount -t ext2 /dev/md0 /raid

4 Check the effect of the resize on the mounted file system by entering

df -h

The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system. The -h option prints sizes in human-readable format, such as 1K, 234M, or 2G.

ReiserFS

As with Ext2 and Ext3, a ReiserFS file system can be increased in size while mounted or unmounted. The resize is done on the block device of your RAID array.

1 Open a terminal console, then log in as the root user or equivalent.

2 Increase the size of the file system on the software RAID device called /dev/md0, using one of the following methods:

To extend the file system size to the maximum available size of the device, enter

resize_reiserfs /dev/md0

When no size is specified, this increases the volume to the full size of the partition.

To extend the file system to a specific size, enter

resize_reiserfs -s size /dev/md0

Replace size with the desired size in bytes. You can also specify units on the value, such as 50000K (kilobytes), 250M (megabytes), or 2G (gigabytes). Alternatively, you can specify an increase to the current size by prefixing the value with a plus (+) sign. For example, the following command increases the size of the file system on /dev/md0 by 500 MB:

resize_reiserfs -s +500M /dev/md0

Wait until the resizing is completed before continuing.

3 If the file system is not mounted, mount it now.

For example, to mount a ReiserFS file system for a RAID named /dev/md0 at mount point /raid, enter

mount -t reiserfs /dev/md0 /raid

4 Check the effect of the resize on the mounted file system by entering

df -h

The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system. The -h option prints sizes in human-readable format, such as 1K, 234M, or 2G.

11.3 Decreasing the Size of a Software RAID

Before you begin, review the guidelines in Section 11.1, "Understanding the Resizing Process," on page 111.

Section 11.3.1, "Decreasing the Size of the File System," on page 116
Section 11.3.2, "Decreasing the Size of Component Partitions," on page 118
Section 11.3.3, "Decreasing the Size of the RAID Array," on page 119

11.3.1 Decreasing the Size of the File System

When decreasing the size of the file system on a RAID device, make sure the new size satisfies the following conditions:

The new size must be greater than the size of the existing data; otherwise, data loss occurs.

The new size must be equal to or less than the current RAID size because the file system size cannot extend beyond the space available.

In SUSE Linux Enterprise Server 11 SP1, only Ext2, Ext3, and ReiserFS provide utilities for decreasing the size of the file system. Use the appropriate procedure below for decreasing the size of your file system.

The procedures in this section use the device name /dev/md0 for the RAID device. Make sure to modify commands to use the name of your own device.

Ext2 or Ext3

The Ext2 and Ext3 file systems can be resized when mounted or unmounted.

1 Open a terminal console, then log in as the root user or equivalent.

2 Decrease the size of the file system on the RAID by entering

resize2fs /dev/md0 <size>

Replace size with an integer value in kilobytes for the desired size. (A kilobyte is 1024 bytes.) Wait until the resizing is completed before continuing.

3 If the file system is not mounted, mount it now. For example, to mount an Ext2 file system for a RAID named /dev/md0 at mount point /raid, enter

mount -t ext2 /dev/md0 /raid

4 Check the effect of the resize on the mounted file system by entering

df -h

The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system. The -h option prints sizes in human-readable format, such as 1K, 234M, or 2G.

ReiserFS

ReiserFS file systems can be decreased in size only if the volume is unmounted.

1 Open a terminal console, then log in as the root user or equivalent.

2 Unmount the device by entering

umount /mnt/point

If the partition you are attempting to decrease in size contains system files (such as the root (/) volume), unmounting is possible only when booting from a bootable CD or floppy.

3 Decrease the size of the file system on the software RAID device called /dev/md0 by entering

resize_reiserfs -s size /dev/md0

Replace size with the desired size in bytes. You can also specify units on the value, such as 50000K (kilobytes), 250M (megabytes), or 2G (gigabytes). Alternatively, you can specify a decrease to the current size by prefixing the value with a minus (-) sign. For example, the following command reduces the size of the file system on /dev/md0 by 500 MB:

resize_reiserfs -s -500M /dev/md0

Wait until the resizing is completed before continuing.

4 Mount the file system by entering

mount -t reiserfs /dev/md0 /mnt/point

5 Check the effect of the resize on the mounted file system by entering

df -h

The Disk Free (df) command shows the total size of the disk, the number of blocks used, and the number of blocks available on the file system. The -h option prints sizes in human-readable format, such as 1K, 234M, or 2G.


11.3.2 Decreasing the Size of Component Partitions

Resize the RAID's component partitions one at a time. For each component partition, you remove it from the RAID, modify its partition size, return the partition to the RAID, then wait until the RAID stabilizes. While a partition is removed, the RAID operates in degraded mode and has no or reduced disk fault tolerance. Even for RAIDs that can tolerate multiple concurrent disk failures, you should never remove more than one component partition at a time.

WARNING: If a RAID does not have disk fault tolerance, or it is simply not consistent, data loss results if you remove any of its partitions. Be very careful when removing partitions, and make sure that you have a backup of your data available.

The procedure in this section uses the device names shown in the following table. Make sure to modify the commands to use the names of your own devices.

Table 11-4 Scenario for Decreasing the Size of Component Partitions

RAID Device   Component Partitions
/dev/md0      /dev/sda1, /dev/sdb1, /dev/sdc1

To resize the component partitions for the RAID:

1 Open a terminal console, then log in as the root user or equivalent.

2 Make sure that the RAID array is consistent and synchronized by entering

cat /proc/mdstat

If your RAID array is still synchronizing according to the output of this command, you must wait until synchronization is complete before continuing.

3 Remove one of the component partitions from the RAID array. For example, to remove /dev/sda1, enter

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

In order to succeed, both the fail and remove actions must be done.

4 Decrease the size of the partition that you removed in Step 3 by doing one of the following:

Use a disk partitioner such as fdisk, cfdisk, or parted to decrease the size of the partition.

Replace the disk on which the partition resides with a different device. This option is possible only if no other file systems on the original disk are accessed by the system. When the replacement device is added back into the RAID, it takes much longer to synchronize the data.

5 Re-add the partition to the RAID array. For example, to add /dev/sda1, enter

mdadm -a /dev/md0 /dev/sda1

Wait until the RAID is synchronized and consistent before continuing with the next partition.

6 Repeat Step 2 through Step 5 for each of the remaining component devices in the array. Make sure to modify the commands for the correct component partition.


7 If you get a message that tells you that the kernel could not re-read the partition table for the RAID, you must reboot the computer after resizing all of its component partitions.

8 Continue with Section 11.3.3, "Decreasing the Size of the RAID Array," on page 119.

11.3.3 Decreasing the Size of the RAID Array

After you have resized each of the component partitions in the RAID, the RAID array configuration continues to use the original array size until you force it to be aware of the newly available space. Use the --grow option to force it to read the change in available disk size. You can specify a size for the RAID or use the maximum available space.

The procedure in this section uses the device name /dev/md0 for the RAID device. Make sure to modify commands to use the name of your own device.

1 Open a terminal console, then log in as the root user or equivalent.

2 Check the size of the array and the device size known to the array by entering

mdadm -D /dev/md0 | grep -e "Array Size" -e "Device Size"

3 Do one of the following:

Decrease the size of the array to the maximum available size by entering

mdadm --grow /dev/md0 -z max

Decrease the size of the array to a specified value by entering

mdadm --grow /dev/md0 -z size

Replace size with an integer value in kilobytes for the desired size. (A kilobyte is 1024 bytes.)

4 Recheck the size of your array and the device size known to the array by entering

mdadm -D /dev/md0 | grep -e "Array Size" -e "Device Size"

5 Do one of the following:

If your array was successfully resized, you are done.

If your array was not resized as you expected, you must reboot, then try this procedure again.


12 iSNS for Linux

Storage area networks (SANs) can contain many disk drives that are dispersed across complex networks. This can make device discovery and device ownership difficult. iSCSI initiators must be able to identify storage resources in the SAN and determine whether they have access to them.

Internet Storage Name Service (iSNS) is a standards-based service that is available beginning with SUSE Linux Enterprise Server (SLES) 10 Support Pack 2. iSNS facilitates the automated discovery, management, and configuration of iSCSI devices on a TCP/IP network. iSNS provides intelligent storage discovery and management services comparable to those found in Fibre Channel networks.

IMPORTANT: iSNS should be used only in secure internal networks.

Section 12.1, "How iSNS Works," on page 121
Section 12.2, "Installing iSNS Server for Linux," on page 122
Section 12.3, "Configuring iSNS Discovery Domains," on page 123
Section 12.4, "Starting iSNS," on page 128
Section 12.5, "Stopping iSNS," on page 128
Section 12.6, "For More Information," on page 128

12.1 How iSNS Works

For an iSCSI initiator to discover iSCSI targets, it needs to identify which devices in the network are storage resources and what IP addresses it needs to access them. A query to an iSNS server returns a list of iSCSI targets and the IP addresses that the initiator has permission to access.

Using iSNS, you create iSNS discovery domains and discovery domain sets. You then group or organize iSCSI targets and initiators into discovery domains and group the discovery domains into discovery domain sets. By dividing storage nodes into domains, you can limit the discovery process of each host to the most appropriate subset of targets registered with iSNS, which allows the storage network to scale by reducing the number of unnecessary discoveries and by limiting the amount of time each host spends establishing discovery relationships. This lets you control and simplify the number of targets and initiators that must be discovered.


Figure 12-1 iSNS Discovery Domains and Discovery Domain Sets

Both iSCSI targets and iSCSI initiators use iSNS clients to initiate transactions with iSNS servers by using the iSNS protocol. They then register device attribute information in a common discovery domain, download information about other registered clients, and receive asynchronous notification of events that occur in their discovery domain.

iSNS servers respond to iSNS protocol queries and requests made by iSNS clients using the iSNS protocol. iSNS servers initiate iSNS protocol state change notifications and store properly authenticated information submitted by a registration request in an iSNS database.

Some of the benefits provided by iSNS for Linux include:

Provides an information facility for registration, discovery, and management of networked storage assets.
Integrates with the DNS infrastructure.
Consolidates registration, discovery, and management of iSCSI storage.
Simplifies storage management implementations.
Improves scalability compared to other discovery methods.

An example of the benefits iSNS provides can be better understood through the following scenario:

Suppose you have a company that has 100 iSCSI initiators and 100 iSCSI targets. Depending on your configuration, all iSCSI initiators could potentially try to discover and connect to any of the 100 iSCSI targets. This could create discovery and connection difficulties. By grouping initiators and targets into discovery domains, you can prevent iSCSI initiators in one department from discovering the iSCSI targets in another department. The result is that the iSCSI initiators in a specific department only discover those iSCSI targets that are part of the department's discovery domain.

12.2 Installing iSNS Server for Linux

iSNS Server for Linux is included with SLES 10 SP2 and later, but is not installed or configured by default. You must install the iSNS package modules (the isns and yast2-isns modules) and configure the iSNS service.

NOTE: iSNS can be installed on the same server where iSCSI target or iSCSI initiator software is installed. Installing both the iSCSI target software and iSCSI initiator software on the same server is not supported.


To install iSNS for Linux:

1 Start YaST and select Network Services > iSNS Server.

2 When prompted to install the isns package, click Install.

3 Follow the install dialog instructions to provide the SUSE Linux Enterprise Server 11 installation disks.

When the installation is complete, the iSNS Service configuration dialog opens automatically to the Service tab.

4 In Address of iSNS Server, specify the DNS name or IP address of the iSNS Server.

5 In Service Start, select one of the following:

When Booting: The iSNS service starts automatically on server startup.

Manually (Default): The iSNS service must be started manually by entering rcisns start or /etc/init.d/isns start at the server console of the server where you install it. (A sketch of the commands follows this procedure.)

6 Specify the following firewall settings:

Open Port in Firewall: Select the check box to open the firewall and allow access to the service from remote computers. The firewall port is closed by default.

Firewall Details: If you open the firewall port, the port is open on all network interfaces by default. Click Firewall Details to select interfaces on which to open the port, select the network interfaces to use, then click OK.

7 Click Finish to apply the configuration settings and complete the installation.

8 Continue with Section 12.3, "Configuring iSNS Discovery Domains," on page 123.
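If you chose Manually in Step 5, you can start the service, and optionally enable it at boot, from the command line. A sketch; the chkconfig utility is assumed to be available, as on a standard SLES install:

rcisns start

# optionally enable the service at boot
chkconfig isns on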

12.3 Configuring iSNS Discovery Domains

In order for iSCSI initiators and targets to use the iSNS service, they must belong to a discovery domain.


IMPORTANT: The iSNS service must be installed and running to configure iSNS discovery domains. For information, see Section 12.4, "Starting iSNS," on page 128.

Section 12.3.1, "Creating iSNS Discovery Domains," on page 124
Section 12.3.2, "Creating iSNS Discovery Domain Sets," on page 125
Section 12.3.3, "Adding iSCSI Nodes to a Discovery Domain," on page 126
Section 12.3.4, "Adding Discovery Domains to a Discovery Domain Set," on page 128

12.3.1 Creating iSNS Discovery Domains

A default discovery domain named default DD is automatically created when you install the iSNS service. The existing iSCSI targets and initiators that have been configured to use iSNS are automatically added to the default discovery domain.

To create a new discovery domain:

1 Start YaST and under Network Services, select iSNS Server.

2 Click the Discovery Domains tab.

The Discovery Domains area lists all discovery domains. You can create new discovery domains, or delete existing ones. Deleting a domain removes the members from the domain, but it does not delete the iSCSI node members.

The Discovery Domain Members area lists all iSCSI nodes assigned to a selected discovery domain. Selecting a different discovery domain refreshes the list with members from that discovery domain. You can add and delete iSCSI nodes from a selected discovery domain. Deleting an iSCSI node removes it from the domain, but it does not delete the iSCSI node.

Creating an iSCSI node allows a node that is not yet registered to be added as a member of the discovery domain. When the iSCSI initiator or target registers this node, then it becomes part of this domain.

When an iSCSI initiator performs a discovery request, the iSNS service returns all iSCSI node targets that are members of the same discovery domain.


3 Click the Create Discovery Domain button.

  You can also select an existing discovery domain and click the Delete button to remove that discovery domain.
4 Specify the name of the discovery domain you are creating, then click OK.
5 Continue with Section 12.3.2, "Creating iSNS Discovery Domain Sets," on page 125.

12.3.2 Creating iSNS Discovery Domain Sets


Discovery domains must belong to a discovery domain set. You can create a discovery domain and add nodes to that discovery domain, but it is not active and the iSNS service does not function unless you add the discovery domain to a discovery domain set. A default discovery domain set named default DDS is automatically created when you install iSNS and the default discovery domain is automatically added to that domain set.

To create a discovery domain set:
1 Start YaST and under Network Services, select iSNS Server.
2 Click the Discovery Domains Sets tab.

  The Discovery Domain Sets area lists all of the discovery domain sets. A discovery domain must be a member of a discovery domain set in order to be active.
  In an iSNS database, a discovery domain set contains discovery domains, which in turn contain iSCSI node members.
  The Discovery Domain Set Members area lists all discovery domains that are assigned to a selected discovery domain set. Selecting a different discovery domain set refreshes the list with members from that discovery domain set. You can add and delete discovery domains from a selected discovery domain set. Removing a discovery domain removes it from the domain set, but it does not delete the discovery domain.
  Adding a discovery domain to a set allows a not yet registered iSNS discovery domain to be added as a member of the discovery domain set.


3 Click the Create Discovery Domain Set button.

  You can also select an existing discovery domain set and click the Delete button to remove that discovery domain set.
4 Specify the name of the discovery domain set you are creating, then click OK.
5 Continue with Section 12.3.3, "Adding iSCSI Nodes to a Discovery Domain," on page 126.

12.3.3 Adding iSCSI Nodes to a Discovery Domain


1 Start YaST and under Network Services, select iSNS Server.
2 Click the iSCSI Nodes tab.


3 Review the list of nodes to make sure that the iSCSI targets and initiators that you want to use the iSNS service are listed.
  If an iSCSI target or initiator is not listed, you might need to restart the iSCSI service on the node. You can do this by running the rcopen-iscsi restart command to restart an initiator or the rciscsitarget restart command to restart a target.
  You can select an iSCSI node and click the Delete button to remove that node from the iSNS database. This is useful if you are no longer using an iSCSI node or have renamed it.
  The iSCSI node is automatically added to the list (iSNS database) again when you restart the iSCSI service or reboot the server unless you remove or comment out the iSNS portion of the iSCSI configuration file (see the sketch below).
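  For an initiator, the iSNS portion consists of the isns.address and isns.port parameters in /etc/iscsi/iscsid.conf; for a target, the iSNSServer line in /etc/ietd.conf serves the same purpose (see Section 13.2.3). A minimal sketch for the initiator side, assuming the iSNS server address used elsewhere in this guide:

  # iSNS server that this node registers with; comment out to stop re-registration
  isns.address = 192.168.1.111
  isns.port = 3205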
4 Click the Discovery Domains tab, select the desired discovery domain, then click the Display Members button.
5 Click Add existing iSCSI Node, select the node you want to add to the domain, then click Add Node.
6 Repeat Step 5 for as many nodes as you want to add to the discovery domain, then click Done when you are finished adding nodes.
  An iSCSI node can belong to more than one discovery domain.
7 Continue with Section 12.3.4, "Adding Discovery Domains to a Discovery Domain Set," on page 128.


12.3.4 Adding Discovery Domains to a Discovery Domain Set


1 Start YaST and under Network Services, select iSNS Server.
2 Click the Discovery Domains Set tab.
3 Select Create Discovery Domain Set to add a new set to the list of discovery domain sets.
4 Choose a discovery domain set to modify.
5 Click Add Discovery Domain, select the discovery domain you want to add to the discovery domain set, then click Add Discovery Domain.
6 Repeat the last step for as many discovery domains as you want to add to the discovery domain set, then click Done.
  A discovery domain can belong to more than one discovery domain set.

12.4 Starting iSNS
iSNS must be started at the server where you install it. Enter one of the following commands at a terminal console as the root user:
rcisns start
/etc/init.d/isns start

You can also use the stop, status, and restart options with iSNS.

iSNS can also be configured to start automatically each time the server is rebooted:
1 Start YaST and under Network Services, select iSNS Server.
2 With the Service tab selected, specify the IP address of your iSNS server, then click Save Address.
3 In the Service Start section of the screen, select When Booting.

You can also choose to start the iSNS server manually. You must then use the rcisns start command to start the service each time the server is restarted.
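From the command line, the same boot-time behavior can be enabled with the standard init-script tools (a minimal sketch; chkconfig is the usual SLES utility for this):

chkconfig isns on

After this, the isns init script runs automatically in the default runlevels at each boot.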

12.5 Stopping iSNS
iSNS must be stopped at the server where it is running. Enter one of the following commands at a terminal console as the root user:
rcisns stop
/etc/init.d/isns stop

12.6 For More Information


For information, see the Linux iSNS for iSCSI project (http://sourceforge.net/projects/linuxisns/). The electronic mailing list for this project is Linux iSNS Discussion (http://sourceforge.net/mailarchive/forum.php?forum_name=linuxisnsdiscussion).

General information about iSNS is available in RFC 4171: Internet Storage Name Service (http://www.ietf.org/rfc/rfc4171).


13 Mass Storage over IP Networks: iSCSI

One of the central tasks in computer centers and when operating servers is providing hard disk capacity for server systems. Fibre Channel is often used for this purpose. iSCSI (Internet SCSI) solutions provide a lower-cost alternative to Fibre Channel that can leverage commodity servers and Ethernet networking equipment. Linux iSCSI provides iSCSI initiator and target software for connecting Linux servers to central storage systems.


Figure 13-1: iSCSI SAN with an iSNS Server. (The figure shows Servers 1 through 7, each with Ethernet cards; Servers 1 through 6 run iSCSI initiator software and connect over Ethernet switches and the network backbone to an Ethernet iSCSI target server with shared disks in the iSCSI SAN.)

iSCSI is a storage networking protocol that facilitates data transfers of SCSI packets over TCP/IP networks between block storage devices and servers. iSCSI target software runs on the target server and defines the logical units as iSCSI target devices. iSCSI initiator software runs on different servers and connects to the target devices to make the storage devices available on that server.

IMPORTANT: It is not supported to run iSCSI target software and iSCSI initiator software on the same server in a production environment.

The iSCSI target and initiator servers communicate by sending SCSI packets at the IP level in your LAN. When an application running on the initiator server starts an inquiry for an iSCSI target device, the operating system produces the necessary SCSI commands. The SCSI commands are then

Mass Storage over IP Networks: iSCSI

129

embedded in IP packets and encrypted as necessary by software that is commonly known as the iSCSI initiator. The packets are transferred across the internal IP network to the corresponding iSCSI remote station, called the iSCSI target.

Many storage solutions provide access over iSCSI, but it is also possible to run a Linux server that provides an iSCSI target. In this case, it is important to set up a Linux server that is optimized for file system services. The iSCSI target accesses block devices in Linux. Therefore, it is possible to use RAID solutions to increase disk space as well as a lot of memory to improve data caching. For more information about RAID, also see Chapter 8, "Software RAID Configuration," on page 91.

Section 13.1, "Installing iSCSI," on page 130
Section 13.2, "Setting Up an iSCSI Target," on page 131
Section 13.3, "Configuring iSCSI Initiator," on page 137
Section 13.4, "Troubleshooting iSCSI," on page 141
Section 13.5, "Additional Information," on page 143

13.1 Installing iSCSI
YaST includes entries for iSCSI Target and iSCSI Initiator software, but the packages are not installed by default.

IMPORTANT: It is not supported to run iSCSI target software and iSCSI initiator software on the same server in a production environment.

Section 13.1.1, "Installing iSCSI Target Software," on page 130
Section 13.1.2, "Installing the iSCSI Initiator Software," on page 130
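As an alternative to the YaST procedures below, the underlying packages can be installed from the command line with zypper (a minimal sketch; keep the two roles on different servers):

# On the server that will export storage
zypper install iscsitarget

# On each server that will access the storage
zypper install open-iscsi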

13.1.1 Installing iSCSI Target Software


Install the iSCSI target software on the server where you want to create iSCSI target devices.
1 Open YaST, and log in as the root user.
2 Select Network Services > iSCSI Target.
3 When you are prompted to install the iscsitarget package, click Install.
4 Follow the on-screen install instructions, and provide the installation media as needed.

  When the installation is complete, YaST opens to the iSCSI Target Overview page with the Service tab selected.
5 Continue with Section 13.2, "Setting Up an iSCSI Target," on page 131.

13.1.2 Installing the iSCSI Initiator Software


Install the iSCSI initiator software on each server where you want to access the target devices that you set up on the iSCSI target server.
1 Open YaST, and log in as the root user.
2 Select Network Services > iSCSI Initiator.
3 When you are prompted to install the open-iscsi package, click Install.


4 Follow the on-screen install instructions, and provide the installation media as needed.

  When the installation is complete, YaST opens to the iSCSI Initiator Overview page with the Service tab selected.
5 Continue with Section 13.3, "Configuring iSCSI Initiator," on page 137.

13.2 Setting Up an iSCSI Target


SUSE Linux Enterprise Server comes with an open source iSCSI target solution that evolved from the Ardis iSCSI target. A basic setup can be done with YaST, but to take full advantage of iSCSI, a manual setup is required.

Section 13.2.1, "Preparing the Storage Space," on page 131
Section 13.2.2, "Creating iSCSI Targets with YaST," on page 132
Section 13.2.3, "Configuring an iSCSI Target Manually," on page 135
Section 13.2.4, "Configuring Online Targets with ietadm," on page 136

13.2.1 Preparing the Storage Space


The iSCSI target configuration exports existing block devices to iSCSI initiators. You must prepare the storage space you want to use in the target devices by setting up unformatted partitions or devices by using the Partitioner in YaST, or by partitioning the devices from the command line.

IMPORTANT: After you set up a device or partition for use as an iSCSI target, you never access it directly via its local path. Do not specify a mount point for it when you create it.

"Partitioning Devices" on page 131
"Partitioning Devices in a Virtual Environment" on page 132

Partitioning Devices
1 Log in as the root user, then open YaST.
2 Select System > Partitioner.
3 Click Yes to continue through the warning about using the Partitioner.
4 Click Add to create a partition, but do not format it, and do not mount it.

  iSCSI targets can use unformatted partitions with Linux, Linux LVM, or Linux RAID file system IDs.
4a Select Primary Partition, then click Next.
4b Specify the amount of space to use, then click Next.
4c Select Do not format, then specify the file system ID type.
4d Select Do not mount.
4e Click Finish.
5 Repeat Step 4 for each area that you want to use later as an iSCSI LUN.
6 Click Accept to keep your changes, then close YaST.
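The same preparation can be done from the command line. A minimal sketch with parted, assuming a hypothetical empty disk /dev/sdc that can be dedicated to iSCSI LUNs:

# Create a new partition table, then one unformatted primary partition that
# spans the whole disk. Do not create a file system on it and do not mount it.
parted /dev/sdc mklabel msdos
parted /dev/sdc mkpart primary 0% 100%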


Partitioning Devices in a Virtual Environment


You can use a Xen guest server as the iSCSI target server. You must assign the storage space you want to use for the iSCSI storage devices to the guest virtual machine, then access the space as virtual disks within the guest environment. Each virtual disk can be a physical block device, such as an entire disk, partition, or volume, or it can be a file-backed disk image where the virtual disk is a single image file on a larger physical disk on the Xen host server. For the best performance, create each virtual disk from a physical disk or a partition. After you set up the virtual disks for the guest virtual machine, start the guest server, then configure the new blank virtual disks as iSCSI target devices by following the same process as for a physical server.

File-backed disk images are created on the Xen host server, then assigned to the Xen guest server. By default, Xen stores file-backed disk images in the /var/lib/xen/images/vm_name directory, where vm_name is the name of the virtual machine.

For example, if you want to create the disk image /var/lib/xen/images/vm_one/xen-0 with a size of 4 GB, first make sure that the directory is there, then create the image itself.
1 Log in to the host server as the root user.
2 At a terminal console prompt, enter the following commands:

mkdir -p /var/lib/xen/images/vm_one
dd if=/dev/zero of=/var/lib/xen/images/vm_one/xen-0 seek=1M bs=4096 count=1

3 Assign the file system image to the guest virtual machine in the Xen configuration file.
4 Log in as the root user on the guest server, then use YaST to set up the virtual block device by using the process in "Partitioning Devices" on page 131.

13.2.2 Creating iSCSI Targets with YaST


To configure the iSCSI target, run the iSCSI Target module in YaST. The configuration is split into three tabs. In the Service tab, select the start mode and the firewall settings. If you want to access the iSCSI target from a remote machine, select Open Port in Firewall. If an iSNS server should manage the discovery and access control, activate iSNS Access Control and enter the IP address of your iSNS server. You cannot use hostnames, but must use the IP address. For more about iSNS, read Chapter 12, "iSNS for Linux," on page 121.

The Global tab provides settings for the iSCSI server. The authentication set here is used for the discovery of services, not for accessing the targets. If you do not want to restrict the access to the discovery, use No Authentication.

If authentication is needed, there are two possibilities to consider. One is that an initiator must prove that it has the permissions to run a discovery on the iSCSI target. This is done with Incoming Authentication. The other is that the iSCSI target must prove to the initiator that it is the expected target. Therefore, the iSCSI target can also provide a user name and password. This is done with Outgoing Authentication. Find more information about authentication in RFC 3720 (http://www.ietf.org/rfc/rfc3720.txt).

The targets are defined in the Targets tab. Use Add to create a new iSCSI target. The first dialog asks for information about the device to export.


Target
The Target line has a fixed syntax that looks like the following:
iqn.yyyy-mm.<reversed domain name>:unique_id

It always starts with iqn. yyyy-mm is the format of the date when this target is activated. Find more about naming conventions in RFC 3722 (http://www.ietf.org/rfc/rfc3722.txt).

Identifier
The Identifier is freely selectable. It should follow some scheme to make the whole system more structured.

LUN
It is possible to assign several LUNs to a target. To do this, select a target in the Targets tab, then click Edit. Then, add new LUNs to an existing target.

Path
Add the path to the block device or file system image to export.

The next menu configures the access restrictions of the target. The configuration is very similar to the configuration of the discovery authentication. In this case, at least an incoming authentication should be set up.

Next finishes the configuration of the new target, and brings you back to the overview page of the Target tab. Activate your changes by clicking Finish.

To create a target device:
1 Open YaST, and log in as the root user.
2 Select Network Services > iSCSI Target.

  YaST opens to the iSCSI Target Overview page with the Service tab selected.

3 In the Service Start area, select one of the following:

  When booting: Automatically start the service on subsequent server reboots.


  Manually (default): Start the service manually.
4 If you are using iSNS for target advertising, select the iSNS Access Control check box, then type the IP address.
5 If desired, open the firewall ports to allow access to the server from remote computers.
5a Select the Open Port in Firewall check box.
5b Specify the network interfaces where you want to open the port by clicking Firewall Details, selecting the check box next to a network interface to enable it, then clicking OK to accept the settings.
6 If authentication is required to connect to target devices you set up on this server, select the Global tab, deselect No Authentication to enable authentication, then specify the necessary credentials for incoming and outgoing authentication.
  The No Authentication option is enabled by default. For a more secure configuration, you can specify authentication for incoming, outgoing, or both incoming and outgoing. You can also specify multiple sets of credentials for incoming authentication by adding pairs of user names and passwords to the list under Incoming Authentication.

7 Configure the iSCSI target devices.
7a Select the Targets tab.
7b If you have not already done so, select and delete the example iSCSI target from the list, then confirm the deletion by clicking Continue.
7c Click Add to add a new iSCSI target.

   The iSCSI target automatically presents an unformatted partition or block device and completes the Target and Identifier fields.
7d You can accept this, or browse to select a different space.

   You can also subdivide the space to create LUNs on the device by clicking Add and specifying sectors to allocate to that LUN. If you need additional options for these LUNs, select Expert Settings.


7e Click Next.


7f Repeat Step 7c to Step 7e for each iSCSI target device you want to create.
7g (Optional) On the Service tab, click Save to export the information about the configured iSCSI targets to a file.
   This makes it easier to later provide this information to consumers of the resources.
7h Click Finish to create the devices, then click Yes to restart the iSCSI software stack.

13.2.3 Configuring an iSCSI Target Manually


Configure an iSCSI target in /etc/ietd.conf. All parameters in this file before the first Target declaration are global for the file. Authentication information in this portion has a special meaning: it is not global, but is used for the discovery of the iSCSI target.

If you have access to an iSNS server, you should first configure the file to tell the target about this server. The address of the iSNS server must always be given as an IP address. You cannot specify the DNS name for the server. The configuration for this functionality looks like the following:
iSNSServer 192.168.1.111
iSNSAccessControl no

This configuration makes the iSCSI target register itself with the iSNS server, which in turn provides the discovery for initiators. For more about iSNS, see Chapter 12, "iSNS for Linux," on page 121. The access control for the iSNS discovery is not supported. Just keep iSNSAccessControl no.

All direct iSCSI authentication can be done in two directions. The iSCSI target can require the iSCSI initiator to authenticate with the IncomingUser, which can be added multiple times. The iSCSI initiator can also require the iSCSI target to authenticate. Use OutgoingUser for this. Both have the same syntax:
IncomingUser <username> <password>
OutgoingUser <username> <password>

The authentication is followed by one or more target definitions. For each target, add a Target section. This section always starts with a Target identifier followed by definitions of logical unit numbers:


Target iqn.yyyy-mm.<reversed domain name>[:identifier]
    Lun 0 Path=/dev/mapper/system-v3
    Lun 1 Path=/dev/hda4
    Lun 2 Path=/var/lib/xen/images/xen-1,Type=fileio

In the Target line, yyyy-mm is the date when this target is activated, and identifier is freely selectable. Find more about naming conventions in RFC 3722 (http://www.ietf.org/rfc/rfc3722.txt). Three different block devices are exported in this example. The first block device is a logical volume (see also Chapter 4, "LVM Configuration," on page 27), the second is an IDE partition, and the third is an image available in the local file system. All these look like block devices to an iSCSI initiator.

Before activating the iSCSI target, add at least one IncomingUser after the Lun definitions. It does the authentication for the use of this target.

To activate all your changes, restart the iscsitarget daemon with rciscsitarget restart. Check your configuration in the /proc file system:
cat /proc/net/iet/volume
tid:1 name:iqn.2006-02.com.example.iserv:systems
    lun:0 state:0 iotype:fileio path:/dev/mapper/system-v3
    lun:1 state:0 iotype:fileio path:/dev/hda4
    lun:2 state:0 iotype:fileio path:/var/lib/xen/images/xen-1


There are many more options that control the behavior of the iSCSI target. For more information, see the man page of ietd.conf.

Active sessions are also displayed in the /proc file system. For each connected initiator, an extra entry is added to /proc/net/iet/session:


cat /proc/net/iet/session
tid:1 name:iqn.2006-02.com.example.iserv:system-v3
    sid:562949957419520 initiator:iqn.2005-11.de.suse:cn=rome.example.com,01.9ff842f5645
        cid:0 ip:192.168.178.42 state:active hd:none dd:none
    sid:281474980708864 initiator:iqn.2006-02.de.suse:01.6f7259c88b70
        cid:0 ip:192.168.178.72 state:active hd:none dd:none

13.2.4 Configuring Online Targets with ietadm


When changes to the iSCSI target configuration are necessary, you must always restart the target to activate changes that are done in the configuration file. Unfortunately, all active sessions are interrupted in this process. To maintain an undisturbed operation, the changes should be done in the main configuration file /etc/ietd.conf, but also made manually to the current configuration with the administration utility ietadm.

To create a new iSCSI target with a LUN, first update your configuration file. The additional entry could be:
Target iqn.2006-02.com.example.iserv:system2
    Lun 0 Path=/dev/mapper/system-swap2
    IncomingUser joe secret

To set up this configuration manually, proceed as follows:
1 Create a new target with the command ietadm --op new --tid=2 --params Name=iqn.2006-02.com.example.iserv:system2.
2 Add a logical unit with ietadm --op new --tid=2 --lun=0 --params Path=/dev/mapper/system-swap2.
3 Set the user name and password combination on this target with ietadm --op new --tid=2 --user --params=IncomingUser=joe,Password=secret.
4 Check the configuration with cat /proc/net/iet/volume.

It is also possible to delete active connections. First, check all active connections with the command cat /proc/net/iet/session. This might look like:

cat /proc/net/iet/session
tid:1 name:iqn.2006-03.com.example.iserv:system
    sid:281474980708864 initiator:iqn.1996-04.com.example:01.82725735af5
        cid:0 ip:192.168.178.72 state:active hd:none dd:none

To delete the session with the session ID 281474980708864, use the command ietadm --op delete --tid=1 --sid=281474980708864 --cid=0. Be aware that this makes the device inaccessible on the client system and processes accessing this device are likely to hang.

ietadm can also be used to change various configuration parameters. Obtain a list of the global variables with ietadm --op show --tid=1 --sid=0. The output looks like:


InitialR2T=Yes
ImmediateData=Yes
MaxConnections=1
MaxRecvDataSegmentLength=8192
MaxXmitDataSegmentLength=8192
MaxBurstLength=262144
FirstBurstLength=65536
DefaultTime2Wait=2
DefaultTime2Retain=20
MaxOutstandingR2T=1
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
ErrorRecoveryLevel=0
HeaderDigest=None
DataDigest=None
OFMarker=No
IFMarker=No
OFMarkInt=Reject
IFMarkInt=Reject

All of these parameters can be easily changed. For example, if you want to change the maximum number of connections to two, use
ietadm --op update --tid=1 --params=MaxConnections=2.

In the file /etc/ietd.conf, the associated line should look like MaxConnections 2.

WARNING: The changes that you make with the ietadm utility are not permanent for the system. These changes are lost at the next reboot if they are not added to the /etc/ietd.conf configuration file. Depending on the usage of iSCSI in your network, this might lead to severe problems.

There are several more options available for the ietadm utility. Use ietadm -h to find an overview. The abbreviations there are target ID (tid), session ID (sid), and connection ID (cid). They can also be found in /proc/net/iet/session.

13.3 Configuring iSCSI Initiator


The iSCSI initiator, also called an iSCSI client, can be used to connect to any iSCSI target. This is not restricted to the iSCSI target solution explained in Section 13.2, "Setting Up an iSCSI Target," on page 131. The configuration of iSCSI initiator involves two major steps: the discovery of available iSCSI targets and the setup of an iSCSI session. Both can be done with YaST.

Section 13.3.1, "Using YaST for the iSCSI Initiator Configuration," on page 137
Section 13.3.2, "Setting Up the iSCSI Initiator Manually," on page 140
Section 13.3.3, "The iSCSI Client Databases," on page 141

13.3.1 Using YaST for the iSCSI Initiator Configuration


The iSCSI Initiator Overview in YaST is divided into three tabs:

Service: The Service tab can be used to enable the iSCSI initiator at boot time. It also offers to set a unique Initiator Name and an iSNS server to use for the discovery. The default port for iSNS is 3205.

Connected Targets: The Connected Targets tab gives an overview of the currently connected iSCSI targets. Like the Discovered Targets tab, it also gives the option to add new targets to the system.


On this page, you can select a target device, then toggle the start-up setting for each iSCSI target device:

Automatic: This option is used for iSCSI targets that are to be connected when the iSCSI service itself starts up. This is the typical configuration.

Onboot: This option is used for iSCSI targets that are to be connected during boot; that is, when root (/) is on iSCSI. As such, the iSCSI target device will be evaluated from the initrd on server boots.

Discovered Targets: Discovered Targets provides the possibility of manually discovering iSCSI targets in the network.

"Configuring the iSCSI Initiator" on page 138
"Discovering iSCSI Targets by Using iSNS" on page 139
"Discovering iSCSI Targets Manually" on page 139
"Setting the Start-up Preference for iSCSI Target Devices" on page 140

Configuring the iSCSI Initiator


1 Open YaST, and log in as the root user.
2 Select Network Services > iSCSI Initiator.

  YaST opens to the iSCSI Initiator Overview page with the Service tab selected.

3 In the Service Start area, select one of the following:

  When booting: Automatically start the initiator service on subsequent server reboots.
  Manually (default): Start the service manually.
4 Specify or verify the Initiator Name.

  Specify a well-formed iSCSI qualified name (IQN) for the iSCSI initiator on this server. The initiator name must be globally unique on your network. The IQN uses the following general format:


iqn.yyyy-mm.com.mycompany:n1:n2

where n1 and n2 are alphanumeric characters. For example:
iqn.1996-04.de.suse:01:9c83a3e15f64

The Initiator Name is automatically completed with the corresponding value from the /etc/iscsi/initiatorname.iscsi file on the server.

If the server has iBFT (iSCSI Boot Firmware Table) support, the Initiator Name is completed with the corresponding value in the iBFT, and you are not able to change the initiator name in this interface. Use the BIOS Setup to modify it instead. The iBFT is a block of information containing various parameters useful to the iSCSI boot process, including iSCSI target and initiator descriptions for the server.
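The initiatorname.iscsi file contains a single InitiatorName line. For example, using the sample IQN from above:

InitiatorName=iqn.1996-04.de.suse:01:9c83a3e15f64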


5 Use either of the following methods to discover iSCSI targets on the network.

  iSNS: To use iSNS (Internet Storage Name Service) for discovering iSCSI targets, continue with "Discovering iSCSI Targets by Using iSNS" on page 139.
  Discovered Targets: To discover iSCSI target devices manually, continue with "Discovering iSCSI Targets Manually" on page 139.

Discovering iSCSI Targets by Using iSNS


Before you can use this option, you must have already installed and configured an iSNS server in your environment. For information, see Chapter 12, "iSNS for Linux," on page 121.
1 In YaST, select iSCSI Initiator, then select the Service tab.
2 Specify the IP address of the iSNS server and port.

  The default port is 3205.
3 On the iSCSI Initiator Overview page, click Finish to save and apply your changes.

Discovering iSCSI Targets Manually


Repeat the following process for each of the iSCSI target servers that you want to access from the server where you are setting up the iSCSI initiator.
1 In YaST, select iSCSI Initiator, then select the Discovered Targets tab.
2 Click Discovery to open the iSCSI Initiator Discovery dialog.
3 Enter the IP address and change the port if needed.

  The default port is 3260.
4 If authentication is required, deselect No Authentication, then specify the credentials for the Incoming or Outgoing authentication.
5 Click Next to start the discovery and connect to the iSCSI target server.
6 If credentials are required, after a successful discovery, use Login to activate the target.

  You are prompted for authentication credentials to use the selected iSCSI target.
7 Click Next to finish the configuration.

  If everything went well, the target now appears in Connected Targets.
  The virtual iSCSI device is now available.
8 On the iSCSI Initiator Overview page, click Finish to save and apply your changes.


9 You can find the local device path for the iSCSI target device by using the lsscsi command:

lsscsi
[1:0:0:0]  disk  IET  VIRTUAL-DISK  0  /dev/sda

Setting the Start-up Preference for iSCSI Target Devices


1 In YaST, select iSCSI Initiator, then select the Connected Targets tab to view a list of the iSCSI target devices that are currently connected to the server.
2 Select the iSCSI target device that you want to manage.
3 Click Toggle Start-Up to modify the setting:

  Automatic: This option is used for iSCSI targets that are to be connected when the iSCSI service itself starts up. This is the typical configuration.
  Onboot: This option is used for iSCSI targets that are to be connected during boot; that is, when root (/) is on iSCSI. As such, the iSCSI target device will be evaluated from the initrd on server boots.


4 Click Finish to save and apply your changes.

13.3.2 Setting Up the iSCSI Initiator Manually


Both the discovery and the configuration of iSCSI connections require a running iscsid. When running the discovery the first time, the internal database of the iSCSI initiator is created in the directory /var/lib/open-iscsi.

If your discovery is password protected, provide the authentication information to iscsid. Because the internal database does not exist when doing the first discovery, it cannot be used at this time. Instead, the configuration file /etc/iscsid.conf must be edited to provide the information. To add your password information for the discovery, add the following lines to the end of /etc/iscsid.conf:
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = <username>
discovery.sendtargets.auth.password = <password>

The discovery stores all received values in an internal persistent database. In addition, it displays all detected targets. Run this discovery with the command iscsiadm -m discovery --type=st --portal=<targetip>. The output should look like:


149.44.171.99:3260,1 iqn.2006-02.com.example.iserv:systems

To discover the available targets on an iSNS server, use the command iscsiadm --mode discovery --type isns --portal <targetip>.

For each target defined on the iSCSI target, one line appears. For more information about the stored data, see Section 13.3.3, "The iSCSI Client Databases," on page 141.

The special --login option of iscsiadm creates all needed devices:


iscsiadm -m node -n iqn.2006-02.com.example.iserv:systems --login

The newly generated devices show up in the output of lsscsi and can now be accessed by mount.
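For example, a minimal sketch of mounting the new device, assuming it appears as the hypothetical /dev/sdb and already carries an ext3 file system:

mkdir -p /mnt/iscsi
mount /dev/sdb /mnt/iscsi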


13.3.3 The iSCSI Client Databases


All information that was discovered by the iSCSI initiator is stored in two database files that reside in /var/lib/open-iscsi. There is one database for the discovery of targets and one for the discovered nodes. When accessing a database, you first must select if you want to get your data from the discovery or from the node database. Do this with the -m discovery and -m node parameters of iscsiadm. Using iscsiadm just with one of these parameters gives an overview of the stored records:
iscsiadm -m discovery
149.44.171.99:3260,1 iqn.2006-02.com.example.iserv:systems

The target name in this example is iqn.2006-02.com.example.iserv:systems. This name is needed for all actions that relate to this special data set. To examine the content of the data record with the ID iqn.2006-02.com.example.iserv:systems, use the following command:


iscsiadm -m node --targetname iqn.2006-02.com.example.iserv:systems
node.name = iqn.2006-02.com.example.iserv:systems
node.transport_name = tcp
node.tpgt = 1
node.active_conn = 1
node.startup = manual
node.session.initial_cmdsn = 0
node.session.reopen_max = 32
node.session.auth.authmethod = CHAP
node.session.auth.username = joe
node.session.auth.password = ********
node.session.auth.username_in = <empty>
node.session.auth.password_in = <empty>
node.session.timeo.replacement_timeout = 0
node.session.err_timeo.abort_timeout = 10
node.session.err_timeo.reset_timeout = 30
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
....

To edit the value of one of these variables, use the command iscsiadm with the update operation. For example, if you want iscsid to log in to the iSCSI target when it initializes, set the variable node.startup to the value automatic:


iscsiadm -m node -n iqn.2006-02.com.example.iserv:systems --op=update --name=node.startup --value=automatic

Remove obsolete data sets with the delete operation. If the target iqn.2006-02.com.example.iserv:systems is no longer a valid record, delete this record with the following command:
iscsiadm -m node -n iqn.2006-02.com.example.iserv:systems --op=delete

IMPORTANT: Use this option with caution because it deletes the record without any additional confirmation prompt.

To get a list of all discovered targets, run the iscsiadm -m node command.

13.4 Troubleshooting iSCSI
Section 13.4.1, "Hotplug Doesn't Work for Mounting iSCSI Targets," on page 142
Section 13.4.2, "Data Packets Dropped for iSCSI Traffic," on page 142
Section 13.4.3, "Using iSCSI Volumes with LVM," on page 142


13.4.1 Hotplug Doesn't Work for Mounting iSCSI Targets


In SLES 10, you could add the hotplug option to your device in the /etc/fstab file to mount iSCSI targets. For example:
/dev/disk/by-uuid-blah /oracle/db ext3 hotplug,rw 0 2

For SLES 11, the hotplug option no longer works. Use the nofail option instead. For example:
/dev/sdb1 /mnt/mountpoint ext3 acl,user,nofail 0 0

For information, see TID 7004427: /etc/fstab entry does not mount iSCSI device on boot up (http://www.novell.com/support/php/search.do?cmd=displayKC&docType=kc&externalId=7004427).

13.4.2 Data Packets Dropped for iSCSI Traffic


A firewall might drop packets if it gets too busy. The default for the SUSE Firewall is to drop packets after three minutes. If you find that iSCSI traffic packets are being dropped, you can consider configuring the SUSE Firewall to queue packets instead of dropping them when it gets too busy.

13.4.3 Using iSCSI Volumes with LVM


Use the troubleshooting tips in this section when using LVM on iSCSI targets.

"Check that iSCSI Initiator Discovery Occurs at Boot" on page 142
"Check that iSCSI Target Discovery Occurs at Boot" on page 142

Check that iSCSI Initiator Discovery Occurs at Boot


When you set up the iSCSI Initiator, make sure to enable discovery at boot time so that udev can discover the iSCSI devices at boot time and set up the devices to be used by LVM.

Check that iSCSI Target Discovery Occurs at Boot


Remember that udev provides the default setup for devices in SLES 11. Make sure that all of the applications that create devices have a Runlevel setting to run at boot so that udev can recognize and assign devices for them at system startup. If the application or service is not started until later, udev does not create the device automatically as it would at boot time.

You can check your runlevel settings for LVM2 and iSCSI in YaST by going to System > System Services (Runlevel) > Expert Mode. The following services should be enabled at boot (B):

boot.lvm
boot.open-iscsi
open-iscsi
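From the command line, the equivalent settings can be made with chkconfig (a minimal sketch):

chkconfig boot.lvm on
chkconfig boot.open-iscsi on
chkconfig open-iscsi on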


13.5 Additional Information
The iSCSI protocol has been available for several years. There are many reviews and additional documentation comparing iSCSI with SAN solutions, doing performance benchmarks, or just describing hardware solutions. Important pages for more information about open-iscsi are:

Open-iSCSI Project (http://www.open-iscsi.org/)
AppNote: iFolder on Open Enterprise Server Linux Cluster using iSCSI (http://www.novell.com/coolsolutions/appnote/15394.html)

There is also some online documentation available. See the man pages for iscsiadm, iscsid, ietd.conf, and ietd and the example configuration file /etc/iscsid.conf.



14 Volume Snapshots

A file system snapshot is a copy-on-write technology that monitors changes to an existing volume's data blocks so that when a write is made to one of the blocks, the block's value at the snapshot time is copied to a snapshot volume. In this way, a point-in-time copy of the data is preserved until the snapshot volume is deleted.

Section 14.1, "Understanding Volume Snapshots," on page 145
Section 14.2, "Creating Linux Snapshots with LVM," on page 146
Section 14.3, "Monitoring a Snapshot," on page 146
Section 14.4, "Deleting Linux Snapshots," on page 146

14.1 Understanding Volume Snapshots


A file system snapshot contains metadata about and data blocks from an original volume that have changed since the snapshot was taken. When you access data via the snapshot, you see a point-in-time copy of the original volume. There is no need to restore data from backup media or to overwrite the changed data.

In a Xen host environment, the virtual machine must be using an LVM logical volume as its storage back end, as opposed to using a virtual disk file.

Linux snapshots allow you to create a backup from a point-in-time view of the file system. The snapshot is created instantly and persists until you delete it. You can back up the file system from the snapshot while the volume itself continues to be available for users. The snapshot initially contains some metadata about the snapshot, but no actual data from the original volume. The snapshot uses copy-on-write technology to detect when data changes in an original data block. It copies the value it held when the snapshot was taken to a block in the snapshot volume, then allows the new data to be stored in the original block. As blocks change from their original value, the snapshot size grows.

When you are sizing the snapshot, consider how much data is expected to change on the original volume and how long you plan to keep the snapshot. The amount of space that you allocate for a snapshot volume can vary, depending on the size of the original volume, how long you plan to keep the snapshot, and the number of data blocks that are expected to change during the snapshot's lifetime. The snapshot volume cannot be resized after it is created. As a guide, create a snapshot volume that is about 10% of the size of the original logical volume. If you anticipate that every block in the original volume will change at least one time before you delete the snapshot, then the snapshot volume should be at least as large as the original volume plus some additional space for metadata about the snapshot volume. Less space is required if the data changes infrequently or if the expected lifetime is sufficiently brief.

IMPORTANT: During the snapshot's lifetime, the snapshot must be mounted before its original volume can be mounted.


When you are done with the snapshot, it is important to remove it from the system. A snapshot eventually fills up completely as data blocks change on the original volume. When the snapshot is full, it is disabled, which prevents you from remounting the original volume.

Remove snapshots in a last created, first deleted order.

14.2 Creating Linux Snapshots with LVM


The Logical Volume Manager (LVM) can be used for creating snapshots of your file system.
1 Open a terminal console, log in as the root user, then enter

lvcreate -s -L 1G -n snap_volume source_volume_path

  For example:
lvcreate -s -L 1G -n linux01-snap /dev/lvm/linux01

  The snapshot is created as the /dev/lvm/linux01-snap volume.
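For example, a minimal sketch of backing up from the snapshot while the original volume stays in use (the mount point and archive path are hypothetical):

# Mount the snapshot read-only, archive its contents, then unmount it
mkdir -p /mnt/linux01-snap
mount -o ro /dev/lvm/linux01-snap /mnt/linux01-snap
tar czf /tmp/linux01-backup.tar.gz -C /mnt/linux01-snap .
umount /mnt/linux01-snap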

14.3 Monitoring a Snapshot
1 Open a terminal console, log in as the root user, then enter

lvdisplay snap_volume

  For example:
lvdisplay /dev/vg01/linux01-snap

  --- Logical volume ---
  LV Name                /dev/lvm/linux01
  VG Name                vg01
  LV UUID                QHVJYh-PR3s-A4SG-s4Aa-MyWN-Ra7a-HL47KL
  LV Write Access        read/write
  LV snapshot status     active destination for /dev/lvm/linux01
  LV Status              available
  # open                 0
  LV Size                80.00 GB
  Current LE             1024
  COW-table size         8.00 GB
  COW-table LE           512
  Allocated to snapshot  30%
  Snapshot chunk size    8.00 KB
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           254:5

14.4 Deleting Linux Snapshots


1 Open a terminal console, log in as the root user, then enter

lvremove snap_volume_path

  For example:
lvremove /dev/lvm/linux01-snap


15 Troubleshooting Storage Issues

This section describes how to work around known issues for devices, software RAIDs, multipath I/O, and volumes.

Section 15.1, "Is DM-MPIO Available for the Boot Partition?," on page 147
Section 15.2, "Issues for iSCSI," on page 147
Section 15.3, "Issues for Multipath I/O," on page 147
Section 15.4, "Issues for Software RAIDs," on page 147

15.1 Is DM-MPIO Available for the Boot Partition?


Device Mapper Multipath I/O (DM-MPIO) is supported for the boot partition, beginning in SUSE Linux Enterprise Server 10 Support Pack 1.

15.2 Issues for iSCSI

See Section 13.4, "Troubleshooting iSCSI," on page 141.

15.3 Issues for Multipath I/O

See Section 7.14, "Troubleshooting MPIO," on page 89.

15.4 Issues for Software RAIDs

See Section 8.3, "Troubleshooting Software RAIDs," on page 94.



Documentation Updates

This section contains information about documentation content changes made to the SUSE Linux Enterprise Server Storage Administration Guide since the initial release of SUSE Linux Enterprise Server 11. If you are an existing user, review the change entries to readily identify modified content. If you are a new user, simply read the guide in its current state.

This document was updated on the following dates:

Section A.1, "December 15, 2011," on page 149
Section A.2, "September 8, 2011," on page 150
Section A.3, "July 12, 2011," on page 151
Section A.4, "June 14, 2011," on page 151
Section A.5, "May 5, 2011," on page 151
Section A.6, "January 2011," on page 152
Section A.7, "September 16, 2010," on page 153
Section A.8, "June 21, 2010," on page 153
Section A.9, "May 2010 (SLES 11 SP1)," on page 154
Section A.10, "February 23, 2010," on page 156
Section A.11, "December 1, 2009," on page 157
Section A.12, "October 20, 2009," on page 158
Section A.13, "August 3, 2009," on page 159
Section A.14, "June 22, 2009," on page 159
Section A.15, "May 21, 2009," on page 160

A.1 December 15, 2011


Updates were made to the following sections. The changes are explained below.

Section A.1.1, "Managing Multipath I/O for Devices," on page 150
Section A.1.2, "Resizing File Systems," on page 150


A.1.1 Managing Multipath I/O for Devices


Location: Table 7-6, "Multipath Attributes," on page 72
Change: Recommendations were added for the no_path_retry and failback settings when multipath I/O is used in a cluster environment.

Location: Table 7-6, "Multipath Attributes," on page 72
Change: The path-selector option names and settings were corrected: round-robin 0, least-pending 0, service-time 0, queue-length 0.

A.1.2 Resizing File Systems


Location: Section 5.1, "Guidelines for Resizing," on page 39; Section 5.4, "Decreasing the Size of an Ext2, Ext3, or Ext4 File System," on page 42
Change: The resize2fs command allows only the Ext3 file system to be resized if mounted. The size of an Ext3 volume can be increased or decreased when the volume is mounted or unmounted. The Ext2/4 file systems must be unmounted for increasing or decreasing the volume size.

A.2 September 8, 2011
Updates were made to the following section. The changes are explained below.

Section A.2.1, "Managing Multipath I/O for Devices," on page 150

A.2.1 Managing Multipath I/O for Devices


Location: "Configuring Default Multipath Behavior in /etc/multipath.conf" on page 70
Change: The default getuid path for SLES 11 is /lib/udev/scsi_id.

Location: "Managing I/O in Error Situations" on page 88; "Resolving Stalled I/O" on page 89
Change: In the dmsetup message commands, the 0 value represents the sector and is used when sector information is not needed.


A.3 July 12, 2011


Updates were made to the following section. The changes are explained below.

Section A.3.1, "Managing Multipath I/O for Devices," on page 151

A.3.1 Managing Multipath I/O for Devices


Location: Section 7.2.3, "Using LVM2 on Multipath Devices," on page 52; Section 7.8, "Configuring Multipath I/O for an Existing Software RAID," on page 82
Change: Running mkinitrd is needed only if the root (/) device or any parts of it (such as /var, /etc, /log) are on the SAN and multipath is needed to boot.

A.4 June 14, 2011


Updatesweremadetothefollowingsection.Thechangesareexplainedbelow. Section A.4.1,ManagingMultipathI/OforDevices,onpage 151 Section A.4.2,WhatsNewforStorageinSLES11,onpage 151

A.4.1 Managing Multipath I/O for Devices


Location: path_grouping_policy in Table 7-6, "Multipath Attributes," on page 72
Change: The default setting changed from multibus to failover in SLES 11.

A.4.2 What's New for Storage in SLES 11


Location: Section 2.2.12, "Change from Multibus to Failover as the Default Setting for the MPIO Path Grouping Policy," on page 24
Change: This section is new.

A.5 May 5, 2011
This release fixes broken links and removes obsolete references.


A.6 January 2011
Updates were made to the following sections. The changes are explained below.

Section A.6.1, "LVM Configuration," on page 152
Section A.6.2, "Managing Multipath I/O for Devices," on page 152
Section A.6.3, "Resizing File Systems," on page 152

A.6.1 LVM Configuration
Location: Section 4.3, "Creating Volume Groups," on page 30
Change: LVM2 does not restrict the number of physical extents. Having a large number of extents has no impact on I/O performance to the logical volume, but it slows down the LVM tools.

A.6.2 Managing Multipath I/O for Devices


Location: "Tuning the Failover for Specific Host Bus Adapters"
Change: This section was removed. For HBA failover guidance, refer to your vendor documentation.

A.6.3 Resizing File Systems


Location: Section 11.3.1, "Decreasing the Size of the File System," on page 116
Change: Decreasing the size of the file system is supported when the file system is unmounted.


A.7 September 16, 2010


Updates were made to the following sections. The changes are explained below.

Section A.7.1, "LVM Configuration," on page 153

A.7.1 LVM Configuration
Location: "Creating LVM Partitions" on page 29
Change: The discussion and procedure were expanded to explain how to configure a partition that uses the entire disk. The procedure was modified to use the Hard Disk partitioning feature in the YaST Partitioner.

Location: All LVM Management sections
Change: Procedures throughout the chapter were modified to use Volume Management in the YaST Partitioner.

Location: "Resizing a Volume Group" on page 36
Change: This section is new.

Location: "Resizing a Logical Volume with YaST" on page 36
Change: This section is new.

Location: "Deleting a Volume Group" on page 38
Change: This section is new.

Location: "Deleting an LVM Partition (Physical Volume)" on page 38
Change: This section is new.

A.8 June 21, 2010


Updates were made to the following sections. The changes are explained below.

Section A.8.1, "LVM Configuration," on page 153
Section A.8.2, "Managing Multipath I/O," on page 154
Section A.8.3, "Managing Software RAIDs 6 and 10 with mdadm," on page 154
Section A.8.4, "Mass Storage over IP Networks: iSCSI," on page 154

A.8.1 LVM Configuration
Location: "Creating LVM Partitions" on page 29
Change: Details were added to the procedure.


A.8.2 Managing Multipath I/O


Location: "Configuring User-Friendly Names or Alias Names in /etc/multipath.conf" on page 66
Change: Using user-friendly names for the root device can result in data loss. Added alternatives from TID 7001133: Recommendations for the usage of user_friendly_names in multipath configurations (http://www.novell.com/support/search.do?cmd=displayKC&docType=kc&externalId=7001133).

A.8.3 Managing Software RAIDs 6 and 10 with mdadm


Location: "Far Layout" on page 107
Change: Errata in the example were corrected.

A.8.4 Mass Storage over IP Networks: iSCSI


Location: "Hotplug Doesn't Work for Mounting iSCSI Targets" on page 142
Change: This section is new.

A.9 May 2010 (SLES 11 SP1)


Updates were made to the following sections. The changes are explained below.

Section A.9.1, "Managing Multipath I/O for Devices," on page 155
Section A.9.2, "Mass Storage over IP Networks: iSCSI," on page 155
Section A.9.3, "Software RAID Configuration," on page 156
Section A.9.4, "What's New," on page 156


A.9.1 Managing Multipath I/O for Devices


Location: "Using LVM2 on Multipath Devices" on page 52
Change: The example in Step 3 on page 52 was corrected.

Location: "SAN Timeout Settings When the Root Device Is Multipathed" on page 53
Change: This section is new.

Location: Section 7.3.2, "Multipath I/O Management Tools," on page 59
Change: The file list for a package can vary for different server architectures. For a list of files included in the multipath-tools package, go to the SUSE Linux Enterprise Server Technical Specifications > Package Descriptions Web page (http://www.novell.com/products/server/techspecs.html?tab=1), find your architecture and select Packages Sorted by Name, then search on multipath-tools to find the package list for that architecture.

Location: "Preparing SAN Devices for Multipathing" on page 62
Change: If the SAN device will be used as the root device on the server, modify the timeout settings for the device as described in Section 7.2.6, "SAN Timeout Settings When the Root Device Is Multipathed," on page 53.

Location: "Verifying the Setup in the /etc/multipath.conf File" on page 64
Change: Added example output for -v3 verbosity.

Location: "Enabling Multipath I/O at Install Time on an Active/Active Multipath Storage LUN" on page 80
Change: This section is new.

Location: "Enabling Multipath I/O at Install Time on an Active/Passive Multipath Storage LUN" on page 80
Change: This section is new.

A.9.2 Mass Storage over IP Networks: iSCSI


Location: Step 7g in Section 13.2.2, "Creating iSCSI Targets with YaST," on page 132
Change: In the YaST > Network Services > iSCSI Target function, the Save option allows you to export the iSCSI target information, which makes it easier to provide this information to consumers of the resources.

Location: Section 13.4, "Troubleshooting iSCSI," on page 141
Change: This section is new.


A.9.3 Software RAID Configuration


Location: "For More Information" on page 94
Change: The Software RAID HOW-TO has been deprecated. Use the Linux RAID wiki (https://raid.wiki.kernel.org/index.php/Linux_Raid) instead.

A.9.4 What's New
Location: Section 2.1, "What's New in SLES 11 SP1," on page 19
Change: This section is new.

A.10 February 23, 2010


Updates were made to the following sections. The changes are explained below.

Section A.10.1, "Configuring Software RAID for the Root Partition," on page 156
Section A.10.2, "Managing Multipath I/O," on page 156

A.10.1 Configuring Software RAID for the Root Partition


Location: "Prerequisites for the Software RAID" on page 95
Change: Corrected an error in the RAID 0 definition.

A.10.2 Managing Multipath I/O


Location: "Scanning for New Devices without Rebooting" on page 84
Change: Added information about using the rescan-scsi-bus.sh script to scan for devices without rebooting.

Location: "Scanning for New Partitioned Devices without Rebooting" on page 86
Change: Added information about using the rescan-scsi-bus.sh script to scan for devices without rebooting.


A.11 December 1, 2009
Updates were made to the following sections. The changes are explained below.

Section A.11.1, "Managing Multipath I/O for Devices," on page 157
Section A.11.2, "Resizing File Systems," on page 157
Section A.11.3, "What's New," on page 157

A.11.1 Managing Multipath I/O for Devices


Location: Section 7.2.3, "Using LVM2 on Multipath Devices," on page 52; Section 7.8, "Configuring Multipath I/O for an Existing Software RAID," on page 82
Change: The -f mpath option changed to -f multipath: mkinitrd -f multipath

Location: prio_callout in Table 7-6, "Multipath Attributes," on page 72
Change: Multipath prio_callouts are located in shared libraries in /lib/libmultipath/lib*. By using shared libraries, the callouts are loaded into memory on daemon startup.

A.11.2 Resizing File Systems


Location: Section 5.1.1, "File Systems that Support Resizing," on page 39
Change: The resize2fs utility supports online or offline resizing for the ext3 file system.

A.11.3 What's New
Location: Section 2.2.10, "Location Change for Multipath Tool Callouts," on page 24
Change: This section is new.

Location: Section 2.2.11, "Change from mpath to multipath for the mkinitrd -f Option," on page 24
Change: This section is new.


A.12 October 20, 2009


Updates were made to the following sections. The changes are explained below.

Section A.12.1, "LVM Configuration," on page 158
Section A.12.2, "Managing Multipath I/O for Devices," on page 158
Section A.12.3, "What's New," on page 158

A.12.1 LVM Configuration
Location: Section 4.1, "Understanding the Logical Volume Manager," on page 27
Change: In the YaST Control Center, select System > Partitioner.

A.12.2 Managing Multipath I/O for Devices


Location: "Blacklisting Non-Multipathed Devices in /etc/multipath.conf" on page 69
Change: The keyword devnode_blacklist has been deprecated and replaced with the keyword blacklist.

Location: "Configuring Default Multipath Behavior in /etc/multipath.conf" on page 70
Change: Changed getuid_callout to getuid.

Location: "Understanding Priority Groups and Attributes" on page 72
Change: Changed getuid_callout to getuid.

Location: "path_selector" on page 75
Change: Added descriptions of least-pending, length-load-balancing, and service-time options.

A.12.3 What's New
Location: Section 2.2.9, "Advanced I/O Load-Balancing Options for Multipath," on page 24
Change: This section is new.


A.13 August 3, 2009
Updates were made to the following section. The change is explained below.

Section A.13.1, "Managing Multipath I/O," on page 159

A.13.1 Managing Multipath I/O


Location: Section 7.2.5, "Using --noflush with Multipath Devices," on page 53
Change: This section is new.

A.14 June 22, 2009


Updates were made to the following sections. The changes are explained below.

Section A.14.1, "Managing Multipath I/O," on page 159
Section A.14.2, "Managing Software RAIDs 6 and 10 with mdadm," on page 159
Section A.14.3, "Mass Storage over IP Networks: iSCSI," on page 160

A.14.1 Managing Multipath I/O


Location: Section 7.7, "Configuring Multipath I/O for the Root Device," on page 79
Change: Added Step 4 on page 82 and Step 6 on page 82 for System Z.

Location: Section 7.10, "Scanning for New Partitioned Devices without Rebooting," on page 86
Change: Corrected the syntax for the command lines in Step 2.

Location: Section 7.10, "Scanning for New Partitioned Devices without Rebooting," on page 86
Change: Step 7 on page 86 replaces old Step 7 and Step 8.

A.14.2 Managing Software RAIDs 6 and 10 with mdadm


Location: Section 10.4, "Creating a Degraded RAID Array," on page 108
Change: To see the rebuild progress while being refreshed every second, enter

watch -n 1 cat /proc/mdstat


A.14.3 Mass Storage over IP Networks: iSCSI


Location: Section 13.3.1, "Using YaST for the iSCSI Initiator Configuration," on page 137
Change: Re-organized material for clarity. Added information about how to use the settings for the Start-up option for iSCSI target devices:

Automatic: This option is used for iSCSI targets that are to be connected when the iSCSI service itself starts up. This is the typical configuration.

Onboot: This option is used for iSCSI targets that are to be connected during boot; that is, when root (/) is on iSCSI. As such, the iSCSI target device will be evaluated from the initrd on server boots.

A.15 May 21, 2009


Updates were made to the following section. The changes are explained below.

Section A.15.1, "Managing Multipath I/O," on page 160

A.15.1 Managing Multipath I/O


Location: "Storage Arrays That Are Automatically Detected for Multipathing" on page 55
Change: Testing of the IBM zSeries device with multipathing has shown that the dev_loss_tmo parameter should be set to 90 seconds, and the fast_io_fail_tmo parameter should be set to 5 seconds. If you are using zSeries devices, you must manually create and configure the /etc/multipath.conf file to specify the values. For information, see "Configuring Default Settings for zSeries in /etc/multipath.conf" on page 70.

Location: Section 7.3.1, "Device Mapper Multipath Module," on page 57
Change: Multipathing is supported for the /boot device in SUSE Linux Enterprise Server 11 and later.

Location: "Configuring Default Settings for zSeries in /etc/multipath.conf" on page 70
Change: This section is new.

Location: Section 7.7, "Configuring Multipath I/O for the Root Device," on page 79
Change: DM-MP is now available and supported for /boot and /root in SUSE Linux Enterprise Server 11.

