GA32-0963-03
Note
Before using this information and the product it supports, read the information in Notices on page 269.
Contents
Figures . . . . . . . . . . . . . . vii
Tables . . . . . . . . . . . . . . . ix
Introduction
Storage Manager software
Storage Manager software components
Supported controller firmware
Types of installation configurations
Network configuration
Reviewing a sample network configuration
Setting up a management station
Setting up a network-managed (out-of-band) configuration
Setting up a host-agent-managed (in-band) configuration
Direct-attached and SAN-attached configurations
Setting up a direct-attached configuration
Setting up a SAN-attached configuration
Setting up controller addresses for software installation
Setting up IP addresses for storage subsystem controllers
Setting up an IP address with the DHCP/BOOTP server
Identifying Ethernet MAC addresses
Assigning static TCP/IP addresses to a storage subsystem using factory-default management port TCP/IP address
Assigning static TCP/IP addresses to a storage subsystem using an in-band management connection
Assigning static TCP/IP addresses using the storage subsystem controller serial port Service Interface
Sample configuration
Software requirements
Management station
Host (VMware ESX Server)
Hardware requirements
VMware ESX Server restrictions
Other VMware ESX Server host information
Configuring storage subsystems for VMware ESX Server
Cross-connect configuration for VMware connections
Mapping LUNs to a storage partition on VMware ESX Server
Verifying the storage configuration for VMware
Configuration limitations
Other PSSP and GPFS usage notes
GPFS, PSSP, and HACMP cluster configuration diagrams
Using cluster services on HP-UX systems
Using cluster services on Solaris systems
General Solaris requirements
System dependencies
Adding RDAC IDs
Single points of failure
Notices . . . . . . . . . . . . . . 269
Trademarks . . . . . . . . . . . . 271
Important notes . . . . . . . . . . 271
Glossary . . . . . . . . . . . . . 273
Index . . . . . . . . . . . . . . 285
Tables
Security authorizations . . . 158
Full disk encryption terminology . . . 159
Proxy configuration file properties . . . 162
Support Monitor icons . . . 199
Support Monitor messages and descriptions . . . 202
Critical events . . . 205
Problem index . . . 223
Recovery Step 2 . . . 226
Recovery Step 4 . . . 226
Recovery Step 5 . . . 226
DDC MEL events . . . 227
Disk array errors . . . 227
QLogic model QLA234x, QLA24xx, QLE2462, QLE2460, QLE2560, QLE2562, QMI2572, QMI3572, QMI2582 . . . 234
QLogic model QL220x (for BIOS V1.81) host bus adapter settings by operating system . . . 237
Configuration settings for FCE-1473/FCE-6460/FCX2-6562/FCC2-6562 . . . 239
Configuration settings for FCE-1063/FCE2-1063/FCE-6410/FCE2-6410 . . . 240
Configuration settings for FCI-1063 . . . 241
Configuration settings for FC64-1063 . . . 242
Configuration settings for QL2342 . . . 243
Attributes for dar devices . . . 261
Attributes for dac devices . . . 262
Attributes for hdisk devices . . . 263
Example 1: Displaying the attribute settings for a dar . . . 265
Example 2: Displaying the attribute settings for a dac . . . 265
Example 3: Displaying the attribute settings for an hdisk . . . 266
Storage Manager alternate keyboard operations . . . 268
1. The terms host and host computer are used interchangeably throughout this document.
2. A host computer can also function as a management station.
Related documentation
In addition to the information in this document, the resources that are described in the following sections
are available.
To access these documents and other IBM System Storage documentation from the IBM Support Portal,
complete the following steps.
Note: The first time that you access the IBM Support Portal, you must choose the product category,
product family, and model numbers for your storage subsystems. The next time you access the IBM
Support Portal, the products you selected initially are preloaded by the website, and only the links for
your products are displayed. To change or add to your product list, click the Manage my product lists
link.
1. Go to http://www.ibm.com/support/entry/portal.
2. Under Choose your products, expand Hardware.
3. Click System Storage > Disk systems > Mid-range disk systems (for DS4000 or DS5000 storage
subsystems) or Entry-level disk systems (for DS3000 storage subsystems), and check the box for your
storage subsystem.
4. Under Choose your task, click Documentation.
5. Under See your results, click View your page.
6. In the Product documentation box, click the link for the publication that you want to access.
Introduction
The IBM System Storage DS Storage Manager consists of a set of client and host tools that you can use to
manage the IBM DS3000, DS4000, and DS5000 Storage Subsystems from a management station.
The Storage Manager is supported on the following operating systems:
v AIX
v Windows 2003 and Windows 2008
v Linux (RHEL and SLES)
v HP-UX
v Solaris
The DS3000, DS4000, and DS5000 Storage Subsystems are also supported when they are attached to
NetWare, Apple Mac OS, VMware ESX Server, and System p Virtual IO Server (VIOS) hosts, as well as on
i5/OS as a guest client on VIOS. IBM does not provide host software for these operating systems. You
must install IBM DS Storage Manager on a management station which has one of the operating systems
listed above installed.
Information about i5/OS support can be found at the following website:
www.ibm.com/systems/i/os/
For additional information, see the System Storage Interoperation Center at the following website:
http://www.ibm.com/systems/support/storage/config/ssic
Network configuration
Before you begin installing the Storage Manager software, make sure that the network components are set
up and operating properly and that you have all of the host and controller information that is necessary
for the correct operation of the software.
Note: When you connect the storage subsystem to an Ethernet switch, set the switch port settings to
auto-negotiate.
Note: Throughout the remaining steps, you must record some information for future use, such as the hardware Ethernet and IP addresses.
4. Determine the hardware Ethernet MAC address for each controller in storage subsystems connected to the network. If you are using a default controller IP address, go to step 6. Otherwise, obtain the TCP/IP address and host name for each of the controllers in the storage subsystems on the network from the network administrator.
5. Set up the DHCP/BOOTP server to provide network configuration information for a specific controller. If you are using static controller IP addresses, skip this step.
6. Verify that the TCP/IP software is installed.
7. Set up the host or domain name server (DNS) table.
8. Turn on the power to the devices that are connected to the network.
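As a simple aid for that record keeping, a data sheet of roughly the following form can be filled in for each storage subsystem (the MAC and IP values shown are examples only):

Storage subsystem name:             ____________________
Controller A Ethernet MAC address:  00.a0.b8.20.00.d8
Controller A management IP address: 192.168.128.101
Controller B Ethernet MAC address:  00.a0.b8.20.00.d9
Controller B management IP address: 192.168.128.102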
If you use Fibre Channel HBAs in your SAN-attached configuration, the HBA and the storage subsystem
host port connections should be isolated in fabric zones to minimize the possible interactions between the
ports in a SAN fabric environment. Multiple storage subsystems can be configured to the same set of
HBAs through a Fibre Channel, SAS, or Ethernet switch. For more information about Fibre Channel
zoning schemes, see "Connecting HBAs in a Fibre Channel switch environment" on page 94. Similar zoning schemes can also be implemented with SAS and Ethernet switches.
Attention: A single-HBA configuration can result in loss of data access in the event of a path failure. If
you have a single HBA in a SAN-attached configuration, both controllers in the storage subsystem must
be connected to the HBA through a switch, and both controllers must be within the same SAN zone as
the HBA.
Complete the following steps to set up a SAN-attached configuration:
1. Connect the HBAs to the switch or switches.
2. Connect the storage subsystems to the switch or switches.
3. Set the required zoning or VLANs on the Fibre Channel switches or Ethernet switches, if applicable.
4. Use the Storage Manager automatic discovery feature to make sure that the storage subsystem is
discovered.
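To illustrate the zone isolation described above, a minimal single-initiator zoning layout for one host with two HBAs might look like the following sketch (zone, HBA, and port names are examples only):

Zone_HostA_HBA1: HostA_HBA1, controller A host port 1, controller B host port 1
Zone_HostA_HBA2: HostA_HBA2, controller A host port 2, controller B host port 2

Each HBA still reaches both controllers, but the two HBAs are kept in separate zones so that they do not interact with each other in the fabric.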
configured and DHCP is enabled on the other port, the most recently supplied or obtained gateway
address will be used. Generally, this is the gateway address supplied by the DHCP server unless the
manual configuration for the other port is changed. In this case, the gateway address should be set to the
value provided by the controller, which should match the gateway address obtained from the DHCP
server. If DHCP is enabled on both ports, the DHCP servers attached to the two ports should be
configured to supply the same gateway address. If the DHCP servers apply different gateway addresses,
the most recently obtained gateway address will be used for both ports.
Any changes to remote login access affect both ports. In other words, if remote login access is enabled or
disabled on one port, it is also enabled or disabled on the other port. As with the gateway address, the
most recent configuration applied for remote login applies to both ports. For example, if remote login
access is manually enabled on port 1, it will also be enabled for port 2. If a DHCP server subsequently
supplies configuration parameters for port 2 that includes disabling remote login access, it will be
disabled for both ports.
If a controller has two management ports, the two Ethernet ports must be on different subnets. If both ports are on the same subnet, or if they have the same network address (the logical AND of the IP address and the subnet mask), a Subnet Configuration Error event notification will occur.
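For example, a dual management port addressing scheme such as the following satisfies this rule (the addresses are illustrative; use the values that your network administrator assigns):

Management port 1: IP address 192.168.128.101, subnet mask 255.255.255.0 (network address 192.168.128.0)
Management port 2: IP address 192.168.129.101, subnet mask 255.255.255.0 (network address 192.168.129.0)

Because the logical AND of each IP address with its subnet mask yields a different network address, no Subnet Configuration Error event is generated.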
Identifying the Ethernet MAC addresses on a DS4400 or DS4500 storage subsystem: To identify the
hardware Ethernet MAC addresses for DS4400 and DS4500 storage subsystems, complete the following
steps:
1. Remove the front bezel from the storage subsystem and carefully pull the bottom of the bezel out to
release the pins. Then slide the bezel down.
2. On the front of each controller, look for a label with the hardware Ethernet MAC address. The
number is in the form xx.xx.xx.xx.xx.xx (for example, 00.a0.b8.20.00.d8).
3. Record each Ethernet MAC address.
4. To replace the bezel, slide the top edge under the lip on the chassis. Then push the bezel bottom until
the pins snap into the mounting holes.
Identifying the Ethernet MAC addresses on a DS4100 or DS4300 storage subsystem: To identify the hardware Ethernet MAC address for machine types 3542 (DS4100) or 1722 (DS4300), complete the following steps:
1. Locate the Ethernet MAC address at the back of the unit, under the controller Fibre Channel host
ports. The number is in the form xx.xx.xx.xx.xx.xx (for example, 00.a0.b8.20.00.d8).
2. Record each Ethernet MAC address.
2. If the controller BAUD rate setting is different from the terminal setting, send a "break" character to
cause the controller to switch to the next available BAUD rate setting. Repeat sending the "break"
character until the "Press space to set the BAUD rate" message is displayed.
v Controller firmware version 7.77.xx.xx or higher and its associated NVSRAM files installed.
Complete the following steps to view and assign a new IP address to the controller management port:
1. Press Enter. If this character (->) is displayed, type Exit and press Enter. Otherwise, continue to the next step.
2. In the terminal emulator session, send the "break" character. For example, use CNTL+BREAK for
Microsoft Windows Hyperterm or ALT+B for Procomm.
3. Enter the uppercase letter S and press Enter when the following message is displayed: Press within 5 seconds: <S> for Service Interface, <BREAK> for baud rate.
4. Enter the password DSStorage (case sensitive) within 60 seconds of when this message is displayed:
Enter the password to access the Service Interface (60 second timeout).
Note: If the controller does not have controller firmware version 7.77.xx.xx or higher and its
associated NVSRAM files installed, this password will not be accepted, and you must follow one of
the two methods to change the IP configuration of the controller Ethernet ports. See "Assigning static TCP/IP addresses to a storage subsystem using factory-default management port TCP/IP address" on page 8 and "Assigning static TCP/IP addresses to a storage subsystem using an in-band management connection" on page 9 for more information.
5. Enter 1 or 2 to display or change the IP configuration when the following menu is displayed:
Service Interface Main Menu
==============================
1) Display IP Configuration
2) Change IP Configuration
3) Reset Storage Array Administrator Password
Q) Quit Menu
If option 2 is chosen, follow the prompt to set the IP configuration for the port that you selected. You
must reboot the controller for the settings to take effect.
Note: You must perform these steps on both controllers.
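For orientation, a session with the Service Interface might look roughly like the following sketch (assuming controller firmware 7.77.xx.xx or later; the output is abbreviated and the exact wording can vary by firmware level):

Press within 5 seconds: <S> for Service Interface, <BREAK> for baud rate.
S
Enter the password to access the Service Interface (60 second timeout):
DSStorage
Service Interface Main Menu
==============================
1) Display IP Configuration
2) Change IP Configuration
3) Reset Storage Array Administrator Password
Q) Quit Menu
2
(follow the prompts to select the Ethernet port and enter the new IP address, subnet mask,
and gateway, then reboot the controller so that the settings take effect)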
Figure: Parts of the Enterprise Management window, showing the window title bar, the toolbar (icons for common commands), the Device tab (storage subsystem status), the Setup tab (shortcuts to Storage Manager tasks), shortcuts to common tasks, and the status bar.
Tree view
The tree view provides a hierarchical view of the nodes in the storage subsystem. The tree view shows
two types of nodes:
v Discovered Storage Subsystems
v Unidentified Storage Subsystems
The Discovered Storage Subsystems node and the Unidentified Storage Subsystems node are child nodes
of the Management Station node.
The Discovered Storage Subsystems node has child nodes that represent the storage subsystems that are
currently managed by the management station. Each storage subsystem is labeled with its machine name
and is always present in the tree view. When storage subsystems and hosts with attached storage
subsystems are added to the Enterprise Management window, the storage subsystems become child
nodes of the Discovered Storage Subsystems node.
Note: If you move the mouse over the Discovered Storage Subsystems node, a tooltip appears displaying
the controller IP address.
The Unidentified Storage Subsystems node shows storage subsystems that the management station cannot access because of network connection problems, a subsystem that is turned off, or a non-existent name.
You can perform these actions on the nodes in the tree view:
v Double-click the Management Station node and the Discovered Storage Subsystems node to expand or
collapse the view of the child nodes.
v Double-click a storage subsystem node to launch the Subsystem Management window for that storage
subsystem.
v Right-click the Discovered Storage Subsystems node to open a menu that contains the applicable
actions for that node.
The right-click menu for the Discovered Storage Subsystems node contains these options:
v Add Storage Subsystem
v Automatic Discovery
v Configure Alerts
v Refresh
These options are also included with the other options in the Edit and Tools menu options. For more
information, see the Using the Enterprise Management window online help topic.
Table view
In the table view, each storage subsystem is a single row in the table. The columns in the table view show
data about the managed storage subsystem.
Table 1. Data shown in the table view
Name
Type
Status
An icon and a text label that report the true status of the managed storage subsystem.
Management Connections
The following connection types are possible:
v Out-of-Band: this storage subsystem is an out-of-band storage subsystem.
v In-Band: this storage subsystem is an in-band storage subsystem that is managed through a single host.
v Out-of-Band, In-Band: this storage subsystem is a storage subsystem that is both out-of-band and in-band.
Click Details to see more information about any of these connections.
Comment
Any comments that you have entered about the specific managed storage subsystem.
Sort the rows in the table view in ascending order or descending order by either clicking a column
heading or by selecting one of these menu options:
v View > By Name
v View > By Status
v View > By Management Connection
v View > By Comment
To change the way that managed storage subsystems appear in the table view, complete one of the
following actions:
v To show all of the known managed storage subsystems in the table view, select the Management
Station node.
v To show only one storage subsystem in the table view, select that storage subsystem node in the tree view.
Note: Selecting an Unidentified node in the tree view shows an empty table view.
v Configure drives from your storage subsystem capacity, define hosts and host groups, and grant host
or host group access to sets of drives called storage partitions
v Monitor the health of storage subsystem components and report a detailed status using applicable icons
v Access the applicable recovery procedures for a failed logical component or a failed hardware
component
v View the event log for the storage subsystem
v View profile information about hardware components, such as controllers and drives and get a
physical view of the drives in the hardware enclosures
v Access controller-management options, such as changing ownership of logical drives or placing a
controller online or offline
v Access drive-management options, such as assigning hot spares and locating the drive
v Monitor storage subsystem performance
v Configure copy services like Flashcopy, VolumeCopy, and Remote Mirroring
If the storage subsystem has controller firmware version 7.70.xx.xx or later, its Subsystem Management window cannot be opened unless a strong password is provided. A strong password must be between 8 and 30 characters and contain at least one number, one lower-case letter, one upper-case letter, and one non-alphanumeric character (for example, < > ! @ + #). Spaces are not permitted, and the password is case-sensitive.
In storage subsystems with controller firmware earlier than 7.70.xx.xx, you are prompted to provide this password, if none is specified for the storage subsystem, whenever you attempt to open a Subsystem Management window for that storage subsystem. IBM recommends creating a subsystem management password to prevent unauthorized changes to the Subsystem Management configuration.
Figure: Parts of the Subsystem Management window, showing the window title bar, the storage subsystem name and status, the Subsystem Management tabs, the toolbar (icons for common commands), and shortcuts to common tasks.
Unconfigured Capacity
This node represents the storage subsystem capacity that is not configured
into an array.
Note: Multiple Unconfigured Capacity nodes might appear if your
storage subsystem contains mixed drive types. Each drive type has an
associated Unconfigured Capacity node shown under the Total
Unconfigured Capacity node if unassigned drives are available in the
drive enclosure.
Controller Status
The status of each controller is indicated by an icon on the Physical tab. The following table describes the
controller icons.
Table 3. Controller status icons
An icon is shown for each of the following controller statuses: Online, Optimal; Offline; Service Mode; and Slot Empty.
Association
v The blue association dot that is shown adjacent to a controller in the controller enclosure indicates the
current owner of a selected logical drive on the Logical tab.
v The blue association dot that is adjacent to a drive indicates that the drive is associated with a selected
logical drive on the Logical tab.
View
The View button on each enclosure shows the status of the secondary components within the enclosure.
Storage enclosures
For each storage enclosure that is attached to the storage subsystem, a storage enclosure appears on the
Physical tab.
If your storage subsystem contains mixed drive types, a drive type icon appears on the left of the storage
enclosure to indicate the type of drives in the enclosure. The following table describes the different drive
type icons that might appear.
v This storage enclosure contains only Fibre Channel drives.
v This storage enclosure contains only Full Disk Encryption (FDE) security-capable drives.
v This storage enclosure contains only Serial Attached SCSI (SAS) drives.
v This storage enclosure contains only Serial ATA (SATA) drives.
Topology pane
The Topology pane shows a tree-structured view of logical nodes that are related to storage partitions.
Click the plus (+) sign or the minus (-) sign adjacent to a node to expand or collapse the view. You can
right-click a node to display a menu that contains the applicable actions for that node.
The storage subsystem, or the root node, has four types of child nodes.
Table 5. Types of nodes in the Topology pane
The child nodes of the root node are: Undefined Mappings, Default Group, Host Group, and Host.
The Storage Partition icon, when it is present in the Topology pane, indicates that a storage partition has
been defined for the Default Group, a host group, or a host. This icon also appears in the status bar when
storage partitions have been defined.
The Defined Mappings pane shows the following information for each mapping:
Logical Drive Name
The user-supplied logical drive name. The factory-configured access logical drive also appears in this column.
Note: An access logical drive mapping is not required for a storage subsystem with an in-band connection and might be removed.
Accessible by
The Default Group, a defined host group, or a defined host that has been granted access to the logical drive in the mapping.
LUN
The LUN that is assigned to the specific logical drive that the host or hosts use to access the logical drive.
Type
The type of logical drive: standard logical drive or snapshot logical drive.
You can right-click a logical drive name in the Defined Mappings pane to open a menu. The menu
contains options to change and remove the mappings.
The information that is shown in the Defined Mappings pane varies according to which node you select
in the Topology pane, as shown in the following table.
Table 7. Node information by type of node
Default Group node
All mappings that are currently defined for the Default Group (if any).
Host Group node
All mappings that are currently defined for the Host Group.
Host node that is a child node of a Host Group node
All mappings that are currently defined for the Host Group, plus any mappings that are specifically defined for a specific host.
HBA Host Ports node or individual host port node outside of the Default Group
All mappings that are currently defined for the host that is associated with the HBA host port.
Preinstallation requirements
This section describes the requirements that must be met before the Storage Manager software with the
Support Monitor tool can be installed.
For Storage Manager installations on UNIX, your system must have graphics capability to use the
installation wizard. If your system does not have graphics capability, you can use the shell command to
install Storage Manager without graphics. See Installing Storage Manager and Support Monitor with a
console window in Linux, AIX, HP-UX, and Solaris on page 27 for more information.
You can also skip this section and install the stand-alone host software packages by using the procedures
that are described in Installing Storage Manager packages manually on page 28. All of the packages are
included with the installation DVD.
The supported management-station operating systems for Storage Manager are:
v AIX
v Windows 7, Windows Vista, Windows XP (Service Pack 2), Windows 2008, Windows 2008 R2, and
Windows 2003
v Linux: RHEL and SLES (x86, x86_64, Linux on Power (ppc) and IA64 editions)
v HP-UX (PA-RISC and IA64 editions)
v SUN Solaris (SPARC and x86 editions)
The Support Monitor tool must be installed on the same management stations as the Storage Manager
software. The supported management station operating systems for Support Monitor are:
v Microsoft Windows 2003 (Service Pack 2), Windows 2008, Windows 2008 R2, Windows XP (Service
Pack 2), and Windows Vista (x86, x64, and IA64 editions)
v Red Hat Enterprise Linux 4 and 5 (x86, x86_64, and IA64 editions)
v SUN Solaris 10 (Sparc and x86 editions)
v IBM AIX 5.2, AIX 5.3 and AIX 6.1.
Important: If a MySQL database application or Apache Tomcat web server application is installed on the
management station, it must be uninstalled before the Support Monitor can be installed.
Note: With Storage Manager version 10.50.xx.xx, controller firmware 5.41.xx.xx and later are supported.
Controller firmware versions earlier than 5.41.xx.xx are no longer supported or managed.
The management station must also meet the following hardware, software, and configuration
requirements:
v Microprocessor speed of 1.6 GHz or faster.
v Minimum of 2 GB of system memory. If any other applications are installed in the management station,
additional memory might be required.
v Minimum of 1.5 GB of free disk space for the tool and for the saved support bundles.
v The TCP/IP stack must be enabled. If Support Monitor is installed, the management station Ethernet
port TCP/IP address must be static, and its Ethernet port must be on the same Ethernet subnet as the
monitored storage subsystem Ethernet management ports. DHCP server IP address is not supported. If
Support Monitor is not installed, the IP address does not have to be static.
v The following requirements apply only to the Support Monitor tool:
Make sure that your storage subsystem meets the subsystem model and controller firmware version
requirements that are listed in the following table.
Table 8. Storage Monitor-compatible subsystems and controller firmware
Storage subsystem: Storage Monitor compatibility
DS3200
DS3300
DS3400
DS3500: Yes
DS3950: Yes
DS4100: No
DS4200: Yes
DS4300: Yes
DS4400: No
DS4500: Yes
DS4700: Yes
DS4800: Yes
DS5020: Yes
DS5100: Yes
DS5300: Yes
DCS3700: Yes
To use Support Monitor, one of the following web browsers must be installed:
- Internet Explorer 7.0 or later
- Netscape version 6.0 or later
- Mozilla version 1.0 or later
- Firefox version 3.0 or later
Any installed MySQL database application on the management station must be manually
uninstalled before you install the Support Monitor tool.
Any installed Apache Tomcat web server software on the management station must be manually
uninstalled before you install the Storage Manager Profiler Support Monitor tool.
The Support Monitor uses port 162 by default to receive event data from the server. To prevent port
conflicts with other applications running on the server, make sure that no other applications use
port 162.
2. Double-click the IBM DS Storage Manager package (or SMIA) executable icon.
3. Follow the instructions in the Installation wizard to install the Storage Manager software with the
Storage Manager Profiler Support Monitor tool. If you accept the default installation directory, the
Storage Manager Profiler Support Monitor is installed in C:\Program Files...\IBM_DS\
IBMStorageManagerProfiler Server.
4. When you select the installation type, you can choose one of the following options:
Attention: Storage Manager SMIA package version 10.77.xx.xx and later will not install the MPIO
DSM driver to support multipath in the host installation type or in the typical installation type when
the SMIA package is installed in the server version of Microsoft Windows operating systems. There is
a separate SMIA package for installing the MPIO DSM.
v Typical (Full) Installation: Installs Storage Manager software packages that are necessary for both
managing the storage subsystem from the host and providing I/O connectivity to the storage
subsystem
v Management Station: Installs the packages that are required to manage and monitor the storage
subsystem (SMclient)
v Host: Installs the packages that are required to provide I/O connectivity to the storage subsystem
(SMagent and SMutil)
v Custom: Allows you to select which packages you want to install. To install the Storage Manager without the Support Monitor tool, select the Custom installation and clear the Support Monitor check box.
5. Configure any antivirus software not to scan the MySQL directory. In Windows operating-system
environments, the directory is:
C:\Program Files...\IBM_DS\IBMStorageManagerProfiler Server\mysql
6. Install the MPIO DSM driver as required to support multipath by double-clicking the IBM DS Storage Manager MPIO DSM package and following the instructions in the installation wizard.
Note: This step applies to Storage Manager version 10.77.xx.xx and later only.
7. Click Start > All Programs > DS Storage Manager 10 client > Storage Manager 10 client to start the
Storage Manager client program. Add the storage subsystems that you want to manage and monitor
in the Enterprise Management window of the Storage Manager Client program.
During the installation, the question Automatically Start Monitor? is displayed. This refers to the
Microsoft Windows Event Monitor service. The Event Monitor must be enabled for both the automatic
ESM synchronization and the automatic support bundle collection of critical events. To enable the Event
Monitor, select Automatically Start Monitor.
The only time that you have to configure the Storage Manager Profiler Support Monitor tool is when you
want to change the support bundle collection time for the monitored storage subsystems. The Storage
Manager Profiler Support Monitor tool automatically collects the support bundles from the storage
subsystems that were added to the Enterprise Management window of the Storage Manager Client
program daily at 2 a.m.
To complete the Storage Manager installation, see Completing the Storage Manager installation on page
31.
Installing Storage Manager and Support Monitor on Linux, AIX, HP-UX, or Solaris
If your management station has a Unix-based operating system, such as Linux, AIX, HP-UX, or Solaris,
complete the following steps to install Storage Manager (including the Support Monitor tool) with the
installation wizard:
1. Download the files from the Storage Manager DVD, or from the System Storage Disk Support
website, to the root file system on your system.
2. Log in as root.
3. If the Storage Manager software package .bin file does not have executable permission, use the chmod
+x command to make it executable.
4. Execute the .bin file and follow the instructions in the Installation wizard to install the software.
If you accept the default installation directory, the Storage Manager Profiler Support Monitor is
installed in /opt/IBM_DS/IBMStorageManagerProfiler_Server.
When you select the installation type, you can choose one of the following options:
v Typical (Full) Installation: Installs all Storage Manager software packages that are necessary for both managing the storage subsystem from this host and providing I/O connectivity to the storage subsystem
v Management Station: Installs the packages that are required to manage and monitor the storage
subsystem (SMruntime and SMclient)
v Host: Installs the packages that are required to provide I/O connectivity to the storage subsystem
(SMruntime, SMagent, and SMutil)
v Custom: Allows you to select which packages you want to install. To install the Storage Manager without the Support Monitor tool, select the Custom installation and clear the Support Monitor check box.
5. Configure any antivirus software to not scan the MySQL directory. In Unix-type operating-system
environments, the directory is:
/opt/IBM_DS/IBMStorageManagerProfiler_Server/mysql
6. Type SMclient in the console window and press Enter to start the Storage Manager Client program.
Add the storage subsystems that you want to manage and monitor to the Enterprise Management
window of the Storage Manager Client program.
During the installation, the question Automatically Start Monitor? is displayed. This refers to the Event
Monitor service. The Event Monitor must be enabled for both the automatic ESM synchronization and the
automatic support bundle collection of critical events. To enable the Event Monitor, select Automatically
Start Monitor.
The only time that you have to configure the Storage Manager Profiler Support Monitor tool is when you
want to change the support bundle collection time for the monitored storage subsystems. The Storage
Manager Profiler Support Monitor tool automatically collects the support bundles from the storage
subsystems that were added to the Enterprise Management window of the Storage Manager Client
program daily at 2 a.m.
To complete the Storage Manager installation, see Completing the Storage Manager installation on page
31.
Deutsch
English
Español
Français
Italiano
Português (Brasil)
===============================================================================
Installing...
-------------
[==================|==================|==================|==================]
[------------------|------------------|------------------|------------------]
... ... ...
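For reference, console-mode installation is normally started by making the installer file executable and running it with the console option. A minimal sketch, assuming an installer file named SMIA-LINUX-10.xx.xx.xx.bin (the actual file name depends on your operating system and Storage Manager release; if your package does not accept the -i console option, check its readme file):

# chmod +x SMIA-LINUX-10.xx.xx.xx.bin
# ./SMIA-LINUX-10.xx.xx.xx.bin -i console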
2. There is no manual installation option for Windows operating systems. For all installations of Storage
Manager on Windows, the individual software packages are included in a single Storage Manager
software installer.
The individual Storage Manager software packages are: SMruntime, SMesm, SMclient, SMagent, and SMutil.
SMclient is dependent on SMruntime, which provides the Java runtime environment for SMclient. SMruntime must be installed first.
Package name and install command, by operating system:
AIX: SMruntime.AIX-10.xx.xx.xx.bff
HP-UX: SMruntime_10.xx.xx.xx.depot
#swinstall -s /cdrom/HP-UX/SMruntime_10.xx.xx.xx.depot
Solaris: SMruntime-SOL-10.xx.xx.xx.pkg
#pkgadd -d path/SMruntime-SOL-10.xx.xx.xx.pkg
Linux on POWER: SMruntime-LINUX-10.xx.xx.xxx.i586.rpm
2. Verify that the installation was successful by typing the command appropriate for your operating
system.
Verify command by operating system:
AIX:
HP-UX: # swverify -v <SMpackage>
Solaris: # pkginfo -l <SMpackage>
Linux on POWER:
If the verification process returns an error, contact your IBM service representative.
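For example, on Solaris a manual installation and verification of the runtime package might look like the following (the package file name follows the pattern shown above; substitute the actual version, and confirm the installed package name in the pkgadd output):

# pkgadd -d path/SMruntime-SOL-10.xx.xx.xx.pkg
# pkginfo -l SMruntime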
known as in-band management), make sure that the Fibre Channel, SAS, or iSCSI connection
between the host and storage subsystems is made.
d. Make sure that all of the preparation steps for setting up the storage subsystem for a network
managed system are completed. Use the Add Device option to add the IP addresses of the
storage subsystem. Add both IP addresses of the controller; otherwise, a partially-managed
device error message is displayed when you attempt to manage the storage subsystem.
Note: To use the auto-discovery method, the storage subsystem and this host must be on the
same subnet. Otherwise, use the manual method to add a storage subsystem.
v If you are using the host-agent-management method, complete the following steps:
a. Make sure that the SMagent is installed in the host.
b. Verify that you have a Fibre Channel, SAS, or iSCSI connection from the storage subsystems to the host on which the SMagent is installed. Check the SAN switch zoning or VLAN configuration as required.
c. Verify that all of the preparation steps are complete, and then perform the following steps:
1) Run the hot_add utility.
2) Restart the SMagent.
3) Right-click the host, and click Tools > Rescan Hosts in the Enterprise Management window.
Note: In certain situations, a storage subsystem might be duplicated in the Device tab tree view
after an automatic discovery. You can remove a duplicate storage management icon from the device
tree by using the Remove Device option in the Enterprise Management window.
4. Verify that the status of each storage subsystem is Optimal. If a device shows a status of Unresponsive, right-click the device and select Remove Device to delete it from the management domain. Verify that the storage subsystem is powered on and has completed the start-of-day process. Then use the Add Device option to add it to the management domain again. See the Enterprise Management window online help for instructions for removing and adding devices.
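If you prefer the command line, storage subsystems can also be added to and listed from the management domain with the SMcli program that is installed with the SMclient package. A minimal sketch (the controller IP addresses are examples only):

SMcli -A 192.168.128.101 192.168.128.102
SMcli -d

The -A option adds a storage subsystem by its controller addresses (without addresses it starts automatic discovery), and -d displays the storage subsystems that are currently in the management domain.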
Subsystem > Change Password. After you set the password for each storage subsystem, you are
prompted for that password the first time you attempt a destructive operation in the Subsystem
Management window. You are prompted for the password only once during a single management
session, and the password does not time out. The password does not have password-strength
requirements.
For Storage Manager version 10.50.xx.xx and later with controller firmware version 7.50.xx.xx and later,
you are prompted with a window to set the subsystem management password every time you start the
Subsystem Management window of the storage subsystem that did not have the password set. In
addition, the password times out after a certain duration of Subsystem Management window inactivity.
The password requirement is stricter with Storage Manager version 10.70.xx.xx and later with controller firmware version 7.70.xx.xx and later; the Subsystem Management window will not open if the password is not set. The password must be between 8 and 30 characters and contain at least one number, one lower-case letter, one upper-case letter, and one non-alphanumeric character (for example, < > ! @ + #). Spaces are not permitted, and the password is case-sensitive. Only storage subsystems with controller firmware version 7.70.xx.xx and later prevent the Subsystem Management window from being opened when the subsystem management password is not set; there is no such restriction for earlier controller firmware versions.
Important: Make sure that the password information is kept in a safe and accessible place. Contact IBM
technical support for help if you forget the password to the storage subsystem.
There is a 30-character limit. All leading and trailing spaces are deleted from the name.
Use a unique, meaningful naming scheme that is easy to understand and remember.
Avoid arbitrary names or names that might quickly lose their meaning.
The software adds the prefix Storage Subsystem when it displays storage subsystem names. For
example, if you name a storage subsystem Engineering, it is displayed as Storage Subsystem
Engineering.
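The name can also be set from the command line with SMcli. A sketch, assuming an out-of-band connection and example controller addresses:

SMcli 192.168.128.101 192.168.128.102 -c "set storageSubsystem userLabel=\"Engineering\";"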
The following storage subsystems support iSCSI host ports: DS3300, DS3500, DS3950, DS5020, and DS5100/5300.
The following iSCSI options are available from the Storage Subsystem management menu and are
described in the following sections:
Note: The menu selection in these iSCSI options changes according to the version of the controller
firmware. Refer to the online help for the appropriate menu options.
v Changing target authentication
v Entering mutual authentication permissions on page 36
Using DHCP
Do not use DHCP for the target portals. If you use DHCP, you must assign DHCP reservations so that
leases are maintained consistently across restarts of the storage subsystem. If static IP reservations are not
provided, the initiator ports can lose communication to the controller and might not be able to reconnect
to the device.
8. Click OK.
9. Select Config Parameters.
10. Scroll until you see ISID. For connection 0, the last character that is listed must be 0. For connection
1, it must be 1, for connection 2, it must be 2, and so on.
11. Repeat steps 6 through 10 for each connection to the target that you want to create.
12. After all of the sessions are connected, select Save Target Settings. If you are using the QLogic iSCSI
Single-Port or Dual-Port PCIe HBA for IBM System x to support IPv6, you must allow the host bus
adapter firmware to assign the local link address.
Using IPv6
The storage subsystem iSCSI ports support the Internet Protocol version 6 (IPv6) TCP/IP. Note that only
the final four octets can be configured if you are manually assigning the local link address. The leading
four octets are fe80:0:0:0. The full IPv6 address is required when you are attempting to connect to the
target from an initiator. If you do not provide the full IPv6 address, the initiator might fail to be
connected.
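For example, if you configure the final four groups as 0000:0000:0000:0001 (a hypothetical value for illustration), the full local link address that must be supplied to the initiator is:

fe80:0000:0000:0000:0000:0000:0000:0001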
2. Before you upgrade a DS4800, DS4700, or a DS4200 storage subsystem to controller firmware version
07.1x.xx.xx and later from the current installed firmware version of 6.xx.xx.xx and earlier, see the
procedures in Using the IBM System Storage Controller Firmware Upgrade Tool on page 41.
3. IBM supports the storage subsystem controller and ESM firmware download with I/O, sometimes
called concurrent firmware download, with some storage subsystems. Before you proceed with
concurrent firmware download, review the readme file that is packaged with the firmware code or
your operating-system Storage Manager host software for any restrictions.
4. Suspend all I/O activity while you download firmware and NVSRAM to a storage subsystem with a single controller. If you do not suspend I/O activity, the host server will have failed I/O requests because you do not have redundant controller connections between the host server and the storage subsystem.
5. Always check the storage subsystem controller firmware readme file for any controller firmware
dependencies and prerequisites before you apply the firmware updates to the storage subsystem.
Updating any components of the storage subsystem firmware without complying with the
dependencies and prerequisites might cause downtime (to fix the problems or recover).
6. Downgrading the controller firmware is not a supported function. This option should be used only
under the direction of IBM Support. Downgrading from 07.xx to 06.xx firmware levels is not
supported and will return an error if attempted.
If your existing controller firmware is 06.1x.xx.xx or later, you have the option to select the NVSRAM for
download at the same time that you upgrade or download the new controller firmware. Additionally,
you have the option to download the firmware and NVSRAM immediately but activate it later, when it
might be more convenient. See the online help for more information.
Note: The option to activate firmware at a later time is not supported on the DS4400 storage subsystem.
Product ID: ST3750640NS 43W9715 42D0003IBM
Package version: EP58
Firmware version: 3.AEP
ATA Translator
Product ID: BR-2401-3.0
Vendor: SLI
Firmware Version: LP1158
Method two:
Complete the applicable procedure from the following options to obtain the specified firmware version.
To obtain the controller firmware version:
Right-click the Controller icon on the Physical tab of the Subsystem Management window and
select Properties. The Controller Enclosure properties window opens and displays the properties
for that controller.
You must perform this action for each controller.
To obtain the drive firmware version:
Right-click the Drive icon on the Physical tab of the Subsystem Management window and select
Properties. The Drive Properties window opens and displays the properties for that drive.
You must perform this action for each drive.
To obtain the ESM firmware version:
1. On the Physical tab of the Subsystem Management window, click the Drive Enclosure
Component icon (which is the icon farthest to the right). The Drive Enclosure Component
Information window opens.
2. Click the ESM icon in the left pane. The ESM information is displayed in the right pane of the
Drive Enclosure Component Information window.
3. Locate the firmware version of each ESM in the storage enclosure.
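The controller, ESM, and drive firmware versions are also listed in the storage subsystem profile, which can be captured from the command line with SMcli. A minimal sketch with example controller addresses (redirect the output to a file if you want to keep it):

SMcli 192.168.128.101 192.168.128.102 -c "show storageSubsystem profile;"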
To download firmware version 06.1x.xx.xx or later, and NVSRAM, complete the following steps:
1. From the Enterprise Management window, select a storage subsystem.
2. Click Tools > Manage Device. The Subsystem Management window opens.
3. Click Upgrade > Controller firmware > Upgrade. The Download Firmware window opens.
Note: If the controller firmware is 7.77.xx.xx or later, the system automatically runs a pre-upgrade
check, which takes several minutes. The controller firmware upgrade proceeds only if the pre-upgrade
check is satisfactory. Storage subsystems with controller firmware versions 06.1x.xx.xx and later
support downloading of the NVSRAM file together with the firmware file. This download feature is
not supported in storage subsystems with controller firmware 05.4x.xx.xx or earlier. If your existing
controller firmware is version 05.4x.xx.xx or earlier, only a window for downloading firmware is
displayed.
4. Click Browse next to the Selected firmware file field to identify and select the file with the new
firmware.
5. Select Download NVSRAM file with firmware, and click Browse next to the Selected NVSRAM file field to identify and select the correct NVSRAM file. Unless your configuration has unique conditions, upgrade the NVSRAM at the same time as the controller
firmware. If you choose to transfer and activate immediately, do not select Transfer files but don't
activate them (activate later). Otherwise, select the check box to select Transfer files but don't
activate them (activate later). To activate the firmware at a later time, select the menu option to
activate the controller firmware in the Subsystem Management window.
Checking the device health conditions: To determine the health condition of your device, complete the
following steps:
1. From the Array Management window in the Storage Manager, right-click the storage subsystem.
Storage Manager establishes communication with each managed device and determines the current
device status.
There are six possible status conditions:
v Optimal: Every component in the managed device is in optimal working condition.
v Needs Attention: There is a problem with the managed device that requires intervention to correct
it.
v Fixing: A Needs Attention condition has been corrected, and the managed device is currently
changing to an Optimal status.
v Unresponsive: The management station cannot communicate with the device or with one controller
or both controllers in the storage subsystem.
v Contacting Device: Storage Manager is establishing contact with the device.
v Needs Upgrade: The storage subsystem is running a level of firmware that is no longer supported
by Storage Manager.
2. If the status is Needs Attention, write down the indicated condition. Contact an IBM technical support
representative for fault resolution.
Note: The Recovery Guru in Storage Manager also provides a detailed explanation of, and recovery
procedures for, the conditions.
Opening and using the Controller Firmware Upgrade Tool: To use the Controller Firmware Upgrade
Tool, click Tools > Firmware Upgrade in the Enterprise Management window. The Firmware Upgrade
window opens. The Firmware Upgrade Tool automatically completes a diagnostic check on these
subsystems to determine if they are healthy enough to perform a controller firmware upgrade.
Note:
v For any condition other than Optimal, you must contact IBM support for assistance. See Software
service and support on page xv for additional information.
v You can only upgrade from a major release to a major release (for example, 06.xx to 07.xx) with this tool. Do not attempt to perform this type of firmware upgrade in the Subsystem Management window.
v After you have upgraded to the 07.xx firmware level, you do not need to use the firmware upgrade
tool. Use the Subsystem Management window to perform any future firmware upgrades.
For more information about using the tool, click the Help button in the Controller Firmware Upgrade
Tool.
Adding a storage subsystem: To add a storage subsystem by using the Controller Firmware Upgrade
Tool, complete the following steps:
1. Click Add. The Select Addition Method window opens.
2. Click Automatic or Manual.
3. Click OK to begin adding storage subsystems.
4. If you want to see issues with an added storage subsystem that might impede firmware upgrading,
click View Log.
Downloading the firmware:
1. Select the storage subsystem that you want to activate. The Download button is enabled.
2. From the Enterprise Management window toolbar, click Tools > Upgrade Firmware. The Download
Firmware window opens.
3. Click Browse and select the controller firmware file that you want to download from its directory.
4. Click Browse and select the NVSRAM file from its directory.
5. Click OK. The firmware download begins. A status bar is displayed in the Controller Firmware
Upgrade window.
Viewing the IBM System Storage Controller Firmware Upgrade Tool log file: The Controller Firmware
Upgrade Tool log file documents any issues with the storage system that might prevent you from
updating the firmware. If you encounter any problems when you upgrade the firmware, click View Log
to open the log file. Correct the issues in the log file before you try to download the firmware again.
1. The following procedures assume that you have the latest controller firmware version. If you have an
earlier firmware version, see Finding Storage Manager software, controller firmware, and readme
files on page xiii to obtain the applicable firmware version documentation.
2. IBM supports firmware download with I/O, sometimes referred to as concurrent firmware download.
This feature is not supported for drive firmware. You must schedule downtime for drive and ATA
translator firmware upgrade.
To download drive firmware for Storage Manager, complete the following steps:
1. Before you start the drive-firmware download process, complete the following tasks:
v Complete a full backup of all data on the drives that you select for the firmware upgrade.
v Unmount the file systems on all logical drives that access the drives that you select for the
firmware upgrade.
v Stop all I/O activity before you download drive firmware to a storage subsystem.
2. From the Enterprise Management window, select a storage subsystem.
3. On the Enterprise Management window menu bar, click Tools > Manage Device. The Subsystem
Management window opens.
4. On the Subsystem Management window menu bar, click Upgrade > Drive/ATA translator firmware
upgrade. The Download Drive Firmware wizard window opens to the Introduction page. Read the
instructions and click Next.
Note: Storage Manager offers you the option to download and update up to four different firmware
file types simultaneously.
5. Click Add to locate the server directory that contains the firmware that you plan to download.
6. Select the firmware file that you plan to download and click OK. The file is listed in the Selected
Packages window.
7. Repeat steps 5 and 6 for up to four drive types for which you plan to download firmware, and then click Next. Additional files are listed in the Selected Packages window.
8. Click Browse, and repeat step 7 until you have selected each firmware file that you plan to
download.
9. After you specify the firmware packages for download, click Next.
10. In the Select Drive window, click the Compatible Drives tab. The Compatible Drives page contains a
list of the drives that are compatible with the firmware package types that you selected. From that
list, select the drives to which you plan to download the drive firmware that you selected in steps 7
and 8. You can press and hold the Ctrl key while you select multiple drives individually, or you can
press and hold the Shift key while you select multiple drives that are listed in series.
Note: The firmware that you plan to download must be listed on the Compatible Drives page. If the
product ID of your drives matches the firmware type and it is not listed as compatible on the page,
contact your IBM technical support representative for additional instructions.
11. Click Finish to initiate download of the drive firmware to each compatible drive that you selected in
step 10.
12. When the Download Drive Firmware message opens with the question Do you want to continue?,
type yes and click OK to start the drive firmware download. The Download Progress window
opens. Do not intervene until the download process is completed. Each drive that is scheduled for
firmware download is designated as in progress until successful or failed.
13. If a drive is designated as failed, complete the following steps:
a. Click Save as to save the error log.
b. On the Subsystem Management window menu bar, click the menu option to display the storage
subsystem event log and complete the following tasks that are necessary to save the event log
before you contact your IBM service representative and proceed to the next step.
1) Click Select all.
Complete the following tasks to enable a premium feature on your storage subsystem:
v Obtaining the premium feature enable identifier
v Generating the feature key file
v Enabling the premium feature on page 47
Note: The procedure for enabling a premium feature depends on your version of the Storage Manager.
v Disabling premium features on page 47
To obtain the storage subsystem premium feature identifier string, make sure that your controller unit
and storage enclosures are connected, the power is turned on, and they are managed using the SMclient.
Enabling the premium feature and feature pack in Storage Manager 10.x or later
To enable a premium feature in Storage Manager version 10.x or later, complete the following steps:
1. In the Subsystem Management window, click Storage Subsystem > Premium Features.... The
Premium Features and Feature Pack Information window opens.
2. To enable a premium feature from the list, click Enable or Use key file, depending on the version of
the controller firmware. A window opens that allows you to select the premium feature key file to
enable the premium feature. Follow the on-screen instructions.
3. Verify that the premium feature is enabled by inspecting the displayed list of premium features in the
Premium Features and Feature Pack Information window.
4. Click Close to close the Premium Features and Feature Pack Information window.
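The same task can also be scripted. The following Script Editor command is a rough sketch only: the key file path is a placeholder, and you should confirm the exact syntax for applying a feature key file in the online Command Reference help for your controller firmware before you use it.
// Sketch only: apply a premium feature key file (the path is a placeholder)
enable storageSubsystem feature file="C:\FeatureKeys\myFeature.key";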
Note:
1. For DS3000 storage subsystems with controller firmware version 7.35 or earlier, you cannot use the
Storage Manager interface to disable a premium feature. Instead, you must use the Storage Manager
command-line (SMcli) scripts to disable the premium feature (a hedged example follows these notes).
2. If you want to enable the premium feature in the future, you must reapply the Feature Key file for
that feature.
3. You can disable the Remote Mirror Option without deactivating the feature. If the feature is disabled
but activated, you can perform all mirroring operations on existing remote mirrors. However, when
the feature is disabled, you cannot create any new remote mirrors. For more information about
activating the Remote Mirror Option, see the IBM System Storage DS Storage Manager Copy Services
User's Guide or see Using the Activate Remote Mirroring Wizard in the Storage Manager online
help.
4. If a premium feature becomes disabled, you can access the website and repeat this process following
the re-activating premium features option.
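The following SMcli invocation is a rough sketch of the script-based approach that is mentioned in note 1. The subsystem name and the feature identifier shown here are placeholders; the identifiers that your controller firmware accepts are listed in the online Command Reference help.
SMcli -n DS3400-1 -c "disable storageSubsystem feature=storagePartition2;"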
Near the end of this chapter, the following topics provide optional information that might apply to
configuring your storage subsystems:
v Configuring the IBM System Storage DS5100 and DS5300 for IBM i on page 72
Note: This section applies only to storage configurations that use the IBM i operating system.
v Configuring and using optional premium features on page 74
Note: This section applies only to storage subsystems that have premium features.
v Using other features on page 76
v Tuning storage subsystems on page 80
Note: By default, the Setup tab in the Enterprise Management window opens first when you start
Storage Manager. See Enterprise Management window on page 11 for a detailed description of the
Enterprise Management window.
v If the host has multiple HBAs that are connected to the storage subsystem, create a single host partition
that includes all of them. Use the host group definition only to group a set of hosts that share the same
set of logical drives.
v In a cluster partition, perform logical drive mappings on the host group level so that all of the hosts
can recognize the same storage. In a normal partition, perform logical drive mappings on the host
level.
v To set up and assign IBM i LUNs on the DS5300 and DS5100 storage subsystems with the wizard, see
Configuring the IBM System Storage DS5100 and DS5300 for IBM i on page 72 for information
specific to IBM i configuration.
SAS
For enclosures with an FC midplane, a SAS drive requires an FC-SAS interposer so that it can be
inserted into a drive slot with an FC connector. This category also includes NL SAS drives.
Note: The SAS drive and the FC-SAS interposer are sold as a single entity, known as an
FC-SAS drive.
In addition to the differences in disk drive types and interfaces, there are a few differences in drive
capabilities, such as T10 Protection Information (T10PI) and Full Disk Encryption/Self-Encryption
(FDE/SED). Drive capacities are available for most of the supported drive media types, drive
interfaces, and drive capabilities. The DS subsystem does not support all types of drive media. For more
information about the types of drives that are available and supported for a given storage subsystem, refer
to the DS subsystem RFAs. You can also refer to the DS storage subsystem Installation, User's, and
Maintenance Guide for more information about the FRU part list for the drives that are supported in a
storage subsystem model. A summary of supported drive types, drive interfaces, and drive capabilities is
shown in Table 12.
Table 12. Summary of supported drive types, interfaces and capabilities

Supported Disk Media Type    Drive Interface       Drive capability   FDE capable   Non-FDE capable
Spinning hard disk drives    SATA                  N/A                N/A           N/A
                             Fibre-Channel (FC)    PI capable         Yes           Yes
                                                   Non-PI capable     Yes           Yes
                             NL SAS/SAS            PI capable         Yes           Yes
                                                   Non-PI capable     Yes           Yes
                             FC                    Non-PI capable     N/A           Yes
                             SAS                   Non-PI capable     N/A           Yes
In the Subsystem Management window, the Physical tab has buttons that help identify the various drive
types in a given enclosure, as shown in Figure 5 on page 52. When you click a button, all drives that meet
the button definition are highlighted in the physical view pane. If all the drives in the enclosure meet the
button definition, the button is disabled. For example, in Figure 5 on page 52, all the drives in
enclosure 3 have an FC interface; hence, the Fibre Channel button is displayed as disabled.
v If a drive of required rotational speed is not available, IBM might supply a similar drive with higher
rotational speed as a replacement FRU. The performance of the array is not affected when a higher
rotational speed drive is used as a replacement.
v In a RAID array, drives with capabilities such as T10PI or FDE can be mixed with drives that lack
those capabilities only if the capability is not enabled for that RAID array. For example, you can create a
RAID array with both T10PI-capable and non-T10PI-capable drives; however, the resulting array cannot
operate with T10PI functionality enabled.
v Drives with a lower rotational speed can be used as spares for an array whose drives have a higher
rotational speed. It is recommended that you do not mix drives with different rotational speeds in the
same RAID array, because the performance of the array might be degraded by the lower-speed drives.
v Drives of different sizes can be mixed in a RAID array. However, the array is created as if every drive
had the capacity of the smallest drive in the array.
v A RAID array with additional capabilities enabled, like FDE and T10PI, cannot have drives without the
enabled capabilities used as spares for a failed drive in the RAID array. For example, a RAID array
with T10PI and FDE enabled requires a drive that has T10PI and FDE capabilities as a hot-spare drive.
(Figure: layout of the T10PI protection information metadata, bytes 2 through 8, including a 16-bit CRC
field and a field that may be owned by the application client (initiator) or the device server (target).)
The DS storage subsystem supports the T10PI Type1 host protection scheme. Figure 7 shows where
Protection Information metadata is checked from the application in the host to the drive in the storage
subsystem.
Figure 8 on page 56 shows the properties of a non-FDE T10PI drive in the physical view of the subsystem
management window.
and AIXAVT host type regions to extend the T10PI functionality to the server. Without this bit set
in the host type region, T10PI functionality is enabled between the subsystem controller and
drives only.
Note: Refer to the SSIC for information about the types of FC adapters supported along with the
required device driver, firmware version, and the versions of AIX operating systems that provide
T10PI support at the server.
Note: An additional parameter also exists on the appropriate SMcli commands to indicate whether a
logical drive is created with T10PI enabled.
Figure 10 on page 59 shows a RAID array and its logical drive that has T10PI functionality enabled.
The shield icon indicates that the array is a T10PI capable RAID array.
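As an illustration of the note above, a Script Editor command for creating a T10PI-enabled logical drive might take a form like the following. This is a sketch only: the array name, logical drive label, capacity, and especially the T10PI parameter name (dataAssurance) are assumptions, so confirm the exact syntax in the online Command Reference help for your firmware level.
// Hypothetical sketch; parameter names and values are assumptions, not confirmed syntax
create logicalDrive array="4" userLabel="T10PI_LD_2" capacity=100 GB dataAssurance=enabled;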
Note: You do not have to create all logical drives in a T10PI-capable RAID array with T10PI enabled. For
example, logical drive 4 of RAID array 4 is not T10PI enabled, but logical drives 2 and 5 are T10PI
enabled, as shown in Figure 11 on page 60. However, because you can enable T10PI functionality only at
the time of creation, it is recommended that you create a logical drive with T10PI enabled and then disable
it at a later time, if necessary.
Figure 11. Example - Logical Drive 4 of RAID array 4 - T10PI not enabled
An FDE-capable RAID array is shown with the unlock icon if the array is not secured, or with the lock
icon if the array is secured, as shown in Figure 10. For more information about FDE drives, see
Chapter 6, Working with full disk encryption, on page 143.
Creating an array
To create an array from unconfigured capacity nodes, complete the following steps in the Subsystem
Management window:
1. Use either of the following two methods to create a new array:
v Select Total Unconfigured Capacity, and click Array > Create.
v Select and right-click Total Unconfigured Capacity, and click Create Array.
The Introduction (Create Array) window opens.
2. Click Next. The Array Name & Drive Selection (Create Array) window opens.
3. Take the applicable action for the following fields:
v Array name: Enter a name for the new array. The name can be a maximum of 30 characters.
v Drive selection: Select Automatic or Manual (Advanced).
Automatic
Choose from a list of automatically generated drive and capacity options. This option is
preselected by default.
Manual (Advanced)
Choose specific drives to obtain capacity for the new array.
4. Click Next. The RAID Level and Capacity (Create Array) window opens.
5. Specify the RAID level (redundancy protection).
6. Select the number of drives (overall capacity) for the new array.
7. Click Finish. The Array Created window opens.
   If you want to continue the process to create a logical drive, click Yes; if you want to wait to create a
   logical drive at another time, click No.
RAID-3
RAID-5
RAID-6
RAID-10
Each level provides different performance and protection features. RAID-1, RAID-3, RAID-5, and RAID-6
write redundancy data to the drive media for fault tolerance. The redundancy data might be a copy of
the data (mirrored) or an error-correcting code that is derived from the data. If a drive fails, the
redundant data is stored on a different drive from the data that it protects. The redundant data is used to
reconstruct the drive information on a hot-spare replacement drive. RAID-1 uses mirroring for
redundancy. RAID-3, RAID-5, and RAID-6 use redundancy information, sometimes called parity, that is
constructed from the data bytes and striped along with the data on each disk.
Table 14. RAID level descriptions

RAID-0: Non-redundant, striping mode
    Note: RAID-0 does not provide data redundancy.

RAID-1 or RAID-10: Striping/Mirroring mode
    v A minimum of two drives are required for RAID-1: one for the user data and one for the
      mirrored data. The DS3000, DS4000, or DS5000 storage subsystem implementation of RAID-1 is a
      combination of RAID-1 and RAID-10, depending on the number of drives that are selected. If only
      two drives are selected, RAID-1 is implemented. If you select four or more drives (in multiples of
      two), RAID-10 is automatically configured across the array; two drives are dedicated to user data,
      and two drives are dedicated to the mirrored data.
    v RAID-1 provides high performance and the best data availability. On a RAID-1 logical drive, data
      is written to two duplicate disks simultaneously. On a RAID-10 logical drive, data is striped across
      mirrored pairs.
    v RAID-1 uses disk mirroring to make an exact copy of data from one drive to another drive. If one
      drive fails in a RAID-1 array, the mirrored drive takes over.
    v RAID-1 and RAID-10 are costly in terms of capacity. One-half of the drives are used for
      redundant data.

RAID-3: High-bandwidth mode

RAID-5

RAID-6: Block-level striping with dual distributed parity
Note: One array uses a single RAID level, and all redundancy data for that array is stored within the
array.
The capacity of the array is the aggregate capacity of the member drives, minus the capacity that is
reserved for redundancy data. The amount of capacity that is needed for redundancy depends on the
RAID level that is used.
To perform a redundancy check, click Advanced > Recovery > Check array redundancy. The redundancy
check performs one of the following actions:
v Scans the blocks in a RAID-3, RAID-5, or RAID-6 logical drive and checks the redundancy information
for each block
v Compares data blocks on RAID-1 mirrored drives
Important: When you select Check array redundancy, a warning message opens that informs you to use
the option only when you are instructed to do so by the Recovery Guru. It also informs you that if you
have to check redundancy for any reason other than recovery, you can enable redundancy checking
through Media Scan.
Name Type a name that is unique in the storage subsystem, up to a maximum of 30 characters.
3. Under Advanced logical drive parameters, select one of the following options:
Use recommended settings
Select this option to create the logical drive, using the storage subsystem default settings.
After you select Use recommended settings, click Next. Proceed to step 5.
Customize settings (I/O characteristics and controller ownership)
Choose this option to customize your I/O characteristics, controller ownership, and
logical-drive-to-LUN mapping settings. After you select Customize settings, click Next.
Proceed to step 4.
4. In the Advanced logical drive parameters window, specify the applicable I/O characteristics
(characteristics type, segment size, and cache read-ahead multiplier) and click Next. The Specify
Logical Drive-to-LUN Mapping (Create Logical Drive) window opens.
Note: The I/O characteristics settings can be set automatically or they can be specified manually,
according to one of the following logical drive usages: file system, database, or multimedia.
5. In the Specify Logical Drive-to-LUN Mapping (Create Logical Drive) window, specify the logical
drive-to-LUN mapping.
The logical drive-to-LUN mapping preference can be one of the following two settings:
Default mapping
The Automatic setting specifies that a LUN is automatically assigned to the logical drive,
using the next available LUN within the default host group. This setting grants logical drive
access to host groups or host computers that have no specific logical drive-to-LUN mappings
(those that were designated by the default host group node in the Topology view). If the
Storage Partition feature is not enabled, you must specify the Automatic setting. In addition,
you can also change the host type to match the host operating system.
After you create logical drives with automatic logical drive-to-LUN mappings, follow the applicable
instructions for your operating system in Identifying devices on page 120 to discover the new logical
drive.
capacity of the largest drive in the storage subsystem. For maximum data protection, you must use only
the largest capacity drives for hot-spare drives in mixed capacity hard drive configurations. There is also
an option to manually unassign individual drives.
If a drive fails in the array, the hot spare can be substituted automatically for the failed drive without
requiring your intervention. If a hot spare is available when a drive fails, the controller uses redundancy
data to reconstruct the data onto the hot spare.
Note: Drives with different interface protocols or technologies cannot be used as hot-spares for each
other. For example, SATA drives and Fibre Channel drives cannot act as hot spares for each other.
Note:
In the Veritas Storage Foundation Linux environment, the default host type must be set to 13.
Host type VMWARE has been added to NVSRAM as an additional host type. DS4200 and DS4700 will
use index 21.
All other supported systems will use index 16.
Although it is not required, if you are using a Linux host type for a VMware host, it is recommended that
you change to the VMWARE host type: with the Linux host type, upgrading the controller firmware and
NVSRAM continues to require running scripts, whereas the VMWARE host type does not require running scripts.
v The controllers do not need to be rebooted after the change of host type
1. During host-port definition, you must set each host type to the applicable operating system so that
the firmware on each controller can respond correctly to the host.
2. You must enable storage partitioning, which is a premium feature. You must use the partition key that
you saved at installation or go to the IBM webpage for feature codes to reactivate and obtain a new
feature key. For more information about premium features, see Storage Manager premium features
on page 45.
6. Repeat step 5 for each LUN that you want to map to the partition.
Note: You can also use the Storage Partitioning wizard feature of the Storage Manager Task Assistant to
map LUNs to a new storage partition.
Configuring the IBM System Storage DS5100 and DS5300 for IBM i
Use the information in the following sections, in combination with the Configuring disk storage on
page 63 and Defining a host group on page 70 sections, to set up and assign IBM i LUNs on the
DS5100 and DS5300 storage subsystems with the Storage Manager software.
About FlashCopy
A FlashCopy logical drive is a logical point-in-time image of a logical drive, called a base logical drive. A
FlashCopy logical drive has the following features:
v It is created quickly and requires less disk space than an actual logical drive.
v It can be assigned a host address, so that you can perform backups with the FlashCopy logical drive
while the base logical drive is online and accessible.
v You can use the FlashCopy logical drive to perform application testing or scenario development and
analysis. This does not affect the actual production environment.
v The maximum number of allowed FlashCopy logical drives is one-half of the total logical drives that
are supported by your controller model.
For additional information about the FlashCopy feature and how to manage FlashCopy logical drives, see
the Storage Manager Subsystem Management window online help.
Important: The FlashCopy drive cannot be added or mapped to the same server that has the base
logical drive of the FlashCopy logical drive in a Windows 2000, Windows Server 2003, or NetWare
environment. You must map the FlashCopy logical drive to another server.
Using VolumeCopy
The VolumeCopy feature is a firmware-based mechanism for replicating logical drive data within a
storage subsystem. This feature is designed as a system-management tool for tasks such as relocating
data to other drives for hardware upgrades or performance management, data backup, or restoring
snapshot logical drive data. Users submit VolumeCopy requests by specifying two compatible drives. One
drive is designated as the source and the other as the target. The VolumeCopy request is persistent so that
any relevant result of the copy process can be communicated to the user. For more information about this
feature, contact your IBM reseller or marketing representative.
Attention: For maximum data integrity, do not enable the write-caching without batteries parameter,
because data in the cache is lost during a power outage if the controller enclosure does not have working
batteries. Instead, contact IBM service to get a battery replacement as soon as possible to minimize the
time that the storage subsystem is operating with write-caching disabled.
RAID-3, RAID-5, or RAID-6 logical drives, a redundancy data check scans the data blocks,
calculates the redundancy data, and compares it to the read redundancy information for each
block. It then repairs any redundancy errors, if required. For a RAID-1 logical drive, a
redundancy data check compares data blocks on mirrored drives and corrects any data
inconsistencies.
Do not use this setting on older DS storage subsystems such as the DS4500, DS4400, DS4300, or
DS4100; redundancy checking has a negative effect on storage subsystem performance.
For newer storage subsystems, such as the DS5100, DS5300, DS5020, or DS3950, this setting does
not cause performance degradation.
When it is enabled, the media scan runs on all of the logical drives in the storage subsystem that meet
the following conditions:
v The logical drive is in an optimal status.
v There are no modification operations in progress.
v The Media Scan parameter is enabled.
Note: Media Scan must be enabled for the entire storage subsystem and enabled on each logical drive
within the storage subsystem to protect the logical drive from failure due to media errors.
Media Scan reads only data stripes, unless there is a problem. When a block in the stripe cannot be read,
the read command is retried a certain number of times. If the read continues to fail, the controller calculates
what that block must be and issues a write-with-verify command on the stripe. As the disk attempts to
complete the write command, if the block cannot be written, the drive reallocates sectors until the data
can be written. Then the drive reports a successful write and Media Scan checks it with another read.
There must not be any additional problems with the stripe. If there are additional problems, the process
repeats until there is a successful write, or until the drive is failed because of many consecutive write
failures and a hot-spare drive takes over. Repairs are made only on successful writes, and the drives are
responsible for the repairs. The controller issues only write-with-verify commands. Therefore, data stripes
can be read repeatedly and report bad sectors, but the controller calculates the missing information with
RAID.
In a dual-controller storage subsystem, there are two controllers that handle I/O (Controllers A and B).
Each logical drive that you create has a preferred controller that normally handles I/O for it. If a
controller fails, the I/O for logical drives that is owned by the failed controller fails over to the other
controller. Media Scan I/O is not impacted by a controller failure, and scanning continues on all
applicable logical drives when there is only one remaining active controller.
If a drive is failed during the media-scan process because of errors, normal reconstruction tasks are
initiated in the controller operating system, and Media Scan attempts to rebuild the array using a
hot-spare drive. While this reconstruction process occurs, no more media-scan processing is done on that
array.
Note: Because additional I/O reads are generated for media scanning, there might be a performance
impact, depending on the following factors:
v The amount of configured storage capacity in the storage subsystem. The greater the amount of
configured storage capacity in the storage subsystem, the greater the performance impact.
v The configured scan duration for the media-scan operations. The longer the scan, the lower the
performance impact.
v The status of the redundancy check option (enabled or disabled). If redundancy check is enabled, the
performance impact is greater.
The media-scan process discovers any errors and reports them to the storage subsystem major event log
(MEL). The following table lists the errors that are discovered during a media scan.
Table 15. Errors discovered during a media scan

Error                   Description                                                     Result
Redundancy mismatches
Unfixable error         The data could not be read, and parity or redundancy            The error is reported to the event log.
                        information could not be used to regenerate it. For example,
                        redundancy information cannot be used to reconstruct data on
                        a degraded logical drive.
For controller firmware version 7.60.39.00 and later, the redundancy check option is enabled as a
default setting for any newly created logical drives. If you want an existing logical drive that was
created before version 7.60.39.00 or later was installed to have the redundancy check option enabled,
you must enable the option manually.
Without redundancy check enabled, the controller reads the data stripe to confirm that all the data can
be read. If it reads all the data, it discards the data and moves to the next stripe. When it cannot read a
block of data, it reconstructs the data from the remaining blocks and the parity block and issues a
write with verify to the block that could not be read. If the block has no data errors, Media Scan takes
the updated information and verifies that the block was fixed. If the block cannot be rewritten, the
drive allocates another block to take the data. When the data is successfully written, the controller
verifies that the block is fixed and moves to the next stripe.
Note: With redundancy check, Media Scan goes through the same process as without redundancy
check, but, in addition, the parity block is recalculated and verified. If the parity has data errors, the
parity is rewritten. The recalculation and comparison of the parity data requires additional I/O, which
can affect performance.
Important: Changes to the Media Scan settings do not go into effect until the current media-scan cycle is
completed.
To change the Media Scan settings for an entire storage subsystem, complete the following steps:
1. Select the storage subsystem entry on the Logical or Physical tab of the Subsystem Management
window.
2. Click Storage Subsystem > Change > Media Scan Settings.
To change the Media Scan settings for a logical drive, complete the following steps:
1. Select the logical drive entry on the Logical or Physical tab of the Subsystem Management window.
2. Click Storage Subsystem > Change > Media Scan Settings.
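These settings can also be changed with Script Editor or SMcli commands. The following is a sketch only; the logical drive name and the scan duration are placeholders, and the parameter names should be confirmed in the online Command Reference help for your firmware level.
// Set the media scan duration, in days, for the whole storage subsystem
set storageSubsystem mediaScanRate=15;
// Enable media scan and the redundancy check on one logical drive
set logicalDrive ["Accounting"] mediaScanEnabled=TRUE redundancyCheckEnabled=TRUE;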
controllers to monitor and the polling interval. You can also receive storage subsystem totals, which is
data that combines the statistics for both controllers in an active-active controller pair.
Table 16. Performance Monitor tuning options in the Subsystem Management window

Data field               Description
Total I/Os               Total I/Os that have been performed by this device since the beginning of the
                         polling session.
Read percentage          The percentage of total I/Os that are read operations for this device. Write
                         percentage is calculated as 100 minus this value.
Cache-hit percentage     The percentage of read operations that are processed with data from the cache,
                         rather than requiring a read from the logical drive.
Current KB per second    During the polling interval, the transfer rate is the amount of data, in KB, that is
                         moved through the Fibre Channel I/O path in 1 second (also called throughput).
Maximum KB per second    The maximum transfer rate that is achieved during the Performance Monitor
                         polling session.
Current I/O per second   The average number of I/O requests that are serviced per second during the
                         current polling interval (also called an I/O request rate).
Maximum I/O per second   The maximum number of I/O requests that are serviced during a 1-second
                         interval over the entire polling session.
Operating system    Multi-path driver    Load-balancing policy
AIX                 MPIO
                    RDAC
Solaris             MPxIO                Round robin
                    RDAC
Windows             MPIO
ownership changes. The basic assumption for the round robin policy is that the data paths are equal.
With mixed host support, the data paths might have different bandwidths or different data transfer
speeds.
v The RAID level
v The logical-drive modification priority
v The segment size
v The number of logical drives in the arrays or storage subsystem
v The fragmentation of files
Note: Fragmentation affects logical drives with sequential I/O access patterns, not random I/O access
patterns.
Enabling write-caching
Higher I/O write rates occur when write-caching is enabled, especially for sequential I/O access patterns.
Regardless of the I/O access pattern, be sure to enable write-caching to maximize the I/O rate and
shorten the application response time.
Applications with a low read percentage (write-intensive) do not perform as well on RAID-5 logical
drives because of the way that a controller writes data and redundancy data to the drives in a RAID-5
logical drive. If there is a low percentage of read activity relative to write activity, you can change the
RAID level of a logical drive from RAID-5 to RAID-1 for faster performance.
Important: Use caution when you run the commands; the Script Editor does not prompt you for
confirmation of operations that are destructive, such as the Delete arrays and Reset Storage Subsystem
configuration commands.
Not all script commands are implemented in all versions of the controller firmware. The earlier the
firmware version, the smaller the set of available script commands. For more information about script
commands and firmware versions, see the Storage Manager Enterprise Management window.
For a list of available commands and their syntax, see the online Command Reference help.
In the Script view, you can input and edit script commands. The Output view displays the results of the
operations. The Script view supports the following editing key strokes:
Ctrl+A
Selects everything in the window
Ctrl+C
Copies the marked text in the window into a Windows clipboard buffer
Ctrl+V
Pastes the text from the Windows clipboard buffer into the window
Ctrl+X Deletes (cuts) the marked text in the window
Ctrl+Home
Moves the cursor to the top of the script window
Ctrl+End
Moves the cursor to the bottom of the script window
The following list shows general guidelines for using the Script Editor:
v All statements must end with a semicolon (;).
v Each command and its associated primary and secondary parameters must be separated by a space.
v The Script Editor is not case-sensitive.
v Each new statement must begin on a separate line.
v Comments can be added to your scripts to make it easier for you and other users to understand the
purpose of the command statements.
The Script Editor supports the two following comment formats:
v Text contained after two forward slashes (//) until an end-of-line character is reached
For example:
//The following command assigns hot spare drives.
set drives [1,2 1,3] hotspare=true;
The comment //The following command assigns hot spare drives. is included for clarification and is
not processed by the Script Editor.
Important: You must end a comment that begins with // with an end-of-line character, which you
insert by pressing the Enter key. If the script engine does not find an end-of-line character in the script
after processing a comment, an error message displays and the script fails.
v Text contained between the /* and */ characters
For example:
/* The following command assigns hot spare drives.*/
set drives [1,2 1,3] hotspare=true;
The comment /*The following command assigns hot spare drives.*/ is included for clarification and
is not processed by the Script Editor.
Important: The comment must start with /* and end with */. If the script engine does not find both a
beginning and ending comment notation, an error message displays and the script fails.
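For example, the following short script combines both comment styles with commands of the kind shown above; the drive locations are placeholders:
/* Assign two drives as hot spares. */
set drives [1,2 1,3] hotspare=true;
// Display the overall health of the storage subsystem after the change.
show storageSubsystem healthStatus;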
c. Select the HBA that you want to use for SAN booting and configure the BIOS so that the boot
LUN is designated as the preferred boot device. After the storage subsystem has discovered the
HBA WWPNs, you must configure them as the HBAs to the boot LUN, using the host-mapping
procedures.
Note:
1) The HBA must be logged in to the storage subsystem. Even though no LUN will be available
yet, you can use the BIOS to discover the storage subsystem.
2) For more information, see the documentation that came with your HBA.
d. Save the changes, exit BIOS, and restart the server. The BIOS can now be used to discover the
newly configured LUN.
4. Start the installation by booting from the installation media:
a. During the installation, your operating-system media asks which drive (or LUN) you want to
perform the installation. Select the drive that corresponds to your storage subsystem device.
Note: If you are prompted during the installation for third-party device drivers, select the HBA
driver that you have available on another form of media.
b. Choose the default option for disk partitioning.
Note: Make sure that the LUN you choose is large enough for the operating system. For Linux,
and most other operating systems, 20 GB is enough for the boot device. For swap partitions, make
sure that the size is at least the size of your server physical memory.
5. Complete the installation and finish the SAN boot procedure:
a. Restart the server again, and open the boot options menu. The boot device that you set up is
ready to be used.
b. Select the option to boot from a hard disk drive/SAN, and select the HBA that is associated with
the SAN disk device on which the installation was completed. The installation boot device is now
listed in the bootable devices that are discovered on the selected HBA.
c. Select the applicable device, and boot.
d. Set the installed boot device as the default boot device for the system.
Note: This step is not required. However, the installed boot device must be the default boot
device to enable unattended reboots after this procedure is complete.
e. Linux only To complete the installation on Linux, complete the following steps:
1) Verify that the persistent binding for /var/mpp/devicemapping is up-to-date. The
/var/mpp/devicemapping file tells RDAC which storage subsystem to configure first. If
additional storage subsystems will be added to the server, the storage subsystem with the
boot/root volume must always be first in the device mapping file. To update this file, execute
the following command:
# mppUpdate
2) After you run the # mppUpdate command, cat the /var/mpp/devicemapping file with the
following command:
# cat /var/mpp/devicemapping
0:<DS4x00 SAN Boot Device>
The storage subsystem for the boot/root volume must be at entry 0. If the boot/root volume is
not at entry 0, edit the file to reorder the storage subsystem entries so that the array for the
boot/root volume is at entry 0.
3) Execute the # mppUpdate command. The installation is now complete.
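For reference, a correctly ordered /var/mpp/devicemapping file on a server that is attached to two storage subsystems might look like the following; the subsystem names are placeholders:
0:<Boot_DS_Subsystem>
1:<Second_DS_Subsystem>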
Additional paths between the storage subsystem and server can now be added. If the server is going to
be used to manage the storage subsystem, the Storage Manager can now be installed on the server.
For additional information about using multipath drivers, see Using multipath drivers to automatically
manage logical drive fail-over and fail-back.
Operating system    Multipath driver
AIX
HP-UX
Linux
Mac OS
NetWare             Novell MPE
Solaris
SVC                 SDD
VMware              NMP
Windows             MPIO
With the exception of Windows MPIO, multipath driver files are not included on the Storage Manager
DVD. Check the SSIC and the Storage Manager readme file for the minimum file set versions that are
required for your operating system. To learn how to find the readme files on the web, see Finding
Storage Manager software, controller firmware, and readme files on page xiii. To install the multipath
driver, follow the instructions in Installing a multipath driver on page 95.
Multipathing refers to the ability of the host to recognize multiple paths to the storage device. This is
done by using multiple HBA ports or devices within the host server that are connected to SAN fabric
switches, which are also connected to the multiple ports on the storage devices. For the storage products
that are referred to as DS3000, DS4000, or DS5000, these devices have two controllers within the storage
subsystem that manage and control the disk drives. These controllers behave in either active or passive
fashion. Ownership and control of a particular LUN is done by one controller. The other controller is in a
passive mode until a failure occurs, at which time the LUN ownership is transferred to that controller.
Each controller might have more than one fabric port for connectivity to the SAN fabric.
Figure 18 shows a sample multipath configuration for all supported operating systems except AIX
fcp_array and Solaris RDAC multipath configurations. Figure 19 on page 93 shows a sample multipath
configuration for the AIX fcp_array, Microsoft Windows RDAC (no longer supported), and Solaris RDAC
multipath configurations.
See Disk drives supported by IBM System Storage DS Storage Manager on page 50 for more information.
Figure 18. Host HBA to storage subsystem controller multipath sample configuration for all multipath drivers except
AIX fcp_array and Solaris RDAC
Figure 19. Host HBA to storage subsystem controller multipath sample configuration for AIX fcp_array and Solaris
RDAC multipath drivers
Most multipath drivers can support multiple paths. Table 19 shows the number of paths each driver can
support. Note that the AIX fcp_array and Solaris RDAC can support only two paths, one to each
controller.
Table 19. Number of paths each multipath driver supports by operating system

Driver           Number of paths    Default
AIX MPIO         Unlimited          Not applicable
AIX RDAC         2                  Not applicable
HP-UX native     65,536             Not applicable
HP-UX PVlinks    8,192              Not applicable
Linux MPP        Unlimited          Unlimited
Mac OS           Unlimited          Not applicable
Solaris MPxIO    Unlimited          Not applicable
Solaris RDAC     2                  Not applicable
SVC              32                 Not applicable
VMware           Unlimited          Not applicable
In this zoning scheme (denoted by the translucent bar), one HBA port is zoned to one controller host port.
In this zoning scheme (denoted by the translucent bars), one HBA port is zoned to two controller host ports.
Important: Not all DS4000 and DS5000 controller firmware versions support this functionality. Only
DS4000 and DS5000 controller firmware versions 06.12.27.xx (and later) for DS4300 standard or turbo
models, and DS4500 storage subsystems or versions 6.16.8x.xx (and later) for DS4200, DS4700, and
DS4800 storage subsystems support the SCSIport Miniport device driver.
Before you install the device driver, check the readme file that is included in the device driver package
file, as well as the readme file included with the Storage Manager host software for Windows, to see
which device drivers and controller firmware versions are supported for DS3000, DS4000, or DS5000
storage subsystems. See Finding Storage Manager software, controller firmware, and readme files on
page xiii to learn how to access the most recent Storage Manager readme files on the web. Follow the
readme device driver installation instructions that are associated with your operating system.
Note: Read the device driver readme file for any required modifications to the default HBA BIOS and
host operating system registry settings to provide optimal performance. If you make any changes to the
HBA BIOS settings, the server must be rebooted for the changes to be enabled.
For more information, see the documentation that came with your Fibre Channel HBA.
Device mapper multipath (DMMP or DM-MP) is supported on SLES11, SLES11 SP1, RHEL 6.0, RHEL 6.1
or their later versions.
Component             Minimum version                                   Download location
Kernel                kernel-default-2.6.27.29-0.1.1                    http://download.novell.com/patch/finder
scsi_dh_rdac driver   lsi-scsi_dh_rdac-kmp-default-0.0_2.6.27.19_5-1    http://drivers.suse.com/driverprocess/pub/update/LSI/sle11/common/x86_64/
Device mapper         device-mapper-1.02.27-8.6                         http://download.novell.com/patch/finder
Kpartx                kpartx-0.4.8-40.6.1                               http://download.novell.com/patch/finder
Multipath_tools       multipath-tools-0.4.8-40.6.1                      http://download.novell.com/patch/finder
Ensure that you install all the dependent packages before continuing further. For more details, refer to the
SUSE Linux Enterprise Server 11 Installation and Administration Guide on the Novell/SUSE website.
Complete the following steps to install device mapper multipath on SLES11 base:
1. Use the media supplied with the operating system vendor to complete the installation of SLES 11.
2. Download and install the errata kernel 2.6.27.29-0.1.
3. Reboot to the 2.6.27.29-0.1 kernel.
4. Install device-mapper-1.02.27-8.6.
5. Install kpartx-tools-0.4.8-40.6.1.
6. Install multipath-tools-0.4.8-40.6.1.
7. Update and configure /etc/multipath.conf. A sample file is stored at /usr/share/doc/packages/
multipath-tools/multipath.conf.synthetic. Copy and rename this file to /etc/multipath.conf.
Refer to Working with Multipath.conf file on page 99 for more details.
8. Enable multipathd service using the following command: #chkconfig multipathd on.
9. Edit the /etc/sysconfig/kernel file to add scsi_dh_rdac to the INITRD_MODULES list. This should
add the scsi_dh_rdac to initrd.
10. Install lsi-scsi_dh_rdac-kmp-default-0.0_2.6.27.19_5-1.
11. Reboot the host.
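A condensed command-line view of steps 7 and 8 above follows, assuming the default SLES 11 file locations:
# cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf
# chkconfig multipathd on
(Then add scsi_dh_rdac to the INITRD_MODULES list in /etc/sysconfig/kernel, install the lsi-scsi_dh_rdac package, and reboot, as described in steps 9 through 11.)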
Installing the Device Mapper Multi-Path on RHEL 6.0, RHEL 6.1 or later
All the components required for DMMP are included in RHEL 6 and 6.1 installation media. By default,
DMMP is disabled. Complete the following steps to enable DMMP components on the host.
1. Use the media supplied with the operating system vendor to complete installation of RHEL 6.0, RHEL
6.1 or later.
2. Update and configure /etc/multipath.conf. A sample file is stored at /usr/share/doc/packages/
multipath-tools/multipath.conf.synthetic. Copy and rename this file to /etc/multipath.conf. Refer
to Working with Multipath.conf file for more details.
3. Enable multipathd service using the following command: #chkconfig multipathd on
4. Create an initramfs image using the scsi_dh_rdac driver:
a. Create a file scsi_dh_alua.conf in the /etc/modprobe.d/ directory.
b. In this file, add the following: alias scsi_hostadapter99 scsi_dh_rdac
5. Run the following command to create an initramfs image:#dracut -f /boot/initrd-$(uname
-r)-scsi_dh $(uname -r)
6. Update the boot loader configuration file (grub.conf , lilo.conf, or yaboot.conf) using initramfs.
7. Reboot the host to boot with the new initramfs image.
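The commands in steps 3 through 5 above can be summarized as follows; the echo line is simply one way to create the file that is described in step 4:
# chkconfig multipathd on
# echo "alias scsi_hostadapter99 scsi_dh_rdac" > /etc/modprobe.d/scsi_dh_alua.conf
# dracut -f /boot/initrd-$(uname -r)-scsi_dh $(uname -r)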
v For RHEL, the file is called multipath.conf.defaults and is stored in this directory:
/usr/share/doc/device-mapper-multipath-0.4.9/.
2. Rename the file multipath.conf.
3. Make the configuration changes described in this section to the new /etc/multipath.conf file. The
content of the sample multipath.conf file varies, depending on whether it is from SLES or RHEL
kernels.
Note: All entries for multipath devices are commented out initially. To uncomment, remove the first
character (#) from that section. You must uncomment the three sections - default, blacklist, and
devices.
The configuration file is divided into five sections:
defaults
Specifies all of the default values.
blacklist
Blacklists new installations. The default blacklist is listed in the commented-out section of
the/etc/multipath.conf file. Blacklist the device mapper multipath by WWID if you do not
want to use this functionality.
blacklist_exceptions
Specifies any exceptions to the items in the blacklist section.
devices
Lists all of the multipath devices with their matching vendor and product values.
multipaths
Lists all of the multipath devices with their matching WWID values.
To determine the attributes of a multipath device, check the multipaths section of the /etc/
multipath.conf file; then the devices section; and then the defaults section. Depending on the version of
the Linux kernel, the devices section of the sample multipath.conf file might already have settings
defined for your storage subsystem model product ID. All you need to do is verify that the settings
match the recommended settings listed below. Otherwise, you have to manually enter the devices settings
for your subsystem model product ID. If you have multiple storage subsystems with different product
IDs connected to the Linux host, add the device settings for each storage subsystem product ID in the
devices section of the /etc/multipath.conf file. Sample settings for DS3500 (product ID 1746) and
DS5100/DS5300 (product ID 1818) in the devices section of the multipath.conf file in SLES operating
systems are shown below:
Note: If the Product ID exceeds four characters, use only the first four characters. In the following
example, although the Product ID is '1746 FAStT', the product is specified as '1746'. Similarly, '1818 FAStT'
is specified as '1818'.
devices {
  device {
    vendor                  "IBM"
    product                 "1746"
    path_grouping_policy    group_by_prio
    getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
    path_selector           "round-robin 0"
    path_checker            rdac
    features                "2 pg_init_retries 50"
    hardware_handler        "1 rdac"
    prio                    rdac
    failback                immediate
    no_path_retry           15
    rr_min_io               100
    rr_weight               priorities
  }
  device {
    vendor                  "IBM"
    product                 "1818"
    path_grouping_policy    group_by_prio
    getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
    path_selector           "round-robin 0"
    path_checker            rdac
    features                "2 pg_init_retries 50"
    hardware_handler        "1 rdac"
    prio                    rdac
    failback                immediate
    no_path_retry           15
    rr_min_io               100
    rr_weight               priorities
  }
}
Sample settings for DS3500 (product ID 1746) and DS5100/DS5300 (product ID 1818) in the devices
section of the multipath.conf file in RHEL operating systems are shown below:
devices {
  device {
    vendor                  "IBM"
    product                 "1746"
    path_grouping_policy    group_by_prio
    getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
    path_selector           "round-robin 0"
    path_checker            rdac
    features                "2 pg_init_retries 50"
    hardware_handler        "1 rdac"
    prio                    rdac
    failback                immediate
    no_path_retry           15
    rr_min_io               100
    rr_weight               priorities
  }
  device {
    vendor                  "IBM"
    product                 "1818"
    path_grouping_policy    group_by_prio
    getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
    path_selector           "round-robin 0"
    path_checker            rdac
    features                "2 pg_init_retries 50"
    hardware_handler        "1 rdac"
    prio                    rdac
    failback                immediate
    no_path_retry           15
    rr_min_io               100
    rr_weight               priorities
  }
}
If you have an Access LUN (sometimes referred to as a UTM LUN) mapped to the host partitions, include
an entry in the blacklist section of the /etc/multipath.conf file so that it is not managed by the DMMP.
The Storage Manager host software uses the Access LUN for in-band management of the storage
subsystem. The entries should follow the pattern of the following example:
blacklist {
  device {
    vendor  "*"
    product "Universal Xport"
  }
}
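After you update /etc/multipath.conf, a quick way to confirm that the multipath devices are built as expected is to list the multipath topology; this assumes that the multipathd service is already running:
# multipath -ll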
The following table describes the attributes and values in the devices section of the /etc/multipath.conf
file.
Table 21. Attributes and parameter values in the multipath.conf file

Attribute               Parameter value
path_grouping_policy    group_by_prio
prio                    rdac
getuid_callout          For SLES: "/lib/udev/scsi_id -g -u -d /dev/%n"
                        For RHEL: "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
polling_interval
path_checker            rdac
path_selector           "round-robin 0"
hardware_handler        "1 rdac"
failback                immediate
features
no_path_retry           30
rr_min_io               100
rr_weight               priorities
In the preceding example, the multipath device node for this device is /dev/mapper/mpathp and
/dev/dm-0. The following table lists some basic options and parameters for the multipath command.
Table 22. Options and parameters for the multipath command

Command             Description
multipath -h        Prints usage information.
multipath -ll       Shows the current multipath topology, using all available information.
multipath -f map    Flushes the multipath device map that is specified by the map option.
multipath -F        Flushes all unused multipath device maps.
where package_version is the SLES or RHEL package version number. As a result, a directory called
linuxrdac-version# or linuxrdac is created.
4. Open the readme that is included in the linuxrdac-version# directory.
5. In the readme, find the instructions for building and installing the driver and complete all of the
steps.
Note: Be sure to restart the server before you proceed to the next step.
6. Type the following command to list the installed modules:
# lsmod
7. Verify that module entries are included in the following lsmod list.
Module entries for SLES or RHEL:
v mppVhba
v mppUpper
v lpfc (or qla2xxx for BladeCenter configurations)
v lpfcdfc (if ioctl module is installed)
Note: If you do not see the mppVhba module, the likely cause is that the server was rebooted before
the LUNs were assigned, so the mppVhba module was not installed. If this is the case, assign the
LUNs now, restart the server, and repeat this step.
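As a convenience, you can filter the lsmod output for the expected modules instead of reading the full list; this is only a sketch:
# lsmod | grep -E "mppVhba|mppUpper|lpfc|qla2xxx"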
/proc/mpp/DS4100-sys1:
total 0
dr-xr-xr-x  3 root root 0 Oct 24 02:56 controllerA
dr-xr-xr-x  3 root root 0 Oct 24 02:56 controllerB
-rw-r--r--  1 root root 0 Oct 24 02:56 virtualLun0
-rw-r--r--  1 root root 0 Oct 24 02:56 virtualLun1
-rw-r--r--  1 root root 0 Oct 24 02:56 virtualLun2
-rw-r--r--  1 root root 0 Oct 24 02:56 virtualLun3
-rw-r--r--  1 root root 0 Oct 24 02:56 virtualLun4
-rw-r--r--  1 root root 0 Oct 24 02:56 virtualLun5

/proc/mpp/DS4100-sys1/controllerA:
total 0
dr-xr-xr-x  2 root root 0 Oct 24 02:56 lpfc_h6c0t2

/proc/mpp/DS4100-sys1/controllerA/lpfc_h6c0t2:
total 0
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN0
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN1
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN2
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN3
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN4
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN5

/proc/mpp/DS4100-sys1/controllerB:
total 0
dr-xr-xr-x  2 root root 0 Oct 24 02:56 lpfc_h5c0t0

/proc/mpp/DS4100-sys1/controllerB/lpfc_h5c0t0:
total 0
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN0
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN1
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN2
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN3
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN4
-rw-r--r--  1 root root 0 Oct 24 02:56 LUN5
Note: After you install the RDAC driver, the following commands and pages are available:
v mppUtil
v mppBusRescan
v mppUpdate
v RDAC
website. On that website, see information about installing the Celerity HBA driver and installing the
ATTO Configuration Tool in the ATTO Celerity MultiPath Director Installation and Operation Manual.
Important: After you configure a LUN, you must reboot your server for the LUN to be detected.
HP-UX PV-links
If an HP-UX system is attached with two host bus adapters to the storage subsystem, you can establish
redundant access to storage with physical volume links (PV-links), a feature of the HP-UX operating
system. PV-links achieve access redundancy with devices that have both primary and secondary paths to
the same device.
Important:
v There are two methods for establishing redundant access to storage using PV-links:
If you have controller firmware version 07.xx.xx.xx, 06.xx.xx.xx, or 05.xx.xx.xx, use the method
described in Using PV-links: Method 1.
If you have controller firmware version 04.xx.xx.xx, use the method described in Using PV-links:
Method 2 on page 107.
v SMutil must be installed on the host for either method.
Note: If you do not see the logical drives and logical drive accesses after you run the hot_add and
SMdevices commands, use the reboot command to restart the HP-UX host.
3. Determine the preferred and alternate paths for each logical drive by examining the output from the
SMdevices command, as shown in the previous example. Notice that each device is listed twice; one
instance is the preferred path and one instance is the alternate path.
Preferred path
In the following sample output, the preferred path is /dev/rdsk/c166t0d0.
/dev/rdsk/c166t0d0 [Storage Subsystem DS4000, Logical Drive
Accounting, LUN 0, Logical Drive WWN <600a0b80000f56d00000001e3eaead2b>,
Preferred Path (Controller-B):
In Use]
Alternate path
In the following sample output, the alternate path is /dev/rdsk/c172t0d0.
/dev/rdsk/c172t0d0 [Storage Subsystem DS4000, Logical Drive
Accounting, LUN 0, Logical Drive WWN <600a0b80000f56d00000001e3eaead2b>,
Alternate Path (Controller-A):
Not In Use]
Note: If you do not see the logical drives and logical drive accesses after you run the hot_add and
SMdevices commands, use the reboot command to restart the HP-UX host.
3. Determine the preferred and alternate paths for each logical drive by examining the output from the
SMdevices command, as shown in the preceding example.
Notice that each device is listed twice; one instance is the preferred path and one instance is the
alternate path. Also, notice that each device has a worldwide name (WWN). Part of the WWN of each
logical drive is unique for each controller in the storage subsystem. The WWNs for the logical drive
access in the preceding example differ in only five digits, f56d0 and f5d6c.
The devices in the preceding example are viewed through the controllers c166 and c172. To determine
the preferred path of a specific logical drive that is seen by the operating system, complete the
following steps:
a. Find the WWN for each logical drive access. In this case, Logical Drive Access 1 is associated with
c166 and has the WWN of f56d0.
/dev/rdsk/c166t3d7 [Storage Subsystem DS4000, Logical Drive Access, LUN 31,
Logical Drive WWN <600a0b80000f56d00000001b00000000>]
Logical Drive Access 2 is associated with c172 and has the WWN of f5d6c.
/dev/rdsk/c172t3d7 [Storage Subsystem DS4000, Logical Drive Access, LUN 31,
Logical Drive WWN <600a0b80000f5d6c0000002200000000>]
b. Identify the preferred device path name for the attached storage device by matching the logical
drive WWN to a logical drive access WWN. In this case, the WWN for LUN 0 is associated with
controller c166 and c172. Therefore, the preferred path for LUN 0 is /dev/rdsk/c166t0d0, which is
controller c166.
/dev/rdsk/c166t0d0 [Storage Subsystem DS4000, Logical Drive
Accounting, LUN 0, Logical Drive WWN <600a0b80000f56d00000001e3eaead2b>]
c. To keep a record for future reference, enter this path information for LUN 0 into a matrix (similar
to the one in Table 24).
Table 24. Sample record of logical drive preferred and alternate paths

LUN          Preferred path         Alternate path
Accounting   /dev/rdsk/c166t0d0     /dev/rdsk/c172t0d0
HR           /dev/rdsk/c172t0d1     /dev/rdsk/c166t0d1
Finance      /dev/rdsk/c172t0d2     /dev/rdsk/c166t0d2
Purchasing   /dev/rdsk/c172t0d3     /dev/rdsk/c166t0d3
108
IBM System Storage DS Storage Manager Version 10: Installation and Host Support Guide
Table 24. Sample record of logical drive preferred and alternate paths (continued)
LUN
Preferred path
Alternate path
Development
/dev/rdsk/c166t0d4
/dev/rdsk/c172t0d4
d. Repeat step 3.a through step 3.c for each logical drive that is seen by the operating system.
The system confirms the creation of the new physical logical drive.
2. Create arrays.
Note: For more information about how to create arrays, see the HP-UX documentation or manpages.
a. Make a directory for the array by typing the following commands. This directory must be in the
/dev directory.
#cd /dev
#mkdir /dev/vg1
b. Create the group special file in the /dev directory for the array by typing the following command:
#mknod /dev/vg1/group c 64 0x010000
c. Create an array and define physical logical drive names (primary link) for the attached storage
device by typing the following command:
#vgcreate /dev/vg1 /dev/dsk/c166t0d0
d. Define the secondary path name (alternate path) for the attached-storage device by typing the
following command:
#vgextend vg1 /dev/dsk/c172t0d0
Note: You can also use the vgextend command to add storage devices to an existing array. Add
the primary path first, and then add the alternate path, as shown in the following example.
1) Add the primary path for LUN 1.
#vgextend vg1 /dev/dsk/c172t0d1
2) Add the alternate path for LUN 1.
#vgextend vg1 /dev/dsk/c166t0d1
3. Create logical drives. For more information, see the HP-UX documentation. (An example follows step 6.)
4. Create file systems for the logical drives.
5. Repeat step 1 through step 4 to create additional arrays. For more information, see the HP-UX
documentation.
6. Verify the primary (preferred) and secondary (alternate) paths for each device by typing the following
command:
#vgdisplay -v vgname
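As an illustration of steps 3 and 4, a logical drive and a file system might be created on the new array with commands such as the following; the size, logical drive name, and mount point are examples only:
#lvcreate -L 4096 -n lvol1 /dev/vg1
#newfs -F vxfs /dev/vg1/rlvol1
#mkdir /mnt/accounting
#mount /dev/vg1/lvol1 /mnt/accounting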
You must configure any application that accesses a device directly to use the new device names whenever the MPxIO configuration is enabled or disabled.
In addition, the /etc/vfstab file and the dump configuration also contain references to device names.
When you use the stmsboot command to enable or disable MPxIO, as described in the following sections,
/etc/vfstab and the dump configuration are automatically updated with the new device names.
Acquiring the latest MPxIO driver version: The method of acquiring MPxIO depends upon which
version of Solaris you have installed:
Solaris 10
MPxIO is integrated within the Solaris 10 operating system and does not have to be installed
separately. Use Solaris 10 patches to keep MPxIO up to date. The patches are available at
the Sun Technical Support website http://sunsolve.sun.com.
Note: You must install the regular kernel jumbo patch, because there are dependencies between
the various patches that make up the driver stack.
Solaris 8 and 9
Because MPxIO is not included with Solaris 8 and 9, you must download the required SAN suite
(Sun StorEdge SAN Foundation Suite) from the Sun Technical Support website
http://sunsolve.sun.com. On this webpage, click SAN 4.4 release Software/Firmware Upgrades
& Documentation.
Note: Use the install_it.ksh script that is provided to install the software.
Enabling the MPxIO failover driver: This section describes how to enable MPxIO by using the
stmsboot command. In addition to enabling MPxIO, this command also updates the device names in the
/etc/vfstab file and the dump configuration files during the next reboot.
Note: In Solaris 10, the stmsboot command is used to enable or disable MPxIO on all devices.
Before you begin:
1. Install the Solaris operating system, and the latest patches.
2. Make sure that the Solaris host type was selected when the host was defined.
Enabling MPxIO on Solaris 8 and 9
1. Install the latest version of Sun StorEdge SAN Foundation Suite and required patches, using the Sun
StorEdge install_it script. For more information, see the Sun StorEdge SAN Foundation Suite x.xx
Installation Guide (where x.xx is the version of the StorEdge software).
2. Edit the /kernel/drv/scsi_vhci.conf configuration file to make sure that the VID/PID is not
specified in this file. Also, make sure that the following entries are in the file:
mpxio-disable=no;
load-balance=none;
auto-failback=enable;
Note: In a cluster environment where logical drives (LUNs) are shared between multiple Sun servers,
you might have to set the auto-failback parameter to disable to prevent the following phenomenon,
which can occur when one of the servers has a failed path to one of the shared LUNs.
If a host in a cluster server configuration loses a physical path to a storage subsystem controller,
LUNs that are mapped to the cluster group can periodically failover and then failback between cluster
nodes until the failed path is restored. This behavior is the result of the automatic logical drive
failback feature of the multipath driver. The cluster node with a failed path to a storage subsystem
controller issues a failover command for all LUNs that were mapped to the cluster group to the
controller that it can access. After a programmed interval, the nodes that did not have a failed path
issue a failback command for the LUNs because they can access the LUNs on both controllers. The
cluster node with the failed path is unable to access certain LUNs. This cluster node then issues a
failover command for all LUNs, which repeats the LUN failover-failback cycle.
For supported cluster services, see the System Storage Interoperation Center at www.ibm.com/
systems/support/storage/config/ssic
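Before you restart the server, you can confirm the entries with a quick check such as the following example:
# egrep "mpxio-disable|load-balance|auto-failback" /kernel/drv/scsi_vhci.conf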
3. If you made any changes to the /kernel/drv/scsi_vhci.conf file in the previous step, save the file
and use the following command to restart the server:
# shutdown -g0 -y -i6
Enabling MPxIO on Solaris 10
1. Run the stmsboot -e command, and accept the default to Reboot the system now.
Note: During the reboot, /etc/vfstab and the dump configuration are updated to reflect the device name changes.
2. After the reboot, configure your applications to use new device names, as explained in Device name change considerations for MPxIO on page 111.
3. If necessary, edit the /kernel/drv/fp.conf configuration file to verify that the following parameter is
set as follows:
mpxio-disable=no;
Edit the /kernel/drv/scsi_vhci.conf configuration file to verify that the following parameters are set
as follows:
load-balance=none;
auto-failback=enable;
4. If you made any changes to configuration files in the previous step, save the file, and use the
following command to restart the server:
# shutdown -g0 -y -i6
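On Solaris 10, you can optionally list the device name mapping after the reboot; the stmsboot -L option reports the mapping between the non-STMS and STMS device names:
# stmsboot -L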
The sample output in step 1 lists each attachment point with four columns: Type (for example, fc-fabric, fc-private, scsi-bus, disk, usb-kbd, or usb-mouse), Receptacle (connected or empty), Occupant (configured or unconfigured), and Condition (ok, unknown, or unusable).
2. You can also display information about the attachment points on a server. In the following example,
c0 represents a fabric-connected host port, and c1 represents a private, loop-connected host port. Use
the cfgadm command to manage the device configuration on fabric-connected host ports. By default,
the device configuration on private, loop-connected host ports is managed by the Solaris host.
Note: The cfgadm -l command displays information about Fibre Channel host ports. Also use the
cfgadm -al command to display information about Fibre Channel devices. The lines that include a
port World Wide Name (WWN) in the Ap_Id field associated with c0 represent a fabric device. Use
the cfgadm configure and cfgadm unconfigure commands to manage those devices and make them
available to Solaris hosts.
# cfgadm -l
Ap_Id    Type         Receptacle   Occupant       Condition
c0       fc-fabric    connected    unconfigured   unknown
c1       fc-private   connected    configured     unknown
The Ap_ID parameter specifies the attachment point ID of the configured Fibre Channel devices. This
ID can be the controller number and WWN of a device (for example, c3::50020f230000591d).
See the output example in step 1. Also, see the cfgadm manpage for an explanation of attachment
points.
Note: An Ap_Id with type fc-private cannot be unconfigured. Only the type fc-fabric can be
configured and unconfigured.
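For example, to configure the devices behind the fabric-connected host port c0 that is shown in the previous output, you might use a command such as the following:
# cfgadm -c configure c0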
4. Use the luxadm probe command to list all mapped LUNs:
# luxadm probe
No Network Array enclosures found in /dev/es
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006ADE452CBC62d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006ADF452CBC6Ed0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AE0452CBC7Ad0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AE1452CBC88d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AE2452CBC94d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AE3452CBCA0d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AE4452CBCACd0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AE5452CBCB8d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AE6452CBCC4d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AE7452CBCD2d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AE8452CBCDEd0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AE9452CBCEAd0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AEA452CBCF8d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AEB452CBD04d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AEC452CBD10d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006AED452CBD1Ed0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006B2A452CC65Cd0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006B2B452CC666d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006B2C452CC670d0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006B2D452CC67Ad0s2
Node WWN:200400a0b8111218 Device Type:Disk device
Logical Path:/dev/rdsk/c0t600A0B800011121800006B31452CC6A0d0s2
5. You can use the luxadm display logical path command to list more details about each mapped LUN,
including the number of paths to each LUN. The following example uses a logical path from the
previous example.
# luxadm display /dev/rdsk/c0t600A0B800011121800006B31452CC6A0d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c0t600A0B800011121800006B31452CC6A0d0s2
  Vendor:                 IBM
  Product ID:             1742-900
  Revision:               0914
  Serial Num:             1T51207691
  Uninitialized capacity: 1024.000 MBytes
  Write Cache:            Enabled
  Read Cache:             Enabled
  Minimum prefetch:       0x0
  Maximum prefetch:       0x0
  Device Type:            Disk device
  Path(s):
  /dev/rdsk/c0t600A0B800011121800006B31452CC6A0d0s2
  /devices/scsi_vhci/ssd@g600a0b800011121800006b31452cc6a0:c,raw
   Controller                 /devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0,1/fp@0,0
    Device Address            201400a0b8111218,1e
    Host controller port WWN  210100e08ba0fca0
    Class                     secondary
    State                     STANDBY
   Controller                 /devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0,1/fp@0,0
    Device Address            201500a0b8111218,1e
    Host controller port WWN  210100e08ba0fca0
    Class                     primary
    State                     ONLINE
   Controller                 /devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0
    Device Address            201400a0b8111218,1e
    Host controller port WWN  210000e08b80fca0
    Class                     secondary
    State                     STANDBY
   Controller                 /devices/pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0
    Device Address            201500a0b8111218,1e
    Host controller port WWN  210000e08b80fca0
    Class                     primary
    State                     ONLINE
#
Disabling the MPxIO multipath driver: To disable the MPxIO multipath driver, take the applicable
action for your version of Solaris:
v For Solaris 10, unconfigure all devices by using the cfgadm -c unconfigure Ap_Id command. Then, run the stmsboot -d command, and accept the default to Reboot the system now.
v For Solaris 8 and 9, unconfigure all devices by using the cfgadm -c unconfigure Ap_Id command, and edit the /kernel/drv/scsi_vhci.conf configuration file to set the value of the mpxio-disable parameter to yes. Restart the server.
To learn how to revert the patches or use the StorEdge software, see the Sun StorEdge SAN Foundation
Installation Software Guide at http://docs.sun.com.
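For example, on Solaris 10 the sequence might look like the following, where the Ap_Id value comes from the cfgadm -al output on your system (the value shown here is taken from the earlier example):
# cfgadm -c unconfigure c3::50020f230000591d
# stmsboot -d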
Installing the RDAC failover driver on Solaris and modifying the configuration files
This section describes how to install RDAC on a Solaris host.
Before you begin:
1. RDAC is supported only on Solaris 8 and 9.
2. Because you cannot run both RDAC and MPxIO, make sure that MPxIO is disabled. Check the
configuration files (/kernel/drv/scsi_vhci.conf, /kernel/drv/fp.conf, or both) and make sure that
the value of the mpxio-disable parameter is set to yes.
3. You must install an HBA driver package before you install RDAC. If you have a SAN-attached
configuration, you must also modify the HBA configuration file before you install RDAC. If you fail
to follow the procedures in this order, problems can occur.
4. If you modify the failover settings in the HBA configuration file after you install RDAC, you must
remove the RDAC from the host.
Important: In some configurations, a patch is required for RDAC to function correctly. Before you begin
the RDAC installation, check the Storage Manager readme file for Solaris to find out whether the patch is
required for your specific configuration. You can also find the latest RDAC versions and other important
information in the readme file. For more information about how to find the readme file on the web, see
Finding Storage Manager software, controller firmware, and readme files on page xiii.
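The package is installed with the standard Solaris pkgadd command; a typical invocation looks like the following example:
# pkgadd -d path/filename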
where path/filename is the directory path and name of the package that you want to install.
The installation process begins.
Information about the packages that can be installed in the specified directory is displayed on the
command line, as in the following example:
The following packages are available:
1 RDAC
3. Type the value of the package that you are installing and press Enter. The installation process begins.
4. The software automatically checks for package conflicts. If any conflicts are detected, a message is
displayed that indicates that some files are already installed and are in use by another package. The
following prompt is displayed:
Do you want to install these conflicting files [y, n, ?]
5. Type y and press Enter. The installation process continues. When the RDAC package is successfully
installed, the following message is displayed:
Installation of <RDAC> was successful.
6. Make sure that the variables in the configuration files for the JNI adapter cards have been set to the
correct values.
7. Type the following command to restart the Solaris host:
# shutdown -g0 -y -i6
where RDAC_driver_pkg_name is the name of the RDAC driver package that you want to remove.
2. Type the following command to verify that the RDAC drive package is removed:
# pkginfo RDAC_driver_pkg_name
where RDAC_driver_pkg_name is the name of the RDAC driver package that you removed.
3. Type the following command to restart the Solaris host:
# shutdown -g0 -y -i6
4. Type one of the following commands to modify the persistent bindings in the jnic146x.conf file or to edit the sd.conf file:
# vi /kernel/drv/jnic146x.conf
# vi /kernel/drv/sd.conf
5. After you have finished making changes, type the following command to save the changes:
# :wq
where RDAC_driver_pkg_name is the name of the RDAC driver package that you want to install.
7. Type the following command to verify package installation:
# pkginfo RDAC_driver_pkg_name
where RDAC_driver_pkg_name is the name of the RDAC driver package that you installed.
8. Type the following command to restart the Solaris host:
# shutdown -g0 -y -i6
Note: You must restart the host after you modify the jnic146x.conf file, because the jnic146x.conf
driver is read only during the boot process. Failure to restart the host might cause some devices to be
inaccessible.
System requirements
Make sure that your server meets the following requirements for installing Veritas DMP:
v Solaris operating system
v Veritas Volume Manager 4.0, 4.1, 5.0, or 5.1
v Array Support Library (ASL), which enables Solaris to recognize the DS3000, DS4000, or DS5000
machine type
Note: The ASL might be a separate file that is available from Symantec, or it might be integrated with
Volume Manager, depending on the version of Storage Foundation.
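If the ASL is delivered as a separate package, you can usually confirm that Volume Manager recognizes the installed support libraries by listing them, for example:
# vxddladm listsupport all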
#
# Copyright (c) 1992, Sun Microsystems, Inc.
#
# ident "@(#)sd.conf 1.9 98/01/11 SMI"
name="sd" class="scsi" class_prop="atapi"
target=0 lun=0;
name="sd" class="scsi" class_prop="atapi"
target=1 lun=0;
name="sd" class="scsi" class_prop="atapi"
target=2 lun=0;
name="sd" class="scsi" class_prop="atapi"
target=3 lun=0;
b. Use the vi Editor to add target and LUN definitions. In the following example, it is assumed that
the Solaris host is attached to one storage subsystem with three LUNs mapped to the subsystem
storage partition. In addition, the access LUN must be mapped to the partition.
#
# Copyright (c) 1992, Sun Microsystems, Inc.
#
# ident "@(#)sd.conf 1.9 98/01/11 SMI"
name="sd" class="scsi" class_prop="atapi"
target=0 lun=0;
name="sd" class="scsi" class_prop="atapi"
target=1 lun=0;
name="sd" class="scsi" class_prop="atapi"
target=2 lun=0;
name="sd" class="scsi" class_prop="atapi"
target=3 lun=0;
name="sd"
name="sd"
name="sd"
name="sd"
name="sd"
name="sd"
name="sd"
name="sd"
class="scsi"
class="scsi"
class="scsi"
class="scsi"
class="scsi"
class="scsi"
class="scsi"
class="scsi"
target=0
target=0
target=0
target=0
target=1
target=1
target=1
target=1
lun=1;
lun=2;
lun=3;
lun=31;
lun=1;
lun=2;
lun=3;
lun=31;
c. Type the following command to save the new entries in the /kernel/drv/sd.conf file:
# :wq
3. Type the following command to verify that RDAC is not installed on the host:
# pkginfo -l RDAC
Important: Before you install Veritas Storage Foundation Solaris with Veritas Volume Manager and
DMP, make sure that you have the required license keys. This document does not describe how to
install the Veritas product. For more information, see the Symantec documentation
at http://www.symantec.com/business/support/.
8. Type the following command to restart the Solaris host:
# shutdown -g0 -y -i6
See the Symantec Veritas documentation for information about how to complete the following tasks:
v Start the Veritas Volume Manager
v Set up disk groups
v Create volumes
v Create file systems
v Mount file systems
Identifying devices
After you have installed the multipath driver or verified that the multipath driver is already installed,
use the SMdevices utility to identify a storage subsystem logical drive that is associated with an
operating-system device.
dar
The disk array router (dar) device represents the entire array, including the current and the deferred paths to all LUNs (hdisks).
dac
The disk array controller (dac) devices represent a controller within the storage subsystem. There are two dacs in the storage subsystem. With MPIO, the dac device is shown only if a UTM device is assigned.
utm
The universal transport mechanism (utm) device is used only with in-band management configurations, as a communication channel between the SMagent and the storage subsystem.
Note: The utm device might be listed in command output, regardless of whether you have an
in-band management configuration. For example, a utm might be listed when you run the lsattr
command on a dac.
Note: In a SAN configuration, the devices do not log in to the SAN switch until you run the cfgmgr
command.
3. Type the following command:
# lsdev -Cc disk
4. Examine the output of the lsdev -Cc disk command to make sure that the RDAC software recognizes
the storage subsystem logical drives, as shown in the following list:
v Each DS4200, DS4300, DS4400, DS4500, and DS4700 logical drive is recognized as the Disk Array Device that corresponds to its machine type.
v Each DS4800 logical drive is recognized as an 1815 DS4800 Disk Array Device.
Important: You might discover that the configuration process has created two dacs and two dars on
one storage subsystem. This situation can occur when the host is using a partition that does not have
any associated LUNs. When that happens, the system cannot associate the two dacs under the correct
dar. If there are no LUNs, the system generates two dacs as expected, but it also generates two dars.
The following list shows the most common causes:
v You create a partition and attach the LUNs to it, but you do not add the host ports to the partition.
Therefore, the host ports remain in the default partition.
v You replace one or more HBAs but do not update the worldwide name (WWN) of the partition for
the HBA.
v You switch the storage subsystem from one set of HBAs to another as part of a reconfiguration and
do not update the WWNs.
In each of these cases, resolve the problem, and run cfgmgr again. The system removes the extra dar
or moves it from the Available state to the Defined state. (If the system moves the dar into the
Defined state, you can delete it.)
Note: When you perform the initial device identification, the Object Data Manager (ODM) attributes
of each device are updated with default values. In most cases and for most configurations, the default
values are satisfactory. However, there are some values that can be modified for maximum
performance and availability. See Appendix D, Viewing and setting AIX Object Data Manager (ODM)
attributes, on page 261 for information about using the lsattr command to view attribute settings on
an AIX system.
Configuring devices
To maximize your storage subsystem performance, you can set the queue depth for your hdisks, disable cache mirroring, use dynamic capacity expansion and dynamic logical drive expansion (DVE), and check the size of your LUNs.
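For example, on AIX the queue depth is an ODM attribute of each hdisk and can typically be changed with the chdev command while the hdisk is not in use; hdisk3 and the value 16 are examples only:
# chdev -l hdisk3 -a queue_depth=16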
3. Press Enter. The new logical drives are available through the Disk Administrator.
where logical_drive_letter is the operating-system drive letter that was assigned to the disk
partition on the logical drive.
3. Press Enter.
Windows 2000
To stop and restart the host-agent software on Windows 2000, complete the following steps:
1. Click Start > Programs > Administrative Tools > Services. The Services window opens.
2. Right-click IBM DS Storage Manager Agent.
3. Click Restart. The Storage Manager Agent stops and then starts again.
4. Close the Services window.
3. Type the amount by which you want to increase the logical drive, and click OK. A clock icon is
displayed on every logical drive within the array. You must wait for the process to be completed
before you can begin any host intervention.
Note: If the storage subsystem is busy, the process might take several hours.
4. Type the following commands to rescan the logical drive on the host:
# cd /sys/block/sdXX/device
# echo 1 > rescan
Enabling the RDAC module on RHEL 5.3 for Storage Foundation 5.0
To enable the RDAC module on RHEL 5.3 for Storage Foundation 5.0, complete the following steps:
1. Disable all of the storage subsystem storage ports so that the HBA cannot detect them.
2. Install Storage Foundation.
For example:
mkinitrd /boot/my_image 2.6.18-118.el5 --preload=scsi_dh_rdac
Unloading the RDAC module on RHEL 5.3 for Storage Foundation 5.0
To unload the module after the device probe and attach process, complete the following steps during the
system-boot process:
1. Create a /etc/r3.d script, as in the following example:
# vi /etc/init.d/rm_rdac
#!/bin/bash
## This script detaches the scsi_dh_rdac module from each LUN.
## It depends on the lsscsi command; lsscsi must be available for this
## script to execute successfully.
echo "detaching the scsi_dh_rdac module"
for i in /sys/block/sd*/device/dh_state
do
    if [[ "`cat $i`" = "rdac" ]]
    then
        echo detach > $i
    fi
done
modprobe -r scsi_dh_rdac
echo "detached successfully"
2. Insert the script at the correct location under /etc/rc3.d, before the VCS VxFen Driver startup script
(the VxFen Driver default start script is /etc/rc2.d/S68vxfen). If the system is not running VCS,
insert the script after the /etc/rc3.d/S50vxvm-recover script.
# ln -s /etc/init.d/rm_rdac /etc/rc.d/rc3.d/S57rm_rdac
# ln -s /etc/init.d/rm_rdac /etc/rc.d/rc5.d/S57rm_rdac
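The script must also be executable for the rc links to run it, for example:
# chmod +x /etc/init.d/rm_rdac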
2. Multiply this number by 512 (bytes) to calculate the size of the LUN, as shown in the following
example:
8388608 * 512 = 4294967296 (~ 4GB)
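The block count that is used in this calculation is typically read from sysfs; sdb is an example device name:
# cat /sys/block/sdb/size
8388608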
Note: DS5000 storage subsystems are not ALUA-compliant. DS5000 subsystems have Target Port Group
Support (TPGS), which is a similar SCSI protocol that directs I/O to preferred ports. For HP-UX 11.31, the
default HP-UX host type must be changed to the TPGS host type HPXTPGS.
To turn on TPGS support and change the host type, complete the following steps:
1. Change the operating-system type for the DS5000 storage subsystem from HPUX to HPXTPGS.
2. Change the load balancing to Default, round-robin.
3. Verify that the changes are correct. The following example shows one of the LUNs that has the correct four active paths and four standby paths.
# scsimgr get_info all_lpt -D /dev/rdisk/asm1ai|grep -e STATUS -e Open close state
The output repeats the STATUS INFORMATION and Open close state lines for each of the eight LUN paths; four of the paths are active and four are standby.
4. Use the SAN Fibre Channel switch-monitoring tools to verify that the I/O loads are distributed
properly.
Any deviations from these notes and procedures might cause a loss of data availability.
Review the following list of issues and restrictions before you perform a hot-swap operation for AIX.
v The autorecovery attribute of the dar must be set to no. Autorecovery is a dynamically set feature that
can be turned back on after the hot-swap procedure is complete. Failure to disable autorecovery mode
during a hot-swap procedure can cause loss of access to data.
v Do not redistribute logical drives to the preferred path until you verify that the HBA replacement succeeded and that the subsequent configuration was performed correctly. Redistributing the logical drives before you verify a successful hot swap and configuration can cause a loss of access to data.
v The only supported hot-swap scenario involves the replacement of a defective HBA with the same
HBA model, and in the same PCI slot. Do not insert the defective HBA into any other system, even if
the HBA is found not to be defective. Always return the HBA to IBM.
Important: As of the date of this document, no other variations of replacement scenarios are
supported.
v Hot swap is not supported in single-HBA configurations.
Preparing for the HBA hot swap on AIX:
To prepare for the hot swap, complete the following procedures:
Collecting system data
To collect data from the system, complete the following steps:
1. Type the following command:
# lsdev -C |grep fcs
The output lists the FC adapters (fcs devices) that are in the Available state at location codes 17-08 and 1A-08, together with the associated 1815 devices at location codes 17-08-02 and 1A-08-02.
To view the vital product data (VPD) of an FC adapter, type the lscfg command:
# lscfg -vpl fcsX
where X is the number of the fcs device. The output looks similar to the following example.
lscfg -vpl fcs0
fcs0             U0.1-P1-I1/Q1           FC Adapter
Part Number.................09P5079
EC Level....................A
Serial Number...............1C21908D10
Manufacturer................001C
Feature Code/Marketing ID...2765
FRU Number..................09P5080
Network Address.............10000000C92D2981
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C92D2981
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P1-I1/Q1
PLATFORM SPECIFIC
Name: Fibre Channel
Model: LP9002
Node: Fibre Channel@1
Device Type: fcp
Physical Location: U0.1-P1-I1/Q1
5. Type the following command to list the attributes of each dar found on the system:
# lsattr -El darX
where X is the number of the dar. The output looks similar to the following example.
lsattr -El dar0
act_controller  dac0,dac2  Active Controllers                           False
all_controller  dac0,dac2  Available Controllers                        False
held_in_reset   none       Held-in-reset controller                     True
load_balancing  no         Dynamic Load Balancing                       True
autorecovery    no         Autorecover after failure is corrected       True
hlthchk_freq    600        Health check frequency in seconds            True
aen_freq        600        Polled AEN frequency in seconds              True
balance_freq    600        Dynamic Load Balancing frequency in seconds  True
fast_write_ok   yes        Fast Write available                         False
cache_size      1024       Cache size for both controllers              False
switch_retries  5          Number of times to retry failed switches     True
2. Check the lsattr command output that you collected in step 5 of the procedure Collecting system
data on page 131. In the lsattr output, identify the dars that list the dacs that you identified in step
1 of this procedure.
3. Type the following command for each dar that you identified in step 2:
# lsattr -El darX |grep autorecovery
where X is the number of the dar. The output looks similar to the following example.
# lsattr -El dar0 |grep autorecovery
autorecovery  no  Autorecover after failure is corrected  True
4. In the lsattr command output, verify that the second word is no. If the second word is yes,
autorecovery is currently enabled.
Important: For each dar on which autorecovery is enabled, you must disable it by setting the
autorecovery ODM attribute to no. See Using the lsattr command to view ODM attributes on page
265 to learn how to change attribute settings. Do not proceed with the hot-swap procedure until you
complete this step and verify that autorecovery is disabled.
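For example, assuming the dar0 device from the earlier output, autorecovery can typically be disabled with the chdev command:
# chdev -l dar0 -a autorecovery=no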
Replacing the hot-swap HBA:
Attention: If you do not follow this procedure as documented here, data availability might be lost. You
must read and understand all of the steps in this section before you begin the HBA hot-swap procedure.
To replace the hot-swap HBA, complete the following steps:
1. Type the following command to put the HBA that you want to replace into the Defined state:
# rmdev -Rl fcsX
where X is the number of the HBA. The output is similar to the following example.
rmdev -Rl fcs0
fcnet0 Defined
dac0 Defined
fscsi0 Defined
fcs0 Defined
For Linux operating systems, type the following command to identify the PCI hotplug slot:
# drslot_chrp_pci -i -s slot-name
where slot-name is the name of the slot for the HBA that you are replacing, for example,
U7879.001.DQD014E-P1-C3.
The LED at slot slot-name flashes, and the following message is displayed.
The visual indicator for the specified
PCI slot has been set to the identify
state. Press Enter to continue or
enter x to exit.
2. In the AIX smit menu, initiate the process that is required for the HBA hot swap by clicking smit >
Devices > PC Hot Plug Manager > Replace/Remove a PCI Hot Plug Adapter.
3. In the Replace/Remove a PCI Hot Plug Adapter window, select the targeted HBA. A window opens
and displays instructions for replacing the HBA.
4. Follow the smit instructions to replace the HBA.
Note: Do not reinstall the Fibre Channel cable at this time.
5. If the steps in this procedure are completed successfully up to this point, you get the following
results:
v The defective HBA is removed from the system.
v The replacement FC HBA is turned on.
Note: The new HBA is placed in the default group. If hdisks are assigned to the default group, the
HBA generates a new dar and dac, which causes a split. Issue the rmdev command to remove the
new dar and dac after you map the WWPN.
8. Type the following command to verify that the fcs device is now available:
# lsdev -C |grep fcs
9. Type the following command to verify or upgrade the firmware on the replacement HBA to the
correct level:
# lscfg -vpl fcsX
After you complete this procedure, continue to Mapping the new WWPN to the storage subsystem for
AIX and Linux on page 138.
Verify that the following PCI hotplug tools are installed:
v lsslot
v drslot_chrp_pci
If these tools are not installed, complete the following steps to install them:
1. Make sure that rdist-6.1.5-792.1 and compat-2004.7.1-1.2 are installed from the SLES 9 media.
2. To find the PCI Hotplug Tools rpm files, go to http://www14.software.ibm.com/webapp/set2/sas/f/
lopdiags/.
3. On the website, select the applicable link for your operating system. Download and install the
following rpm files:
v librtas-1.3.1-0.ppc64.rpm
v rpa-pci-hotplug-1.0-29.ppc64.rpm
4. Type the following command to install each rpm file:
# rpm -Uvh <filename>.rpm
If the PCI core is installed, the output looks similar to the following example:
elm17c224:/usr/sbin # ls -l /sys/bus/pci/slots
total 0
drwxr-xr-x 8 root root 0 Sep 6 04:29 .
drwxr-xr-x 5 root root 0 Sep 6 04:29 ..
drwxr-xr-x 2 root root 0 Sep 6 04:29 0000:00:02.0
drwxr-xr-x 2 root root 0 Sep 6 04:29 0000:00:02.4
drwxr-xr-x 2 root root 0 Sep 6 04:29 0000:00:02.6
drwxr-xr-x 2 root root 0 Sep 6 04:29 0001:00:02.0
drwxr-xr-x 2 root root 0 Sep 6 04:29 0001:00:02.6
drwxr-xr-x 2 root root 0 Sep 6 04:29 control
If the /sys/bus/pci/slots directory does not exist, the PCI core is not installed.
Verifying that the rpaphp driver is installed
The rpaphp driver must be installed on the system. Type the following command to verify that it is
installed:
ls -l /sys/bus/pci/slots/*
If the rpaphp driver is installed, the output looks similar to the following example:
elm17c224:/usr/sbin # ls -l /sys/bus/pci/slots/*
/sys/bus/pci/slots/0000:00:02.0:
total 0
drwxr-xr-x 2 root root    0 Sep 6 04:29 .
drwxr-xr-x 8 root root    0 Sep 6 04:29 ..
-r--r--r-- 1 root root 4096 Sep 6 04:29 adapter
-rw-r--r-- 1 root root 4096 Sep 6 04:29 attention
-r--r--r-- 1 root root 4096 Sep 6 04:29 max_bus_speed
-r--r--r-- 1 root root 4096 Sep 6 04:29 phy_location
-rw-r--r-- 1 root root 4096 Sep 6 04:29 power
Using the lsslot tool to list slot information: Before you replace an HBA by using PCI hotplug, you can
use the lsslot tool to list information about the I/O slots. This section describes how to use lsslot and
provides examples. Use the lsslot tool according to the following guidelines.
Syntax for the lsslot tool
The lsslot syntax is shown in the following example.
lsslot [ -c slot | -c pci [ -a | -o]] [ -s drc-name ] [ -F delimiter ]
-c pci -a
    Displays all available (empty) PCI hotplug slots
-c pci -o
    Displays all occupied PCI hotplug slots
-F delimiter
    Uses delimiter to delimit columns
For example, the lsslot -c pci command lists all of the PCI hotplug slots. In the sample output, six PCI-X capable, 64 bit, 133MHz slots are listed; three are empty, and three are occupied by the devices 0002:58:01.0, 0001:40:01.0, and 0001:58:01.0.
Type the following command to list all empty PCI hotplug slots:
# lsslot -c pci -a
The output lists the empty slots; the Device(s) column for each shows Empty.
Type the following command to list all occupied PCI hotplug slots:
# lsslot -c pci -o
The output lists the occupied slots; the Device(s) column shows the devices 0002:58:01.0, 0001:40:01.0, and 0001:58:01.0.
To see detailed information about a PCI hotplug device, complete the following steps:
1. Select a device number from the output of # lsslot -c pci -o, as seen in the preceding output
example.
2. Type the following command to show detailed information about the device:
# lspci | grep xxxx:yy:zz.t
where xxxx:yy:zz.t is the number of the PCI hotplug device. The resulting output looks similar to
the following example.
0001:40:01.0 Ethernet controller: Intel Corp. 82545EM Gigabit
Ethernet Controller (Copper) (rev 01)
# drslot_chrp_pci -i -s slot-name
where slot-name is the name of the slot for the HBA that you are replacing, for example,
U7879.001.DQD014E-P1-C3.
The LED at slot slot-name begins flashing, and the following message is displayed.
The visual indicator for the specified
PCI slot has been set to the identify
state. Press Enter to continue or
enter x to exit.
# drslot_chrp_pci -r -s slot-name
d. Press Enter.
e. Physically remove the HBA from the slot.
f. Type the following command to verify that the slot is empty:
# lsslot -c pci -s slot-name
If the slot is empty, the resulting output looks similar to the following example.
# Slot                    Description                          Device(s)
U7879.001.DQD014E-P1-C3   PCI-X capable, 64 bit, 133MHz slot   Empty
3. To hot plug the HBA into the slot, complete the following steps:
a. Type the following command:
# drslot_chrp_pci -a -s slot-name
# lsslot -c pci -s slot-name
If the slot is not empty, the resulting output looks similar to the following example.
# Slot                    Description                          Device(s)
U7879.001.DQD014E-P1-C3   PCI-X capable, 64 bit, 133MHz slot   0001:40:01.0
Mapping the new WWPN to the storage subsystem for AIX and Linux
For each storage subsystem that is affected by the hot swap, complete the following steps to map the
worldwide port name (WWPN) of the HBA to the storage subsystem:
1. Start the Storage Manager and open the Subsystem Management window.
2. On the Mappings tab of the Subsystem Management window, click Mappings > Show All Host Port
Information. The Host Port Information window opens.
3. Find the entry in the Host Port Information window that matches the WWPN of the defective HBA
(the HBA that you removed), and record the alias name. Then, close the Host Port Information
window.
4. On the Mappings tab, select the alias name of the HBA host port that you just recorded.
5. Click Mappings > Replace Host Port. The Replace Host Port window opens.
6. In the Replace Host Port window, verify that the current HBA Host Port Identifier, which is listed at
the top of the window, matches the WWPN of the HBA that you removed.
7. Type the 16-digit WWPN, without the colon (:), of the replacement HBA in the New Identifier field,
and click OK.
After you complete these steps, continue to Completing the HBA hot-swap procedure.
4. If an HBA is attached to a Fibre Channel switch and the zoning is based on WWPN, modify the
zoning information to replace the WWPN of the former HBA with the WWPN of the replacement
HBA.
5. Run the cfgmgr command to enable the HBA to register its WWPN in the Fibre Channel switch.
6. Type the following commands to verify that the replaced fcsX device and its associated dacs are
placed in the Available state:
# lsdev -C |grep fcs
# lsdev -C |grep dac
7. Type the following command to verify that no additional dars have been created and that the
expected dars are in the Available state.
Note: With MPIO, the only time you have a dac device is when the UTM LUN is assigned.
# lsdev -C |grep dar
Attention: The presence of additional dars in the lsdev output indicates a configuration problem. If
this occurs, do not continue this procedure until you correct the problem. Otherwise, data
availability might be lost.
8. For each dar, type the following command to verify that affected dar attributes indicate the presence
of two active dacs:
# lsattr -El darX|grep act_controller
act_controller  dac0,dac2  Active Controllers  False
Attention: If two dacs are not reported for each affected dar, data availability might be lost. Do not
continue this procedure if two dacs are not reported for each dar. Correct the problem before you
continue.
9. Redistribute volumes manually to preferred paths.
10. Verify that disks stay on the preferred path with one or both of the following methods:
Using the AIX system
Run the mpio_get_config -Av command, and verify that drives are on the expected path.
Using Storage Manager
In the Enterprise Management window, verify that the storage subsystems are in Optimal
state. If they are not in Optimal state, make sure that any drives that are part of the
subsystems that are involved with the hot-swap process are not listed in the Recovery Guru.
11. If necessary, enable autorecovery of the affected dars. See Appendix D, Viewing and setting AIX
Object Data Manager (ODM) attributes, on page 261 to learn how to change attribute settings.
The Fibre Channel HBA hot swap is now complete.
3. If RDAC is installed, type the following command to recognize the new HBA:
# mppBusRescan
Note: You can also use the smit disk command to enable or disable protection on hdisk#.
Note: You can use the chdev -l hdisk# -a DIF_protection=no command to disable T10 protection on
a disk.
5. Use the lsattr -El hdisk# command to check the current value of this attribute after you enable protection. Protection cannot be enabled on the disk if at least one path does not support protection. If this attribute has a value of "unsupported," it means one of the following things:
v Some or all paths to the disk do not support protection.
v The disk does not support protection.
For example, suppose that hdisk2 has three paths, through the adapters fcs0, fcs2, and fcs3, and that you want to enable protection on these adapters. To do this, complete the following steps:
1. Upgrade the firmware on all the fcs devices mentioned above. All of them must be 8Gb PCIe FC
adapters (feature code 5735 or 5273).
2. Un-configure the child devices (fscsi0, fscsi2 and fscsi3).
3. Enable protection on fcs0, fcs2 and fcs3 adapters using the chdev command (chdev -l fcs0 -a
DIF_enabled=yes).
4. Run the cfgmgr command so that all of the devices come into the Available state.
5. Use the chdev command on hdisk2 to enable or disable protection (chdev -l hdisk2 -a DIF_protection=yes). If the disk supports protection and all paths support protection, the attribute value is set to "yes". Otherwise, it is set to "unsupported".
Note: If the attribute value is set to "unsupported", check all paths (all fcs adapter attributes) and
check to see if the protection is enabled on the LUN when it is created on the DS5000 storage. In
some cases, the attribute on the fcs adapter may show "yes" but it may not be supported due to an
old 8Gb PCIe FC adapter (feature code 5735 or 5273) firmware that does not support T10 protection
(BlockGuard feature).
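The following command sequence summarizes the example; the device names are illustrative, and the adapter firmware upgrade in step 1 is not shown:
# rmdev -Rl fscsi0
# rmdev -Rl fscsi2
# rmdev -Rl fscsi3
# chdev -l fcs0 -a DIF_enabled=yes
# chdev -l fcs2 -a DIF_enabled=yes
# chdev -l fcs3 -a DIF_enabled=yes
# cfgmgr
# chdev -l hdisk2 -a DIF_protection=yes
# lsattr -El hdisk2 | grep DIF_protection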
v Installing and configuring the DS TKLM Proxy Code server on page 160
1. Modifying the DS TKLM Proxy Code server configuration file on page 161
2. Installing the DS TKLM Proxy Code on page 165
v Configuring disk encryption with FDE drives on page 166
1. Installing FDE drives on page 166
2. Enabling premium features on page 166
3. Securing a RAID array on page 175
4. Unlocking disk drives on page 181
5. Migrating storage subsystems (head-swap) with FDE drives on page 183
6. Erasing disk drives on page 187
7. Global hot-spare disk drives on page 190
8. Log files on page 191
v Frequently asked questions on page 191
Note: Not all IBM DS storage subsystems support FDE. See the documentation that came with your
storage subsystem for information about FDE compatibility.
alone. To ensure that security is never compromised, an encrypted version of the encryption key is stored
only on the disk drive. Because the disk encryption key never leaves the disk, you might not have to
periodically change the encryption key, the way a user might periodically change the operating-system
password.
5. Use Storage Manager to command the storage subsystem controller to request the security key from
the external key license manager, instead of generating a local security key.
6. Configure the external key license manager software to accept an external key request.
Important:
1. Tivoli Key Lifecycle Manager is the only external security key management software that is supported
on IBM DS storage subsystems.
2. External security key management requires controller firmware version 7.70.xx.xx or later.
3. Make sure that at least one non-FDE drive is installed in the storage subsystem when you use
external security key management. Otherwise, if the storage subsystem power is turned off and then
on again, the storage subsystem might require that you supply the security key from the saved file
manually to unlock the secured FDE drives and complete the boot process.
the security key that was provided by the storage subsystem are the same. If they are the same, data can
be read from and written to the security-enabled FDE drives.
Attention: The pass phrase is used only to protect the security key in the security key file. Anyone who
can access the Subsystem Management window can save a copy of the security key file with a new pass
phrase. Set a storage subsystem password for each storage subsystem that requires you to provide a
password when any configuration changes are made, including creating and changing the security key.
See Setting a storage subsystem management password on page 32 for instructions for setting the
storage subsystem password.
If you use local security key management, the security key file provides protection against a corrupted
security key or the failure of both controllers in the storage subsystem. The security key file is also
needed to unlock security-enabled FDE drives when they are moved from one storage subsystem to
another. In these cases, the security-enabled FDE drives remain locked until the drives are unlocked by
the security key that is stored in the security key file. To decrypt the security key in the security key file,
you must provide the same pass phrase that was entered when the security key file was generated. The
drive then determines whether its security key and the security key that was provided by the storage
subsystem are the same. If they are the same, data can be read from and written to the security-enabled
FDE drives.
If you use external security key management, the security key file provides protection in the following
situations:
1. If communication is lost to either the proxy server or the external key license servers when the
controller unlocks the secured FDE drives
2. If the secured FDE drives are moved to or from a storage subsystem that is not managed by the same
external key license manager
3. If drives must be unlocked after the power cycle of a storage subsystem configuration that has only
secured FDE drives and no unsecured FDE or non-FDE drives in the configuration
After the storage subsystem controller creates the security key, the RAID arrays can be changed from a
state of Security Capable to a state of Security Enabled. In the Security Enabled state, the FDE drives in the RAID array must be unlocked with the security key, after power to the drives is turned on, before the data that is stored on the drives can be accessed. Whenever power is applied to the drives in a RAID array, the drives are
all placed in Security Locked state. They are unlocked only during drive initialization with the storage
subsystem security key. The Security Unlocked state makes the drives accessible for the read and write
activities. After they are unlocked, the drives remain unlocked until the power is removed from the
drives, the drives are removed and reinserted in the drive bays, or the storage subsystem power is
cycled.
After a drive is secured, the drive becomes locked if power is turned off or if it is removed. The
encryption key within that drive will not encrypt or decrypt data, making the drive unreadable until it is
unlocked by the controllers.
Figure 22. Security-enabled FDE drives: With the correct authorizations in place, the reading and writing of data
occurs in Unlocked state
After authentications are established and security is enabled on a storage subsystem, the encryption of
write operations and decryption of read operations that takes place inside the FDE drive are not apparent
to the user or to the DS5000 storage subsystem controllers. However, if a secured drive is lost, removed,
or stolen, the drive becomes locked, and the data that is stored on the disk remains encrypted and
unreadable. Because an unauthorized user does not have the security key file and pass phrase, gaining
access to the stored data is impossible.
Figure 23. A security-enabled FDE drive is removed from the storage subsystem: Without correct authorizations, a
stolen FDE disk cannot be unlocked, and the data remains encrypted
When you change a security key, a new security key is generated by the storage subsystem controller
firmware. The new security key is obfuscated in the storage subsystem, and you cannot see the security
key directly. The new security key replaces the previous key that is used to unlock the security-enabled
FDE drives in the storage subsystem. The controller negotiates with all of the security-enabled FDE drives
for the new key. With controller firmware versions 7.50.xx.xx and 7.60.xx.xx, an n-1 version of the security
key is also stored in the storage subsystem for protection in case something prevents the controllers from
completing the negotiation of the new security key with the security-enabled FDE drives (for example,
loss of storage subsystem power during the key change process). If this happens, you must change the
security key so that only one version of the security key is used to unlock drives in a storage subsystem.
The n-1 key version is stored in the storage subsystem only. It cannot be changed directly or exported to
a security key file.
Note: The n-1 key is not stored in the storage subsystem with controller firmware version 7.70.xx.xx or
later.
A backup copy of the security key file is always generated when you change a security key, and must be
stored on some other storage medium in case of controller failure, or for transfer to another storage
subsystem. You participate in creation of the security key identifier, the pass phrase, and the security key
file name and location when you change the security key. The pass phrase is not stored anywhere in the
storage subsystem or in the security file. The controller uses the pass phrase to encrypt the security key
before it exports the security key to the security key file.
The Change Security Key Complete window shows that the security key identifier that was written to the
security key file has a random number appended to the security key identifier you entered in Figure 24
and the storage subsystem worldwide identifier. Figure 25 on page 151 shows an example of the random
number part of the security key identifier.
The Security key identifier field in the FDE Drive Properties window includes a random number that is
generated by the controller when you create or change the security key. Figure 26 on page 152 shows an
example of the random number. The random number is currently prefixed with 27000000. If all of the
secured FDE drives in the storage subsystem have the same value in the security key identifier field, they
can be unlocked by the same security key identifier.
Note: The Security Capable and Secure fields in the Drive Properties window show whether the drive is
secure capable and whether it is in Secure (Yes) or Unsecured (No) state.
Figure 27 on page 154 shows an example of the security key identifier that is displayed in the File
information field when you select a security key back up file to unlock the secured drives in the storage
subsystem. The security key identifier or LockKeyID, shown in the file information field, contains the
characters that you entered in the security key identifier field when you created or changed the security
key along with the storage subsystem worldwide identifier and the randomly-generated number that
appears in the security key identifier of all secured FDE drives. This information is delimited by a colon
(:). For example, the LockKeyID
Passw0rdplus3:600a0b800029ece6000000004a2d0880:600a0b800029ed8a00001aef4a2e4a73
contains the following information:
v The security key identifier that you specified, for example Passw0rdplus3
Note: With external security key management, the security key identifier cannot be modified by the
user as it can with local security key management. Therefore, this information will not be shown.
v The storage subsystem worldwide identifier, for example 600a0b800029ece6000000004a2d0880
v A randomly-generated number 600a0b800029ed8a00001aef4a2e4a73
Figure 28 on page 155 shows an example of the drive properties for an unsecured FDE drive. Note that
the security key identifier field for an unsecured FDE drive is populated with zeros. Note also that the
Security Capable field value is yes and the Secure field value is no, indicating that this is a security
capable but unsecured FDE drive.
before data can be read from or written to the drives. The security key on the new storage subsystem will
be different and will not unlock the drives. You must supply the security key from a security key file that
you saved from the original storage subsystem. In addition, you must provide the pass phrase that was
used to encrypt the security key to extract the security key from the security key file. After you unlock
the drives with the security key in the security key file, the controller negotiates the existing security key
for these drives so that only one version of the security key is used to unlock drives in a storage
subsystem.
You do not have to provide the security key file to unlock the security-enabled drives in a storage
subsystem every time the storage subsystem power is cycled or the drives are removed and reinserted in
the same storage subsystem, because the controllers always keep a copy of the current and previous (n-1)
values of the security key to unlock these drives. However, if the drives are removed from the storage
subsystem and the security key is changed more than two times in the same storage subsystem, the
controllers will not have the security key to unlock the drives when they are reinserted in the same
storage subsystem.
Attention: Always back up the data in the storage subsystem to secured tape to prevent loss of data
due to malicious acts, natural disasters, abnormal hardware failures, or loss of the FDE security key.
Parameter
Description
Encryption key
How is it generated? The encryption key is generated when the drive is manufactured and then
regenerated at the customer site (by a command from the controller to the drive) to ensure that the key
was not compromised prior to use.
Security key
Security key identifier
User-specified alphanumeric character string (local security key management only). The storage subsystem
adds the storage subsystem worldwide identifier and a randomly generated number to the characters that
are entered. The security key identifier:
v Can always be read from the disk
v Can be written to the disk only if security has been enabled and the drive is unlocked
Pass phrase
User-specified alphanumeric character string, not stored anywhere on the storage subsystem or in the
security key file. The pass phrase is used to encrypt the security key when it is exported in the security
key file. It is also used to decrypt the key in the security file when it is used to import security-enabled
FDE drives into a storage subsystem.
FDE terminology
The following table defines FDE terminology that is used throughout this chapter.
Table 26. Full disk encryption terminology
Term
Description
FDE
Full disk encryption, a custom chip or ASIC (application specific integrated circuit) on
the disk drive that requires a security key to allow encryption and decryption to begin.
FDE disk drives encrypt all the data on the disk. The secured drive requires that a
security key be supplied before read or write operations can occur. The encryption and
decryption of data are processed entirely by the drive and are not apparent to the storage
subsystem.
Secure erase
Permanent destruction of data by changing the drive encryption key. After secure erase,
data that was previously written to the disk becomes unintelligible. This feature takes
advantage of FDE disk security capabilities to erase data by changing the encryption
key to a randomly generated value. Because the encryption key never leaves the drive,
this provides a secure erase. After secure erase, the drive becomes unlocked, allowing
anyone to read or write to the disk. Secure erase is sometimes referred to as drive
reprovisioning.
Local security key management
A key management method that uses a security key created and contained in the storage
subsystem controller. To move secured drives from one storage subsystem to another,
you must use the saved security key file from the original storage subsystem to unlock
the drives. The security key is obfuscated and stored in the storage subsystem when the
power is turned off.
Note: Local security key management requires controller firmware version 7.50.xx.xx or
later.
External key
management
A key management method that uses a central key location on your network (one or
more servers external to a storage subsystem) to manage keys for different storage
devices. A proxy server must facilitate the request for and acceptance of a security key.
The security key is not stored in the storage subsystem when the power is turned off.
Note:
1. External security key management requires dedicated software, such as IBM Tivoli
Key Lifecycle Manager (TKLM).
2. External security key management requires controller firmware version 7.70.xx.xx or
later.
Locked
The state that a security-enabled FDE drive enters when it has been removed from and
then reinserted in the storage subsystem, or when the storage subsystem is powered off.
When storage subsystem power is restored, the drive remains in the Locked state. Data
cannot be written to or read from a locked disk until it is unlocked by the controller,
using the security key. If the controller does not have the security key, the security key
file and its pass phrase are required to unlock the drives for read and write operations.
Repurposing/
Reprovisioning
Changing a drive from being in Secured state to Unsecured state so that the drive can
be reused. Reprovisioning the drive is accomplished by secure erase.
Secure array
An array on security-enabled FDE drives.
Security-capable drive
An FDE drive that is capable of encryption but is in Unsecured state (security not
enabled).
Security-enabled drive
An FDE drive with security enabled. The security-enabled FDE drive must be unlocked
using the security key after power to the drive is turned on and before read or write
operations can occur.
Unlocked
The state of a security-enabled FDE drive in which data on the disk is accessible for
read and write operations.
1. Install and configure the external key license manager software, IBM Tivoli Key Lifecycle Manager
(TKLM). See the documentation that came with the software for more information.
2. Download the DS TKLM Proxy Code from the IBM Support Portal at http://www.ibm.com/support/
entry/portal.
3. Install and configure the DS TKLM Proxy Code. See Installing and configuring the DS TKLM Proxy
Code server.
4. Enable the Full Disk Encryption and External Key Management premium features in Storage Manager.
See Enabling premium features on page 166.
5. Configure TKLM and the storage subsystems for the DS TKLM proxy and create external key
management security authorizations. See Creating security authorizations using external security key
management on page 170 in Enabling premium features on page 166.
If you prefer to use local security key management, begin with the information in Configuring disk
encryption with FDE drives on page 166.
(Figure: external security key management topology. The proxy server polls the storage subsystem controllers, shown as storage subsystems 1 through 4, for keys; the TKLM servers, shown as TKLM servers 1 through 4, send keys to the proxy server.)
To establish an external security key management configuration, download the DS TKLM Proxy Code
from the IBM Support Portal at http://www.ibm.com/support/entry/portal and complete the following
procedures:
1. Modifying the DS TKLM Proxy Code server configuration file
2. Installing the DS TKLM Proxy Code on page 165
Important: You must complete the procedures in order. Make sure that the IBM Tivoli Key Lifecycle
Manager (TKLM) software is installed. See the documentation that came with the software for more
information.
For Linux:
start_DS_TKLM_Proxy_Code_Linux.sh
stop_DS_TKLM_Proxy_Code_Linux.sh
restart_DS_TKLM_Proxy_Code_Linux.sh
The stop_DS_TKLM_Proxy_Code_*.sh script will remove the entry from /etc/inittab and end the
processes.
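For example, assuming the proxy is installed in the default /DS_TKLM_Proxy_Code directory, the proxy could be restarted as follows (paths are illustrative):
cd /DS_TKLM_Proxy_Code/bin
./stop_DS_TKLM_Proxy_Code_Linux.sh     # removes the /etc/inittab entry and ends the processes
./start_DS_TKLM_Proxy_Code_Linux.sh    # starts the proxy again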
The method of creating and editing the configuration file in Windows is different from the method in AIX
or Linux. With Windows, you must create DS_TKLM_Proxy_Code.config manually, using the template
included in the DS_TKLM_Proxy_Code_Windows*.zip file. The definitions for parameters must be assigned
before the proxy can be installed.
Important: If you are working in a Windows operating-system environment, you must create and modify
the configuration file before you install the DS TKLM Proxy Code server.
With AIX and Linux, DS_TKLM_Proxy_Code.config is created and parameter definitions are assigned
during the installation. You must assign definitions for the configuration file parameters when you are
prompted.
The definition of each parameter is explained in the following table.
Table 27. Proxy configuration file properties
Property name
Description and example
LogLevel
Example: LogLevel = debug
DebugPath
This property specifies the location of the debug file. You must provide a path in your file system that can
either be a path relative to the directory /DS_TKLM_Proxy_Code/bin or an absolute path.
Note: Make sure that you have read and write permissions for the path directory.
AIX or Linux example: DebugPath = ./Log/Debug/debug.log
Windows example: DebugPath = .\Log\Debug\debug.log
AuditPath
ThresholdSize
This property specifies the maximum size of each log file in bytes. If the size threshold is reached, a new
file is created with the same file name as the original file name, with the numbers 01 added at the end. If
the new log file reaches the size threshold, the original file is overwritten.
Note: If you decide later to increase the threshold size, delete the existing log files. Otherwise, the proxy
will write log information in the old files if the new size threshold is larger than the old size threshold.
Example: ThresholdSize = 100000000000
KeyinformationPath
AIX or Linux example: KeyinformationPath = ./CertFile/ibmproxycert.p12
Windows example: KeyinformationPath = .\CertFile\ibmproxycert.p12
KeyPassword
Example: KeyPassword = password
Example of the KeyPassword property after the first reading occurs and the password is obfuscated:
KeyPasswordHex = 47558BADDI3321FC
KeyPassword = ********
SYMServer.x
Example: SYMServer.1 = 9.37.117.35 , 9.37.117.36 , 2463 , 2463 , 600A0B8000339848000000004B72851F, false, SymPasswd
Example after the first time the configuration file is read: SYMServer.1 = 9.37.117.35 , 9.37.117.36 , 2463 , 2463 , 600A0B8000339848000000004B72851F, true , 6408D5D0C596979894AA8F
TcpTimeout
Example: TcpTimeout = 1000
RpcTimeout
Example: RpcTimeout = 10
TimeBetweenSymbolServerQueries
Example: TimeBetweenSymbolServerQueries = 10
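Putting these properties together, a DS_TKLM_Proxy_Code.config file might look similar to the following sketch, built only from the example values shown in Table 27. The paths, addresses, and passwords are illustrative; your values will differ, and additional properties (for example, AuditPath) must also be assigned.
LogLevel = debug
DebugPath = ./Log/Debug/debug.log
ThresholdSize = 100000000000
KeyinformationPath = ./CertFile/ibmproxycert.p12
KeyPassword = password
SYMServer.1 = 9.37.117.35 , 9.37.117.36 , 2463 , 2463 , 600A0B8000339848000000004B72851F, false, SymPasswd
TcpTimeout = 1000
RpcTimeout = 10
TimeBetweenSymbolServerQueries = 10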
Note: To uninstall the proxy, open a DOS prompt window and type and execute the following command:
DS_TKLM_Proxy_Code_WinService.exe -u. Restart Windows.
Note: Be sure to download the correct file for your operating system. The operating system is a part
of the RPM file name.
2. Use rpm commands to extract the downloaded file and begin the installation process. For example:
rpm -ivh --nodeps DS_TKLM_Proxy_Code-AIX-V1_.ppc.rpm
Note: The --nodeps part of the command is required only for AIX installations.
When you execute the RPM command, you create symbolic links, specify the location of the certificate
file that is provided by IBM, create a backup of /etc/inittab, and provide the path to use when you
execute the installation script.
3. After you execute the RPM command, run the installation script (/DS_TKLM_Proxy_Code/bin/
install.sh).
4. When you are prompted, enter all of the configuration file properties. See Modifying the DS TKLM
Proxy Code server configuration file on page 161 for a description of the properties and their values.
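For example, on AIX the sequence from steps 2 through 4 might look like the following; the exact package file name depends on the version that you downloaded:
rpm -ivh --nodeps DS_TKLM_Proxy_Code-AIX-V1_.ppc.rpm   # --nodeps is needed on AIX only
/DS_TKLM_Proxy_Code/bin/install.sh                     # answer the configuration file prompts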
To configure TKLM and storage subsystems for the proxy, and to create external key management
security authorizations, continue to Creating security authorizations using external security key
management on page 170 in Enabling premium features.
In the Premium Features and Feature Pack Information window, Full Disk Encryption: Enabled and External
Key Management: Enabled indicates that the FDE premium feature is enabled.
Important: External key management requires a security certificate file and its password. The file and
password are emailed to you after you enable the External Key Management premium feature. When you
enable the External Key Management premium feature at the IBM Premium Feature website, you must
provide a valid email address in the fields shown in the following image. Otherwise, you are prompted
to enter your email address after you click Continue.
It might take up to a day to receive the security certificate file and password. If you do not receive the
file or if you no longer have the email with the file, you can request another file and password by using
the key reactivation process on the IBM Premium Features website. For more information about the
security certificate file and configuring the KeyinformationPath and KeyPassword properties (Windows
operating systems only), see Modifying the DS TKLM Proxy Code server configuration file on page
161.
Note:
1. For subsystems with controller firmware version 7.60.xx.xx and earlier, the notification might read
Drive Security: Enabled.
2. The External Key Management premium feature will not be available if the controller firmware
version is earlier than 7.60.xx.xx.
3. For storage subsystems with controller firmware version 7.50.xx.xx and 7.60.xx.xx that already have
the FDE premium feature enabled, upgrading to controller firmware version 7.70.xx.xx or later will
not enable the External Key Management premium feature. You must reactivate the FDE key at the
IBM premium feature website to enable both Full Disk Encryption and External Key Management
premium features.
4. All FDE premium feature enablement key files generated after November, 2010, consist of two keys:
one for the Full Disk Encryption premium feature and one for the External Key Management
premium feature. Upgrade the controller firmware to version 7.70.xx.xx or later before you apply the
keys.
5. The FDE premium feature supports external security key management at no additional cost. The only
requirements are that the controller firmware is version 7.70.xx.xx or later and the premium feature
must be activated or reactivated at the IBM premium feature key website after November, 2010.
However, you must purchase external key license management software (such as TKLM).
If you enable the FDE feature after November, 2010, for a storage subsystem with controller firmware
7.70.xx.xx or later, External Key Management: Enabled and Full Disk Encryption: Enabled are displayed
in the Premium Features and Feature Pack Information window.
Enabling full disk encryption includes creating the security authorizations that you will need later to
unlock a secured FDE drive that has been turned off or removed from the storage subsystem. These
authorizations include the security key identifier, a pass phrase, and the security key file. The security
authorizations apply to all the FDE drives within the storage subsystem and are critical if a drive must be
unlocked after the power is turned on.
The process for creating security authorizations depends on the method of key management you use. See
the applicable section for local or external security key management.
2. Enter a security key identifier, the security key file name and location, and a pass phrase in the Create
Security Key window:
v Security key identifier: The security key identifier is paired with the storage subsystem worldwide
identifier and a randomly generated number and is used to uniquely identify the security key file.
The security key identifier can be left blank or can be up to 189 characters.
v Pass phrase: The pass phrase is used to decrypt the security key when it is read from the security
key file. Enter and record the pass phrase at this time. Confirm the pass phrase.
v Security key backup file: Click Browse next to the file name to select the security key file name
and location, or enter the value directly in the field. Click Create Key.
Note: Save the security key file to a safe location. The best practice is to store the security key file
with your key management policies. It is important to record and remember where this file is
stored because the security key file is required when a drive is moved from one storage subsystem
to another or when both controllers in a storage subsystem are replaced at the same time.
3. In the Create Security Key Complete window, record the security key identifier and the security key
file name; then, click OK. The authorizations that are required to enable security on the FDE drives in the
storage subsystem are now in place. These authorizations are synchronized between both controllers
in the storage subsystem. With these authorizations in place, arrays on the FDE drives in the storage
subsystem can be secured.
Attention: For greater security, store more than one copy of the pass phrase and security key file. Do
not specify the default security file directory as the location to store your copy of the security key file.
If you specify the default directory as the location to save the security key file, only one copy of the
security key file will be saved. Do not store the security key file in a logical drive that is mapped
from the same storage subsystem. See the IBM Full Disk Encryption Best Practices document for more
information.
4. Click the Welcome link on the left side of the window. The Welcome window opens.
5. In the Key and Device Management box, select DS5000 from the Manage keys and devices menu,
and click Go. The Key and Device Management window opens.
10. When you are prompted with the Confirm Security Key Management window, type yes and click
OK.
11. When you are prompted, save a copy of the security key. Enter the pass phrase, file name, and file
location, and click OK. The controller attempts to contact the external key manager for the security
key. If it fails, the following message is displayed:
12. Return to the TKLM application and click the Pending devices link in the Action Items box.
The TKLM server is now ready to send keys to the DS TKLM Proxy Code server.
2. In the Select Configuration Task window, click Manual (advanced), click Create arrays and logical
drives, and then click OK.
3. In the Create Arrays and Logical Drives window, select Create a new array using unconfigured
capacity. If other (non-FDE) drive types are also installed in the DS5000, be sure to select only Fibre
Channel FDE drives. Click Next to continue.
4. Use the Create Array wizard to create the array. Click Next to continue.
5. In the Array Name & Drive Selection window, enter an array name (for example, Secure_Array_1).
Note that the Create a secure array check box has been preselected in this window. Clear the Create
a secure array check box and select Manual (Advanced) under Disk selection choices. Click Next to
continue.
Note: The Create a secure array check box is displayed and selected only if the full disk encryption
premium feature is enabled. If you select this check box when you create an array, the array that is
created will be secured, and the Manual (Advanced) option is not needed to secure the array.
6. Configure drives for the array in the Manual Drive Selection window:
a. Select a RAID level (for example, RAID 5).
b. From the Unselected drives list, select the security-capable drives that you want to use and click
Add to add them to the Selected drives list (for example, select the disk drives in slots 2 through
6 from storage expansion enclosure 8).
c. Click Calculate Capacity to calculate the total capacity of the selected drives.
d. Click Finish to complete the array.
Note: These drives are not yet secure. They are secured later in the process.
7. In the Array Created window, click OK to acknowledge successful creation of the array.
8. When the wizard prompts you to create logical drives in the array, use the wizard to create the
logical drives. After the logical drives are created, continue to the next step. See Chapter 4,
Configuring storage, on page 49 for more information about creating logical drives.
9. Secure the array that you have created:
a. In the Subsystem Management window, click the Logical/Physical tab.
Note: The blue dots below the disk icons on the right side of the window indicate which disks
compose the array.
b. To enable security on the array, right-click the array name; then, click Secure Drives.
c. In the Confirm Array Drive Security window, click Yes to secure the array.
Note:
1) If you move a drive to a separate storage subsystem or if you change the security key more
than two times in the current storage subsystem while the drive is removed from the storage
subsystem, you must provide the pass phrase, the security key, and the security key file to
unlock the drive and make the data readable.
2) After an array is secured, the only way to remove security is to delete the array. You can
make a VolumeCopy of the array and save it to other disks so that the data can continue to
be accessed.
10. In the Subsystem Management window, click the Logical/Physical tab, and note that the array is
secured, as indicated by the lock symbol to the left of the array name.
2. Right-click the drives that you want to unlock; then, click Unlock.
Note: If you want to unlock multiple drives, you only have to select one drive. The Storage Manager
automatically lists all of the drives that are locked in the storage subsystem and checks each drive
against the supplied security key file to determine whether it can use the key in the security key file.
3. In the Unlock Drives window, the locked drives that you selected are listed. To unlock these drives,
select the security key file, enter the pass phrase, and then click Unlock. The storage subsystem uses
the pass phrase to decrypt the security key from the security key file. The storage subsystem then
compares the decrypted security key to the security key on the drive and unlocks all the drives for
which the security key matches.
Note: The authentication process occurs only when the drive is in Locked state because the drive was
powered on after a power-down event. It does not repeat with each read and write operation.
4. In the Unlock Drives Complete window, click OK to confirm that the drives are unlocked. The
unlocked drives are now ready to be imported.
supported when you replace both controllers. A security file is necessary in this case; you might not
have management access to the storage subsystem to export the current security key if both of the
controllers must be replaced.
1. Save the security key that is used to unlock the drives in the existing storage subsystem in a security
key file before you remove the drives from the existing storage subsystem. After you export the
security key, pass phrase, and security key file, the security key file can be transferred from one
storage subsystem to another.
a. In the Subsystem Management window, click Storage Subsystem, click Drive Security, and click
Save Security Key File.
b. In the Save Security Key File - Enter Pass Phrase window, select a file save location, and enter and
confirm the pass phrase; then, click Save.
c. Select the security key file for the selected drives and enter the pass phrase that you entered when
saving the security key backup file; then, click Unlock.
1. Before the drives can be secure erased, you must delete the RAID array that the drives are associated
with and return the drives to Unassigned status:
a. Click the Logical/Physical tab in the Subsystem Management window.
c. When you are prompted to select the array that you want to delete, click the array name and click
Delete.
d. To confirm that you want to delete the array, enter yes in the field and click OK.
e. Wait for the array deletion process to be completed. When you receive the confirmation Processed
1 of array(s) Complete, click OK.
3. Select the drive on which you want to perform a secure erase. You can select more than one drive to
be erased by holding down the Ctrl key. In the top menu bar, click Drive; then, click Secure Erase.
4. To confirm that you want to permanently erase all data on the disk, enter yes in the field and click
OK. These drives can now be repurposed or discarded.
An unconfigured secured FDE drive cannot be used as a global hot-spare drive. If a global hot spare is a
secured FDE drive, it can be used as a spare drive only in secured arrays. If a global hot-spare drive is an
unsecured FDE drive, it can be used as a spare drive in secured or unsecured arrays with FDE drives, or
as a spare drive in arrays with non-FDE drives. You must secure erase the FDE drive to change it to
Unsecured state before it can be used as a global hot-spare drive. The following error message is
generated if you assign an unconfigured secured FDE drive as a global hot spare.
Return code: Error 2 - The operation cannot complete because either (1) the current state of a
component does not allow the operation to be completed, (2) the operation has been disabled in
NVSRAM (example, you are modifying media scan parameters when that option (offset 0x31, bit 5)
is disabled), or (3) there is a problem with the storage subsystem. Please check your storage
subsystem and its various components for possible problems and then retry the
operation.Operation when error occurred: PROC_assignSpecificDrivesAsHotSpares
When a global hot-spare drive is used as a spare for a failed drive in a secure array, it becomes a secure
FDE drive and remains secure provided that it is a spare in the secure array. After the failed drive in the
secure array is replaced and the data in the global hot-spare drive is copied back to the replaced drive,
the global hot-spare drive is automatically reprovisioned by the controllers to become an unsecured FDE
global hot-spare drive.
As a best practice in a mixed disk environment that includes non-security-capable SATA drives,
non-security-capable Fibre Channel drives, and FDE Fibre Channel drives (with security enabled or not
enabled), use at least one global hot-spare drive of each type (an FDE Fibre Channel drive and a SATA
drive) at the largest capacity within the array. If a security-capable FDE Fibre Channel hot-spare drive and
a SATA hot-spare drive are included, all arrays are protected.
Follow the standard hot-spare drive configuration guidelines in Configuring global hot-spare drives on
page 67. Hot-spare configuration guidelines are the same for FDE drives.
Log files
The Storage Manager major events log (MEL) includes messages that describe any security changes that
are made in the storage subsystem.
This section provides answers to frequently asked questions about full disk encryption, organized by the
following topics:
v Securing arrays
v Secure erase on page 192
v Local security key management on page 192
v External security key management on page 193
v Premium features on page 193
Securing arrays
v Can I change an unsecured array with FDE drives to a secured array?
Yes. The steps to complete this process are described in Securing a RAID array on page 175. The
DS5000 Encryption feature must be enabled and the security key file and pass phrase already
established. See Enabling premium features on page 166 for more information.
v When I enable security on an array, will the data that was previously written to that array be lost or
erased?
No. Unless you perform a secure erase on the array disk drives, this data remains intact.
v Can I change a secured array with FDE drives to an unsecured array?
No. This is not a supported option. After an unsecured array is changed to a secure array, you
cannot change it back to an unsecured array without destroying the data in the security-enabled
FDE drives. Use VolumeCopy to copy the secure data to an unsecured array, or back up the data to
a secured tape. If you VolumeCopy the secure data to an unsecured array, you must physically
secure the drives. Then you must delete the original array and secure erase the array drives. Create
a new unsecured array with these drives and use VolumeCopy to copy the data back to the original
drives, or restore the data from secure tape.
v If I have an array with FDE drives that is secured, can I create another array that uses these same
drives and not enable security? Does the storage subsystem have a control so that this does not occur?
No. These are not supported functions. Any logical drive that is part of an array must be secured,
because the drives on which it is stored are security enabled.
v When a secure array is deleted, does disk security remain enabled?
Yes. The only way to disable security is to perform a secure erase or reprovision the drives.
v If I create a new array on a set of unassigned/unconfigured security-enabled FDE disks, will they
automatically become secure?
Yes.
Secure erase
v With secure erase, what can I erase, an individual drive or an array?
Secure erase is performed on an individual drive. You cannot erase a secured drive that is part of an
array; you must first delete the array. After the array is deleted and the drives become unassigned,
you can erase multiple disks in the same operation by holding the Ctrl key while you select the
drives that are to be secure erased.
v If I want to use only the secure erase feature, do I still have to set up a security key identifier and pass
phrase?
Yes. The full disk encryption feature must be enabled before you can use secure erase.
v After secure erase is performed on a drive, is security enabled or disabled on that drive?
The drive is returned to Secure Capable (unsecured) state after a secure erase. Security is disabled
on the drive.
v If I inadvertently secure erase a drive, is it possible to recover the data in the drive?
No. After a drive is secure erased, the data in the drive is not recoverable. You must recover the lost
data from a backup copy. Back up the data in secure drives before you secure erase the drives.
Yes. Because security has not been enabled on the drive, it remains unlocked, and the data is
accessible.
v If my security key falls into the wrong hands, can I change it without losing my data?
Yes. The drive can be re-keyed, using the procedure to change the security key.
Premium features
v How do I make sure that my mirrored data is secure? What is a best practice for protecting data at the
remote site?
Secure your data with security-enabled FDE drives at both the primary and secondary sites. Also,
you must make sure that the data is protected while it is being transferred between the primary and
secondary sites.
v Can I use VolumeCopy to copy a secured logical unit number to an unsecured one? If so, what prevents
someone from doing that first and then stealing the unsecured copy?
Yes. To prevent someone from stealing the data with this method, implement prudent security
features for the DS5000 storage subsystem. The Storage Manager forces a strong password, but
administrator access must also have stringent controls in place.
v Can FlashCopy and VolumeCopy data be secured?
Yes. For FlashCopy, the FlashCopy repository logical drive must be secured if the target FlashCopy
data is secured. The Storage Manager enforces this rule. Similarly, if the source array of the
VolumeCopy pair is secured, the target array of the VolumeCopy pair must also be secured.
Yes, but only if the drive is unsecured (security not enabled). Check the status of the unconfigured
FDE drive. If the drive is secure, it must be secure erased or reprovisioned before you can use it as a
global hot-spare drive.
v If the hot-spare drive in a secured array is an unsecured FDE drive, does this drive automatically
become secured when a secured FDE drive fails and that data is written to the hot-spare drive?
Yes. When the failed drive is removed from the RAID group, a rebuild is automatically started to
the hot-spare drive. Security is enabled on the hot-spare drive before the rebuild is started. A rebuild
cannot be started to a non-FDE drive for a secure array. After the failed drive in the secured array is
replaced and the data in the global hot-spare drive is copied back to the replaced drive, the global
hot-spare drive is automatically reprovisioned by the controllers to become an unsecured FDE global
hot-spare drive.
Boot support
v Is there a special process for booting from a security-enabled drive?
No. The only requirement is that the storage subsystem must be running (which is required in any
booting process).
v Are FDE drives susceptible to cold boot attacks?
No. This issue applies more to the server side, because an individual can create a boot image to gain
access to the server. This does not apply to FDE drives. FDE drives do not use the type of memory
that is susceptible to a cold boot attack.
Other
v Is DACstore information still written to the disk?
Yes. However, if the drive is secured, it must be unlocked by the controller before the DACstore
information can be read. In the rare event that the controller security key is corrupted or both
controllers are replaced, a security key file must be used to unlock the drive.
v Is data in the controller cache secure with FDE and IBM Disk Encryption? If not, are there any best
practices here?
No. This is a security issue of physical access to the hardware. The administrator must have physical
control and security of the storage subsystem itself.
v If I have secure-capable disks but have not purchased the IBM Disk Encryption premium feature key,
can I still recognize secure-capable disks from the user interface?
Yes. This information is available from several windows in the Storage Manager interface.
v What about data classification?
See the SNIA best practices for more information about data classification. For specific references,
see the IBM Full Disk Encryption Best Practices document. To access this document on the IBM
website, go to http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5081492
&brandind=5000028, or complete the following steps:
1. Go to the IBM Support Portal at http://www.ibm.com/support/entry/portal.
2. In the Search within all of support & downloads field at the bottom of the webpage, type FDE
and press Enter.
3. In the list of search results, click the IBM Full Disk Encryption Best Practices - IBM System
Storage link.
4. Click the link to the PDF file to open or download the IBM Full Disk Encryption Best Practices
document.
v Can I use both FDE and non-FDE drives if I do not secure the drives?
Yes. However, using both FDE and non-FDE drives is not a cost-effective use of FDE drives. An
array with both FDE and non-FDE drives cannot be converted into a secure array at a later time.
v Do FDE disk drives have lower usable capacity because the data is encrypted or because capacity is
needed for the encryption engine and keys?
No. There is no capacity difference between non-FDE and FDE disk drives (1 GB unencrypted = 1
GB encrypted).
Console area
The console area of the Support Monitor interface shows the primary content for the function or item that
you select in the navigation tree.
Icons
The meanings of the Support Monitor icons are described in the following table.
Table 28. Support Monitor icons
Icon
Note: If the email server information has not been entered on the Server page in the Administration
menu of the Support Monitor interface, the Server Setup window opens, similar to the one in the
following illustration. Enter the settings for the email server and user email address in the E-mail
Server and E-mail From fields, and click Save.
2. In the Send Support Data window, select one or more SOC and RLS change log files, as requested by
IBM support, from the Available area and click the right arrow to move the selected support bundle
to the Selected area. To move all of the available support bundles to the Selected area, click the
double right arrow. If necessary, click the left arrow or double left arrow to move one or all of the
support bundles in the Selected area back to the Available area.
3. Enter the Problem Management Record (PMR) number and the email address that were provided to
you by the IBM support representative. To find the phone number for your country, go to
http://www.ibm.com/planetwide/.
4. Type your email address in the cc field so that a copy of the same email that is sent to IBM support is
also sent to you for your records.
5. In the Details area, type any information that might help IBM support identify the source of the
email. For example, type the PMR number, company information, contact name, telephone number,
and a one-line summary of the problem.
6. Click Send to send the support data bundle to IBM support.
Discovery
Discovery (<id>)
This message appears when the device id is assigned
from the Storage Manager Profiler server.
SMTP discovery
Chapter 8. Troubleshooting
Use the information in this chapter to diagnose and solve problems related to Storage Manager. For
information about getting help, service, or other technical assistance, see Getting information, help, and
service on page xiv.
The following topics are covered in this chapter:
v Critical event problem solving
v Support Monitor troubleshooting on page 223
v DS Diagnostic Data Capture (DDC) on page 225
v Resolving disk array errors on AIX on page 227
Sense key/ASC/ASCQ
6/3F/C3
Sense key/ASC/ASCQ
6/5D/80
None
None
None
None
Sense key/ASC/ASCQ
None
None
None
None
None
None
3/11/8A
Sense key/ASC/ASCQ
6/A1/00
6/0C/80
6/0C/81
6/40/81
6/3F/D9
None
6/3F/87
Sense key/ASC/ASCQ
6/3F/EB
6/3F/80
6/3F/8B
6/3F/8C
6/3F/86
6/3F/85
Sense key/ASC/ASCQ
6/3F/E0
6/3F/8E
6/3F/98
6/8E/01
6/91/3B
Event 2255 - Logical drive
definition incompatible with
ALT mode - ALT disabled
Note: This event is not
applicable for the DS4800.
ASC/ASCQ: None
02/04/81
Sense key/ASC/ASCQ
6/3F/C8
None
6/98/01
6/3F/C7
6/3F/C7
Sense key/ASC/ASCQ
6/3F/C7
6/E0/20
6/3F/C7
None
6/98/01
6/98/02
Sense key/ASC/ASCQ
6/98/03
6/3F/C6
6/98/03
None
Sense key/ASC/ASCQ
None
6/E0/20
6/E0/20
6/E0/20
Sense key/ASC/ASCQ
None
None
ASC/ASCQ: None
None
None
Sense key/ASC/ASCQ
None
ASC/ASCQ: None
ASC/ASCQ: None
None
Sense key/ASC/ASCQ
None
None
None
None
None
Sense key/ASC/ASCQ
None
None
None
None
None
None
Sense key/ASC/ASCQ
None
None
None
None
Sense key/ASC/ASCQ
None
None
None
Event 6700 - Unreadable
sector(s) detected - data loss
occurred
None
Database save/restore
Storage Monitor Service automatically saves the configuration DB from a subsystem, and an existing
configuration DB can be restored as well.
Save
Storage Monitor Service automatically saves the configuration DB from a subsystem and saves the file in
"...client\data\monitor\dbcapture" if there is a DB change AND 125 minutes have passed since
previous capture.
When a subsystem is added to a newly installed HSW, the first DB is captured.
All captured DB files are zipped and named as follows: RetrievedRecords_SSID_Date_Time.dbm.
Example: RetrievedRecords_60080e500017b8de000000004be47b12_2010_08_20_14_48_27.dbm
CLI can be used to save a DB manually by using the command save storageSubsystem dbmDatabase
file="C:\path\filename.zip"
Restore
An existing configuration DB can be restored to recover a system whose configuration has been lost or
whose configuration was removed as part of a failure recovery.
It restores portions of the database containing:
v Lun and array configuration
v Lun WWNs
v Controller WWNs
v premium features
v mappings
It excludes:
v MEL
v UTM
v cache
Duration: up to 45 minutes
The user must have a Validator String to restore the configuration DB. To obtain the validator, send the
config DB zip file and the system profile to IBM support. IBM support generates the validator string
based on the information that you provide.
Problem: A networked storage subsystem is not in the list of monitored storage subsystems.
Possible cause: The missing storage subsystem has not been detected by the software.
Possible solution: Re-scan for devices in the Storage Monitor window.
Another possible cause: Storage Monitor has not been configured with unique names for each storage subsystem.
old you have successfully retrieved the previous DDC information. In addition, DDC information is
available only if a controller is online. A controller that is in service or lock-down mode does not trigger a
DDC event. After you collect the DDC data, contact IBM support to report the problem and get assistance
with troubleshooting the condition.
Recovery steps
To perform the DDC recovery process, complete the following steps:
1. Open either the Script Editor from the Enterprise Management window or the command-line interface
(CLI).
Note: See the online help in the Enterprise Management window for more information about the
syntax of these commands.
2. Follow the instructions in the following table, depending on whether you want to save the diagnostic
data.
Table 32. Recovery Step 2
If you want to save the diagnostic data, go to step 3.
If you do not want to save the diagnostic data, go to step 5.
3. Type
save storageSubsystem diagnosticData file="filename";
where filename is the location and name of the file that will be saved. The file is initialized as a .zip
file.
Note: The esm parameter of the command syntax is not supported.
4. Follow the instructions in the following table to work with the diagnostic data.
Table 33. Recovery Step 4
If...
Then...
Go to step 6.
If...
Then...
5. Type
reset storageSubsystem diagnosticData;
Table 34. Recovery Step 5
If...
Then...
Go to step 6.
6. Click Recheck to run the Recovery Guru again. The failure is no longer displayed in the Summary
area.
After this process has been completed, the DDC message is removed automatically, and a recheck of the
Recovery Guru shows no entries for DDC capture. If for some reason the data has not been removed, the
Recovery Guru provides an example of how to clear the DDC information without saving the data. To
complete the preceding procedure in the script editor, type
reset storageSubsystem diagnosticData;
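For reference, a complete Script Editor sequence that saves the diagnostic data and then clears the flag might look like the following; the output file name is only an example:
save storageSubsystem diagnosticData file="DDCdata.zip";
reset storageSubsystem diagnosticData;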
Event 0x6900: Diagnostic Data is available. Priority: Critical.
Event 0x6901: Diagnostic Data retrieval operation started. Priority: Informational.
Event 0x6902: Diagnostic Data retrieval operation completed. Priority: Informational.
Event 0x6903: Diagnostic Data Needs Attention status/flag cleared. Priority: Informational.
Error type
Error description
FCP_ARRAY_ERR1
FCP_ARRAY_ERR2
FCP_ARRAY_ERR3
FCP_ARRAY_ERR4
FCP_ARRAY_ERR5
UNDETERMINED ERROR
Error type
Error description
FCP_ARRAY_ERR6
SUBSYSTEM COMPONENT
FAILURE
FCP_ARRAY_ERR7
CONTROLLER HEALTH
CHECK FAILURE
FCP_ARRAY_ERR8
ARRAY CONTROLLER
SWITCH
FCP_ARRAY_ERR9
ARRAY CONTROLLER
SWITCH FAILURE
10
FCP_ARRAY_ERR10
ARRAY CONFIGURATION
CHANGED
11
FCP_ARRAY_ERR11
12
FCP_ARRAY_ERR12
13
FCP_ARRAY_ERR13
ARRAY INTER-CONTROLLER
COMMUNICATION FAILURE
14
FCP_ARRAY_ERR14
15
FCP_ARRAY_ERR15
16
FCP_ARRAY_ERR16
17
FCP_ARRAY_ERR17
WORLDWIDE NAME
CHANGED
18
FCP_ARRAY_ERR18
RESERVATION CONFLICT
19
FCP_ARRAY_ERR19
SNAPSHOT VOLUME
REPOSITORY FULL
Error type
Error description
20
FCP_ARRAY_ERR20
SNAPSHOT OPERATION
STOPPED BY ADMIN
21
FCP_ARRAY_ERR21
SNAPSHOT REPOSITORY
METADATA ERROR
22
FCP_ARRAY_ERR22
23
FCP_ARRAY_ERR23
24
FCP_ARRAY_ERR24
SNAPSHOT VOLUME
REPOSITORY FULL
25
FCP_ARRAY_ERR25
CACHED DATA WILL BE LOST IF CONTROLLER FAILS
This message is a warning that a disk array logical drive (LUN) is running with write cache enabled and
cache mirroring disabled. The warning is displayed when the LUN is opened, and it is displayed again
every 24 hours until cache mirroring is enabled again. If a controller fails, or if power to the controller is
turned off while the LUN is running in this mode, data that is in the write cache (but not written to the
physical disk media) might be lost. This can cause corrupted files, file systems, or databases.
26
FCP_ARRAY_ERR26
FCP_ARRAY_ERR27
Error type
Error description
SINGLE CONTROLLER
RESTARTED
28
FCP_ARRAY_ERR28
SINGLE CONTROLLER
RESTART FAILURE
A new error log entry, DISK_ERR7, has been created to indicate that a path has been designated as failed
because of a predetermined number of I/O errors that occurred on the path. This is normally preceded by
other error log entries that represent the actual error that occurred on the path.
is 0. For host operating systems other than Microsoft Windows, you might have to change this
setting to a value other than 0 to allow the host to see more than one logical drive from the storage
subsystem.
Enable LIP Reset
This setting determines the type of loop initialization process (LIP) reset that is used when the
operating system initiates a bus reset routine. When this setting is Yes, the driver initiates a global
LIP reset to clear the target device reservations. When this setting is no, the driver initiates a
global LIP reset with full login. The default is No.
Enable LIP Full Login
This setting instructs the ISP chip to log in, again, to all ports after any LIP. The default is Yes.
Enable Target Reset
This setting enables the drivers to issue a Target Reset command to all devices on the loop when
a SCSI Bus Reset command is issued. The default is Yes.
Login Retry Count
This setting specifies the number of times that the software tries to log in to a device. The default
is 30 retries.
Port Down Retry Count
This setting specifies the number of seconds that elapse before the software retries a command to
a port returning port down status. The default is 30 seconds. For the Microsoft Windows servers
in MSCS configuration, the Port Down Retry Count BIOS parameter must be changed from the
default of 30 to 70.
Link Down Timeout
This setting specifies the number of seconds that the software waits for a link down to come up.
The default is 60 seconds.
Extended Error Logging
This setting provides additional error and debug information to the operating system. When it is
enabled, events are logged in the Windows NT Event Viewer. The default is Disabled.
RIO Operation Mode
This setting specifies the reduced interrupt operation (RIO) modes, if supported by the software
driver. RIO modes allow posting multiple command completions in a single interrupt. The
default is 0.
Interrupt Delay Timer
This setting contains the value (in 100-microsecond increments) used by a timer to set the wait
time between accessing (DMA) a set of handles and generating an interrupt. The default is 0.
Note: The BIOS settings in the Windows column are the default values that are set when the adapters are
ordered from IBM as IBM FC-2 (QLA2310), FC2-133 (QLA2340) and single-port and dual-port 4 Gbps
(QLx2460 and QLx2462) Fibre Channel host bus adapters. If the adapters are not from IBM, the default
BIOS might not be the same as those defined in the Microsoft Windows column. There is one exception:
the default setting for Fibre Channel tape support is enabled.
Table 37 shows the default settings for IBM Fibre Channel FC-2 and FC2-133 (QLogic adapter models
QLA2310 and QLA2340) host bus adapter settings (for BIOS V1.35 and later) by operating system as well
as the default registry settings for Microsoft Windows operating systems. DS3000, DS4000, or DS5000
products require BIOS V1.43 or later for these adapters. In addition, these settings are also the default
BIOS settings for the newer DS3000, DS4000, or DS5000 4 Gbps single and dual-port host bus adapters
(QLogic adapter models QLx2460 and QLx2462). The 4 Gbps host bus adapter BIOS version is 1.12 or
later. See the applicable readme file for the latest updates to these values.
Table 37. QLogic model QLA234x, QLA24xx, QLE2462, QLE2460, QLE2560, QLE2562, QMI2572, QMI3572,
QMI2582
Item
Default
Windows
2003 and
Windows Windows
2000
2008
VMware
LINUX
MPP
Solaris
LINUX
DMMP
NetWare
BIOS settings
Host Adapter settings
Host Adapter BIOS
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
2048
2048
2048
2048
2048
2048
2048
2048
Disabled
Enabled
Enabled
Enabled
Enabled
Enabled
Enabled
Enabled
1251
1251
1251
1251
1251
1251
1251
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Frame Size
Loop Reset Delay
Adapter Hard Loop ID
(only for arbitrated
loop topology).
Hard Loop ID (must
be unique for each
HBA) (only for
arbitrated loop
topology).
Spin-up Delay
Connect Options
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled3
Disabled
Disabled
2 (Auto)
2 (Auto)
2 (Auto)
2 (Auto)
2 (Auto)
2 (Auto)
2 (Auto)
Execution Throttle
16
256
256
256
256
256
256
256
32
No
No
No
No
No
No
No
No
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
30
30
30
30
30
30
30
30
30
30
30
12
12
70
Table 37. QLogic model QLA234x, QLA24xx, QLE2462, QLE2460, QLE2560, QLE2562, QMI2572, QMI3572,
QMI2582 (continued)
Item
Default
VMware
Windows
2003 and
Windows Windows
2000
2008
Solaris
LINUX
MPP
LINUX
DMMP
NetWare
DS3K: 144
DS4K/5K:
702
70
DS3K: 70
DS4K5K:
35
10
70
DS3K:144 DS3K:144
DS4K/
DS4K/5K:
5K: 60
60
60
DS3K:144
DS4K/
5K: 60
NA
60
Disabled
Disabled
Disabled
Disabled
Disabled
256
256
256
256
256
256
256
256
>4 GB Addressing
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Enabled
Enabled
Enabled
Enabled
Enabled
Enabled
Enabled
Enabled
Enable Database
Updates
No
No
No
No
No
No
No
No
No
No
No
No
No
No
No
No
Disabled
Enabled
Enabled
Enabled
Enabled
Enabled
Enabled
Enabled
70
30
60
Disabled
Disabled
Disabled
Extended Error
Logging
IOCB Allocation
DS3K:
144
DS4K/
5K: 702
Enabled
Enabled
Enabled
Enabled
Enabled
Enabled
Enabled
Connection Options
Class 2 Service
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
ACK0
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Enabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Enabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Command Reference
Number
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Response Timer
2 (Auto)
2 (Auto)
2 (Auto)
2 (Auto)
2 (Auto)
2 (Auto)
2 (Auto)
Extended Control
Block
Data Rate
5
REGISTRY SETTINGS
(HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\QL2300\Parameters\Device)
LargeLuns
MaximumSGList
N/A
N/A
N/A
N/A
N/A
N/A
0x21
0xff
0xff
0xff
N/A
N/A
N/A
N/A
Table 37. QLogic model QLA234x, QLA24xx, QLE2462, QLE2460, QLE2560, QLE2562, QMI2572, QMI3572,
QMI2582 (continued)
Item
Default
VMware
Windows
2003 and
Windows Windows
2000
2008
Solaris
LINUX
MPP
LINUX
DMMP
NetWare
UseSameNN
N/A
N/A
N/A
N/A
BusChange (SCSIPort
Miniport 9.0.1.60 and
earlier does not apply
to 9.1.1.11 and newer)
N/A
N/A
N/A
N/A
N/A
0x3C
0x78
DS3K:
xA0
DS4K/
5K: x78
DS3K:
xA0
DS4K/5K:
x78
N/A
N/A
N/A
N/A
TimeOutValue 4
(REG_DWORD)
REGISTRY SETTINGS5
(HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\<FAILOVER>\Parameters: Where
<FAILOVER>=Rdacdisk for MPP or RDAC installations, or <FAILOVER>=mppdsm, ds4dsm, md3dsm, sx3dsm,
csmdsm, or tpsdsm for MPIO installations. Mppdsm is for the generic version; your installation could be
different.)
SynchTimeOut
(REG_DWORD)
0x78
N/A
DS3K:
xA0
DS4K/
5K: x78
DS3K:
xA0
DS4K/5K:
x78
DisableLunRebalance
(Only applies to
cluster configurations.
Firmware version
6.xx.xx.xx and later.)
0x00
N/A
0x03
0x03
Table 37. QLogic model QLA234x, QLA24xx, QLE2462, QLE2460, QLE2560, QLE2562, QMI2572, QMI3572,
QMI2582 (continued)
Item
Default
VMware
Windows
2003 and
Windows Windows
2000
2008
Solaris
LINUX
MPP
LINUX
DMMP
NetWare
Note: The BIOS settings under the Windows column are the default values that are set when the
adapters are ordered from IBM as IBM Fibre Channel host bus adapters. If the adapters are not from
IBM, the default BIOS might not be the same as the ones that are defined in the Microsoft Windows
column. There is one exception: the default setting for Fibre Channel tape support is enabled.
Table 38 shows the default settings for various IBM DS3000, DS4000, or DS5000 Fibre Channel host bus
adapters (QLogic adapter QL220x) models (for BIOS V1.81) by operating system. See the applicable
readme file for the latest updates to these values.
Table 38. QLogic model QL220x (for BIOS V1.81) host bus adapter settings by operating system
Item
Windows
NT
Linux
NetWare
Table 38. QLogic model QL220x (for BIOS V1.81) host bus adapter settings by operating system (continued)
BIOS settings
Host Adapter settings
Disabled
Disabled
Disabled
Disabled
2048
2048
2048
2048
Enabled
Enabled
Enabled
Enabled
1251
1251
1251
1251
Disabled
Disabled
Disabled
Disabled
256
256
256
256
Disabled
Disabled
Disabled
Disabled
32
No
No
No
No
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
30
30
30
30
30
30
12
302
IOCB Allocation
256
256
256
256
Disabled
Disabled
Disabled
Disabled
Enabled
Enabled
Enabled
Enabled
Connection Options
Class 2 Service
Disabled
Disabled
Disabled
Disabled
ACK0
Disabled
Spin Up Delay
Advanced adapter settings
Execution Throttle
>4 Gbyte Addressing
Supported
Disabled
3
Supported
Disabled
3
Supported
Disabled
3
Supported3
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Disabled
Response Timer
Interrupt Delay Time
4
LargeLuns
MaximumSGList
0x21
0x21
0x3C
0x3C
Table 38. QLogic model QL220x (for BIOS V1.81) host bus adapter settings by operating system (continued)
Note:
1. This setting must be changed to a unique AL-PA value if there is more than one Fibre Channel device in the
FC-AL loop.
2. For larger configurations with heavy I/O loads, change this value to 70.
3. Change this setting to Enable or Supported when the HBA is connected to a tape device only. Set it to Disabled
when you connect to DS3000, DS4000, or DS5000 Storage Subsystem.
4. To access registry settings, click Start, select Run, type regedit into the Open field, and then click OK.
Attention: Exercise caution when you change the Windows registry. If you change the wrong registry entry or
make an incorrect entry for a setting, you can cause an error that prevents your server from booting or operating
correctly.
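You can also query or change one of these values from a Windows command prompt with the reg utility instead of regedit. The following is a minimal sketch only; it assumes an MPIO installation in which <FAILOVER> is mppdsm and uses the decimal value 120 (hex 0x78), so adjust the key name and data for your own installation:
reg query "HKLM\SYSTEM\CurrentControlSet\Services\mppdsm\parameters" /v SynchTimeOut
reg add "HKLM\SYSTEM\CurrentControlSet\Services\mppdsm\parameters" /v SynchTimeOut /t REG_DWORD /d 120 /f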
New value
FcLoopEnabled = 1
    FcLoopEnabled = 0 (for non-loop; auto-topology)
    FcLoopEnabled = 1 (for loop)
FcFabricEnabled = 0
    FcFabricEnabled = 0 (for non-fabric; auto-topology)
    FcFabricEnabled = 1 (for fabric)
FcEngHeartbeatInterval = 5
FcLinkUpRecoveryTime = 1000
BusRetryDelay = 5000
TargetOfflineEnable = 1
    TargetOfflineEnable = 0 (Disable)
    TargetOfflineEnable = 1 (Enable)
FailoverDelay = 30;
FailoverDelayFcTape = 300
TimeoutResetEnable = 0
QfullRetryCount = 5
QfullRetryDelay = 5000
LunRecoveryInterval = 50
FcLinkSpeed = 3
JNICreationDelay = 1
FlogiRetryCount = 3
FcFlogiTimeout = 10
PlogiRetryCount = 3
PlogiControlSeconds = 30
LunDiscoveryMethod = 1
CmdTaskAttr = 0
    CmdTaskAttr = 0 (Simple Queue)
    CmdTaskAttr = 1 (Untagged)
automap = 0
    automap = 1 (Enable)
FclpEnable = 1
    FclpEnable = 0 (Disable)
OverrunFailoverCount = 0
PlogiRetryTime = 50
SwitchGidPtSyncEnable = 0
target_throttle = 256
lun_throttle = 64
target0_hba = jnic146x0;
target0_wwpn = <controller wwpn>
target1_hba = jnic146x1;
target1_wwpn = <controller wwpn>
Note: You might have to run the /etc/raid/bin/genjniconf reconfigure script from the Solaris shell:
# /etc/raid/bin/genjniconf
New value
FcLoopEnabled = 1
    FcLoopEnabled = 0 (for non-Loop)
    FcLoopEnabled = 1 (for Loop)
FcFabricEnabled = 0
    FcFabricEnabled = 0 (for non-fabric)
    FcFabricEnabled = 1 (for fabric)
FcPortCfgEnable = 1
    FcPortCfgEnable = 0 (port reconfiguration not required)
    FcPortCfgEnable = 1 (port reconfiguration required)
FcEngHeartbeatInterval = 5
FcLrrTimeout = 100
FcLinkUpRecoverTime = 1000
BusyRetryDelay = 5000
FailoverDelay = 30;
    FailoverDelay = 60;
TimeoutResetEnable = 0
QfullRetryCount = 5
QfullRetryDelay = 5000
loRecoveryDelay = 50
JniCreationDelay = 5;
    JniCreationDelay = 10;
FlogiRetryCount = 3
PlogiRetryCount = 5
FcEmIdEndTcbTimeCount = 1533
target_throttle = 256
lun_throttle = 64
automap = 0
    automap = 0 (persistence binding)
    automap = 1 (automapping)
Add these settings:
    target0_hba = jnic146x0;
    target0_wwpn = <controller wwpn>
    target1_hba = jnic146x1;
    target1_wwpn = <controller wwpn>
v You might have to run the /etc/raid/bin/genjniconf reconfigure script from the Solaris shell:
# /etc/raid/bin/genjniconf
v Set FcPortCfgEnable = 1; only when you see JNI cards entering non-participating mode in the /var/adm/messages file. Under that condition, complete the following steps:
1. Set FcPortCfgEnable = 1;
2. Restart the host.
3. Set FcPortCfgEnable = 0;
4. Restart the host again.
When you have done so, check /var/adm/messages to make sure that it sets the JNI cards to Fabric or Loop mode.
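For example, you can check the messages file for the JNI driver entries before and after the procedure with a command similar to the following. This is a sketch only; jnic146x is the driver name used in the settings above, so adjust the search string to match your own adapter driver:
# grep -i jnic146x /var/adm/messages | tail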
New value
scsi_initiator_id = 0x7d
fca_nport = 0;
public_loop = 0
target_controllers = 126
ip_disable = 1;
ip_compliant = 0
qfull_retry_interval = 0
    qfull_retry_interval = 1000
failover = 30;
failover_extension = 0
recovery_attempts = 5
class2_enable = 0
fca_heartbeat = 0
reset_glm = 0
timeout_reset_enable = 0
busy_retry_delay = 100;
link_recovery_delay = 1000;
scsi_probe_delay = 500;
def_hba_binding = fca-pci*;
    def_hba_binding = nonjni; (for binding)
    def_hba_binding = fcaw; (for non-binding)
def_wwnn_binding = $xxxxxx
    def_wwnn_binding = xxxxxx
def_wwpn_binding = $xxxxxx
fca_verbose = 1
Note: You might have to run the /etc/raid/bin/genjniconf reconfigure script from the Solaris shell:
# /etc/raid/bin/genjniconf
New value
fca_nport = 0;
    fca_nport = 1;
ip_disable = 0;
    ip_disable = 1;
failover = 0;
    failover = 30;
busy_retry_delay = 5000;
    busy_retry_delay = 5000;
link_recovery_delay = 1000;
    link_recovery_delay = 1000;
scsi_probe_delay = 5000;
    scsi_probe_delay = 5000;
def_hba_binding = fcaw*;
    Direct-attached configurations: def_hba_binding = fcaw*;
    SAN-attached configurations: def_hba_binding = nonJNI;
def_wwnn_binding = $xxxxxx
    def_wwnn_binding = xxxxxx
def_wwnn_binding = $xxxxxx
    Same as the original entry.
Will be added by the reconfigure script.
Note: You might have to run the /etc/raid/bin/genscsiconf reconfigure script from the shell prompt:
# /etc/raid/bin/genscsiconf
Original value | New value
hba0: hba0-execution-throttle=16; | hba0-execution-throttle=255;
hba1: hba1-execution-throttle=16; | hba1-execution-throttle=255;
In the vi editor, uncomment and modify the loop attributes of each QLogic HBA, using the values that are specified in Table 43. A sample of the edited entries follows the table.
Table 43. Configuration settings for QL2342
Original value | New value | Comments
max-frame-length=2048; | max-frame-length=2048 |
execution-throttle=16; | execution-throttle=255; | Change
login-retry-count=8; | login-retry-count=30; | Change
enable-adapter-hard-loop-ID=0; | enable-adapter-hard-loop-ID=1; | Change
adapter-hard-loop-ID=0; | adapter-hard-loop-ID=0; |
enable-LIP-reset=0; | enable-LIP-reset=0; |
hba0-enable-LIP-full-login=1; | hba0-enable-LIP-full-login=1; |
enable-target-reset=0; | enable-target-reset=0; |
reset-delay=5 | reset-delay=8 | Change
port-down-retry-count=8; | port-down-retry-count=70; | Change
maximum-luns-per-target=8; | maximum-luns-per-target=0; | Change
connection-options=2; | connection-options=2; |
fc-tape=1; | fc-tape=0; | Change
loop-reset-delay = 5; | loop-reset-delay = 8; | Change
link-down-timeout = 30; | link-down-timeout = 60; | Change
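After the edit, the entries for the first adapter might look similar to the following sample. This is a sketch only: the configuration file path (commonly /kernel/drv/qla2300.conf for the QLogic driver) and the exact hbaN- parameter prefixes depend on the installed driver version, so verify them against your own file.
hba0-max-frame-length=2048;
hba0-execution-throttle=255;
hba0-login-retry-count=30;
hba0-enable-adapter-hard-loop-ID=1;
hba0-adapter-hard-loop-ID=0;
hba0-enable-LIP-reset=0;
hba0-enable-LIP-full-login=1;
hba0-enable-target-reset=0;
hba0-reset-delay=8;
hba0-port-down-retry-count=70;
hba0-maximum-luns-per-target=0;
hba0-connection-options=2;
hba0-fc-tape=0;
hba0-loop-reset-delay=8;
hba0-link-down-timeout=60;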
Sample configuration
Figure 32 shows a sample VMware ESX Server configuration.
Figure 32. Sample VMware ESX Server configuration (a management station connects over Ethernet; the ESX server connects over Fibre Channel I/O paths and Ethernet to the two controllers of the storage subsystems)
Software requirements
This section describes the software that is required to use a VMware ESX Server host operating system
with a DS3000, DS4000, or DS5000 storage subsystem.
Management station
The following software is required for the Windows or Linux management station:
1. SM Runtime (Linux only)
2. SMclient (Linux and Windows)
Hardware requirements
You can use VMware ESX Server host servers with the following types of storage subsystems and storage
expansion enclosures. For additional information, see the System Storage Interoperation Center at the
following website:
http://www.ibm.com/systems/support/storage/config/ssic
Note: For general storage subsystem requirements, see Chapter 1, Preparing for installation, on page 1.
DS5000 Storage Subsystems
v DS5300
v DS5100
DS4000 Storage Subsystems
v DS4100 (Dual-controller units only)
v DS4200
v DS4300 (Dual-controller and Turbo units only)
v DS4400
v DS4500
v DS4700
v DS4800
DS5000 storage expansion enclosures
v EXP5000
DS4000 storage expansion enclosures
v EXP100
v EXP420 (with DS4200 only)
v EXP500
v EXP700
v EXP710
v EXP810
v All logical drives that are configured for VMware ESX Server must be mapped to a VMware ESX Server host group.
Note: A VMware ESX Server-specific host type is not available for DS3000, DS4000, or DS5000 storage subsystems if the controller firmware version is earlier than 7.70.xx.xx. Use the LNXCLVMWARE host type for your VMware hosts and host groups. If you are using the default host group, make sure that the default host type is LNXCLVMWARE. DS storage subsystems with controller firmware version 7.70.xx.xx or later have a VMware ESX Server-specific host type defined, named VMWARE. Use VMWARE as the host type of the VMware hosts and host group.
v In a DS4100 storage subsystem configuration, you must initially assign the LUNs to controller A on the lowest-numbered HBA. After the LUNs are initialized, you can change the path to controller B. (This restriction will be corrected in a future release of ESX Server.)
Figure: cross-connect configuration for VMware connections, showing Server 1 and Server 2 (each with HBA 1 and HBA 2) connected through two FC switches to Controller A and Controller B of the storage subsystem.
5. Select Scan Fibre Devices and press Enter. The resulting output is similar to the following:
The output lists the scanned devices by ID (128 through 135), together with their device information and Port ID; IDs that have no attached device are shown as No.
Note: Depending on how the configuration is cabled, you might see multiple instances.
If you do not see a storage subsystem controller, verify the cabling, switch zoning, and LUN mapping.
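After you correct the cabling, zoning, or mapping, you can rescan for devices from the ESX Server service console instead of restarting the host. This is a sketch only; it assumes the esxcfg-rescan utility of the classic ESX service console and an adapter name of vmhba1, so substitute the vmhba number of your own HBA:
# esxcfg-rescan vmhba1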
General information
This document does not describe how to install or configure cluster services. See the documentation that
is provided with your cluster service products for this information.
Important: The information in this document might not include up-to-date cluster software version
levels.
For the latest requirements and user information about using the Storage Manager with cluster services,
see the readme file that is located on the Storage Manager DVD for your host operating system, or check
the most recent readme files online.
See Finding Storage Manager software, controller firmware, and readme files on page xiii for
instructions on finding the readme files online.
You can also find more information on the System Storage Interoperation Center, which is maintained at
the following website:
www.ibm.com/systems/support/storage/config/ssic
Software requirements
For the latest supported HACMP versions, see the System Storage Interoperation Center at the following
website:
www.ibm.com/systems/support/storage/config/ssic
Configuration limitations
The following limitations apply to HACMP configurations:
v HACMP C-SPOC cannot be used to add a DS3000, DS4000, or DS5000 disk to AIX using the Add a Disk
to the Cluster facility.
v HACMP C-SPOC does not support enhanced concurrent mode arrays.
v Single-HBA configurations are allowed, but each single-HBA configuration requires that both
controllers in the storage subsystem be connected to a switch, within the same SAN zone as the HBA.
Attention: Although single-HBA configurations are supported, do not use them in HACMP
environments because they introduce a single point-of-failure in the storage I/O path.
v Use switched fabric connections between the host nodes and the storage subsystem. Direct attachment
from the host nodes to the storage subsystem in an HACMP environment is supported only if all the
following restrictions and limitations are met:
- Only dual-controller DS3000, DS4000, or DS5000 storage subsystem versions are supported for direct attachment in a high-availability configuration.
- The AIX operating system must be version 5.2 or later.
- The HACMP clustering software must be version 5.1 or later.
- All host nodes that are directly attached to the storage subsystem must be part of the same HACMP cluster.
- All logical drives (LUNs) that are surfaced by the storage subsystem are part of one or more enhanced concurrent mode arrays.
- The array varyon is in the active state only on the host node that owns the HACMP non-concurrent resource group (which contains the enhanced concurrent mode array or arrays). For all other host nodes in the HACMP cluster, the enhanced concurrent mode array varyon is in the passive state.
- Direct operations on the logical drives in the enhanced concurrent mode arrays cannot be performed from any host nodes in the HACMP cluster if the operations bypass the Logical Volume Manager (LVM) layer of the AIX operating system. For example, you cannot use a dd command while logged in as the root user.
- Each host node in the HACMP cluster must have two Fibre Channel connections to the storage subsystem. One direct Fibre Channel connection must be to controller A in the storage subsystem, and the other direct Fibre Channel connection must be to controller B in the storage subsystem.
- You can directly attach a maximum of two host nodes in an HACMP cluster to a dual-controller version of a DS4100 or DS4300 storage subsystem.
- You can directly attach a maximum of two host nodes in an HACMP cluster to a storage subsystem. Each host node must have two direct Fibre Channel connections to the storage subsystem.
Note: In a DS3000, DS4000, or DS5000 storage subsystem, the two direct Fibre Channel connections from each host node must be to independent minihubs. Therefore, this configuration requires that four host minihubs (feature code 3507) be installed in the DS3000, DS4000, or DS5000 storage subsystem, two host minihubs for each host node in the HACMP cluster.
v HACMP clusters can support from two to 32 servers on each DS3000, DS4000, and DS5000 storage
subsystem partition. If you run this kind of environment, be sure to read and understand the AIX
device drivers queue depth settings that are described in Setting the queue depth for hdisk devices
on page 124.
v You can attach non-clustered AIX hosts to a storage subsystem that is running the Storage Manager
and is attached to an HACMP cluster. However, you must configure the non-clustered AIX hosts on
separate host partitions on the storage subsystem.
Software requirements
For the latest supported PSSP and GPFS versions, see the System Storage Interoperation Center at the
following website:
www.ibm.com/systems/support/storage/config/ssic
Configuration limitations
The following limitations apply to PSSP and GPFS configurations:
v Direct connection is not allowed between the host node and a DS3000, DS4000, or DS5000 storage
subsystem. Only switched fabric connection is allowed.
v RVSD clusters can support up to two IBM Virtual Shared Disk and RVSD servers for each storage
subsystem partition.
v Single node quorum is not supported in a dual-node GPFS cluster with DS3000, DS4000, or DS5000
disks in the configuration.
v Heterogeneous configurations are not supported.
Figure 34. Cluster configuration with a single storage subsystem, one to four partitions. The RVSD cluster contains VSD server pairs (VSD 1 and VSD 2 through VSD 7 and VSD 8), each server with two Fibre Channel host adapters, connected through switch fabric zones to controller A and controller B; each of partitions 1 through 4 contains LUN 0 through LUN 31 and the WWPNs of one server pair, with preferred and failover paths shown.
Figure 35 on page 255 shows a cluster configuration that contains three DS storage subsystems, with one
partition on each storage subsystem.
Figure 35. Cluster configuration with three storage subsystems, one partition per subsystem. The RVSD cluster servers VSD 1 and VSD 2 connect through switch fabric zones to controller A and controller B of each DS4000 storage subsystem; each partition contains LUN 0 through LUN 31 and the WWPNs of both servers, with preferred and failover paths shown.
Figure 36 on page 256 shows a cluster configuration that contains four DS storage subsystems, with one
partition on each storage subsystem.
Figure 36. Cluster configuration with four storage subsystems, one partition per subsystem. The RVSD cluster servers VSD 1 and VSD 2, each with four Fibre Channel host adapters, connect through two switch fabric zones to controller A and controller B of each storage subsystem; each partition contains LUN 0 through LUN 31, with preferred and failover paths shown.
Figure 37 on page 257 shows a cluster configuration that contains two DS storage subsystems, with two
partitions on each storage subsystem.
Figure 37. RVSD cluster configuration with two storage subsystems, two partitions per subsystem. The servers VSD 1 and VSD 2, each with four Fibre Channel host adapters, connect through four switch fabric zones to controller A and controller B of each storage subsystem; each partition contains LUN 0 through LUN 31, with preferred and failover paths shown.
Figure 38 on page 258 shows an HACMP/GPFS cluster configuration that contains a single DS storage
subsystem, with one partition.
Figure 38. HACMP/GPFS cluster configuration with one storage subsystem, one partition. Servers Svr 1 through Svr 32 connect through two switch fabric zones to controller A and controller B; the partition contains LUN 0 through LUN 31, with primary paths on the WWPN-1A through WWPN-32A ports and failover paths on the WWPN-1B through WWPN-32B ports.
Figure 39 on page 259 shows an HACMP/GPFS cluster configuration that contains two DS storage
subsystems, with two partitions on each storage subsystem.
Figure 39. HACMP/GPFS cluster configuration with two storage subsystems, two partitions per subsystem. Servers Svr 1 through Svr 32, each with four Fibre Channel host adapters, connect through four switch fabric zones to controllers A and B of both storage subsystems; each partition contains LUN 0 through LUN 31, with preferred and failover paths shown.
System dependencies
This section provides information about RDAC IDs and single points of failure.
Attribute definitions
The following tables list definitions and values of the ODM attributes for dar, dac, and hdisk devices:
v Table 44: Attributes for dar devices
v Table 45 on page 262: Attributes for dac devices
v Table 46 on page 263: Attributes for hdisk devices
Note:
1. Attributes with True in the Changeable column can be modified from their default settings.
2. Attributes with False in the Changeable column are for informational or state purposes only.
However, some attributes with False in the Changeable column can be modified using the Storage
Manager.
3. The lsattr -El (uppercase E, lowercase L) command is another way to determine which attributes
can be modified. Attributes that can be modified display True in the last column of the lsattr -El
output. You can also display the default values with the lsattr -Dl command.
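For example, the following commands list the attributes of a dar device and of an hdisk and then change one changeable attribute on each. This is a minimal sketch only: dar0, hdisk3, and the values shown are placeholders, so substitute your own device names and values, and note that changing an attribute of a device that is in use might require the -P flag (apply the change at the next restart) or taking the device offline first. Remember that load_balancing must be set to yes only in single-host configurations.
# lsattr -El dar0
# chdev -l dar0 -a load_balancing=yes
# lsattr -El hdisk3
# chdev -l hdisk3 -a queue_depth=16 -P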
Table 44. Attributes for dar devices
Attribute | Definition | Changeable (T/F) | Possible value
act_controller | Active Controllers | False |
all_controller | Available Controllers | False |
held_in_reset | Held-in-reset controller | True |
load_balancing | Dynamic Load Balancing | True | Yes or No. Attention: You must only set the load_balancing attribute to yes in single-host configurations.
autorecovery | Autorecover after failure is corrected | True |
hlthchk_freq | Health check frequency in seconds | True |
aen_freq | Polled AEN frequency in seconds | True |
balance_freq | Dynamic Load Balancing frequency in seconds | True |
fast_write_ok | Fast Write available | False |
cache_size | Cache size for both controllers | False |
switch_retries | Number of times to retry failed switches | True | 0 - 255. Default: 5. For most configurations, the default is the best setting. If you are using HACMP, it can be helpful to set the value to 0. Attention: You cannot use concurrent firmware download if you change the default setting.

Table 45. Attributes for dac devices
Attribute | Definition | Changeable (T/F) | Possible value
passive_control | Passive controller | False |
alt_held_reset | Alternate held in reset | False |
controller_SN | Controller serial number | False |
ctrl_type | Controller Type | False |
cache_size | Cache Size in MBytes | False |
scsi_id | SCSI ID | False |
lun_id | Logical Unit Number | False |
utm_lun_id | Logical Unit Number | False |
node_name | FC Node Name | False |
location | Location Label | True |
ww_name | World Wide Name | False |
GLM_type | GLM type | False |

Table 46. Attributes for hdisk devices
Attribute | Definition | Changeable (T/F) | Possible value
pvid | | False | Set by AIX.
q_type | | False |
queue_depth | | True | 1 - 64. Note: See Setting the queue depth for hdisk devices on page 124 for important information about setting this attribute.
PR_key_value | | True | 1-64, or None. Note: You must set this attribute to non-zero before the reserve_policy attribute is set.
reserve_policy | | True | no_reserve, PR_shared, PR_exclusive, or single_path
max_transfer | Maximum transfer size is the largest transfer size that can be used in sending I/O. | True | Numeric value; Default = 1 MB. Note: Usually unnecessary to change default, unless very large I/Os require increasing the value.
write_cache | Indicator that shows whether write-caching is enabled on this device (yes) or not (no); see the definition of the cache_method attribute for more information. | False | Yes or No.
size | | False |
raid_level | | False |
rw_timeout | | True |
reassign_to | | True |
scsi_id | | False |
lun_id | | False |
cache_method | | | Default, fast_write, fast_load, fw_unavail, fl_unavail.
(prefetch multiplier) | Number of blocks to be prefetched into read cache for each block read. | False | 0 - 100.
ieee_volname | | False |
Note: Running the # lsattr -Rl <device> -a <attribute> command will show the allowable values for the specified attribute; it also applies to an hdisk attribute list when using MPIO.
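For example, a minimal sketch that displays the allowable values of the queue_depth attribute (hdisk3 is a placeholder for your own device name):
# lsattr -Rl hdisk3 -a queue_depth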
Note: In Table 49 on page 266, the ieee_volname and lun_id attribute values are shown abbreviated. An
actual output would show the values in their entirety.
Appendix E. Accessibility
The information in this appendix describes documentation accessibility and accessibility features in
Storage Manager.
Document format
The publications for this product are in Adobe Portable Document Format (PDF) and must be compliant
with accessibility standards. If you experience difficulties when you use the PDF files and want to request
a web-based format or accessible PDF document for a publication, direct your mail to the following
address:
Information Development
IBM Corporation
205/A015
3039 E. Cornwallis Road
P.O. Box 12195
Research Triangle Park, North Carolina 27709-2195
U.S.A.
In the request, be sure to include the publication part number and title.
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the
information in any way it believes appropriate without incurring any obligation to you.
v Shift-Tab - Moves keyboard focus to the previous component or to the first component in the previous
group of components
v Arrow keys - Move keyboard focus within the individual components of a group of components
Table 50. Storage Manager alternate keyboard operations
Short cut
Action
F1
F10
Move keyboard focus to main menu bar and post first menu; use the
arrow keys to navigate through the available options.
Alt+F4
Alt+F6
Access menu items, buttons, and other interface components with the
keys associated with the underlined letters.
For the menu options, select the Alt + underlined letter combination to
access a main menu, and then select the underlined letter to access the
individual menu item.
For other interface components, use the Alt + underlined letter
combination.
Ctrl+F1
Spacebar
Ctrl+Spacebar
(Contiguous/Non-contiguous)
AMW Logical/Physical View
Esc
Home, Page Up
Shift+Tab
Ctrl+Tab
Move keyboard focus from a table to the next user interface component.
Tab
Down arrow
Left arrow
Right arrow
Up arrow
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries.
Consult your local IBM representative for information on the products and services currently available in
your area. Any reference to an IBM product, program, or service is not intended to state or imply that
only that IBM product, program, or service may be used. Any functionally equivalent product, program,
or service that does not infringe any IBM intellectual property right may be used instead. However, it is
the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or
service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not give you any license to these patents. You can send
license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual
Property Department in your country or send inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
1623-14, Shimotsuruma, Yamato-shi
Kanagawa 242-8502 Japan
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some
states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this
statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of
the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Corporation
Almaden Research
650 Harry Road
Bldg 80, D3-304, Department 277
San Jose, CA 95120-6099
U.S.A.
Such information may be available, subject to appropriate terms and conditions, including in some cases,
payment of a fee.
The licensed program described in this document and all licensed material available for it are provided
by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or
any equivalent agreement between us.
Any performance data contained herein was determined in a controlled environment. Therefore, the
results obtained in other operating environments may vary significantly. Some measurements may have
been made on development-level systems and there is no guarantee that these measurements will be the
same on generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of
those products.
All statements regarding IBM's future direction or intent are subject to change or withdrawal without
notice, and represent goals and objectives only.
All IBM prices shown are IBM's suggested retail prices, are current and are subject to change without
notice. Dealer prices may vary.
This information is for planning purposes only. The information herein is subject to change before the
products described become available.
This information contains examples of data and reports used in daily business operations. To illustrate
them as completely as possible, the examples include the names of individuals, companies, brands, and
products. All of these names are fictitious and any similarity to the names and addresses used by an
actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs
in any form without payment to IBM, for the purposes of developing, using, marketing or distributing
application programs conforming to the application programming interface for the operating platform for
which the sample programs are written. These examples have not been thoroughly tested under all
conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these
programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be
liable for any damages arising out of your use of the sample programs.
Each copy or any portion of these sample programs or any derivative work, must include a copyright
notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked
terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these
symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information
was published. Such trademarks may also be registered or common law trademarks in other countries.
A current list of IBM trademarks is available on the web at http://www.ibm.com/legal/copytrade.shtml.
The following terms are trademarks of International Business Machines Corporation in the United States,
other countries, or both:
IBM
AIX
eServer
FlashCopy
Netfinity
POWER
Series p
RS/6000
TotalStorage
Adobe and PostScript are either registered trademarks or trademarks of Adobe Systems Incorporated in
the United States and/or other countries.
Intel, Intel Xeon, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or
its subsidiaries in the United States and other countries.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc., in the United States, other
countries, or both.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Important notes
Processor speed indicates the internal clock speed of the microprocessor; other factors also affect
application performance.
This product is not intended to be connected directly or indirectly by any means whatsoever to interfaces
of public telecommunications networks.
CD or DVD drive speed is the variable read rate. Actual speeds vary and are often less than the possible
maximum.
When referring to processor storage, real and virtual storage, or channel volume, KB stands for 1024
bytes, MB stands for 1,048,576 bytes, and GB stands for 1,073,741,824 bytes.
When referring to hard disk drive capacity or communications volume, MB stands for 1,000,000 bytes,
and GB stands for 1,000,000,000 bytes. Total user-accessible capacity can vary depending on operating
environments.
Maximum internal hard disk drive capacities assume the replacement of any standard hard disk drives
and population of all hard disk drive bays with the largest currently supported drives that are available
from IBM.
Maximum memory might require replacement of the standard memory with an optional memory
module.
IBM makes no representation or warranties regarding non-IBM products and services that are
ServerProven, including but not limited to the implied warranties of merchantability and fitness for a
particular purpose. These products are offered and warranted solely by third parties.
IBM makes no representations or warranties with respect to non-IBM products. Support (if any) for the
non-IBM products is provided by the third party, not IBM.
Some software might differ from its retail version (if available) and might not include user manuals or all
program functionality.
Glossary
This glossary provides definitions for the
terminology and abbreviations used in IBM
System Storage publications.
http://www.ibm.com/ibm/terminology
adapter
See also
Refers you to a related term.
Abstract Windowing Toolkit (AWT)
In Java programming, a collection of GUI
components that were implemented using
native-platform versions of the
components. These components provide
that subset of functionality which is
common to all operating system
environments.
accelerated graphics port (AGP)
A bus specification that gives low-cost 3D graphics cards faster access to main memory on personal computers than the usual peripheral component interconnect (PCI) bus.
AGP
See accelerated graphics port.
AL_PA
See arbitrated loop physical address.
arbitrated loop
One of three existing Fibre Channel
topologies, in which 2 - 126 ports are
interconnected serially in a single loop
circuit. Access to the Fibre Channel
Arbitrated Loop (FC-AL) is controlled by
an arbitration scheme. The FC-AL
topology supports all classes of service
and guarantees in-order delivery of FC
frames when the originator and responder
are on the same FC-AL. The default
topology for the disk array is arbitrated
loop. An arbitrated loop is sometimes
referred to as a Stealth Mode.
arbitrated loop physical address (AL_PA)
An 8-bit value used to identify a
ATA
See AT-attached.
AT-attached
Peripheral devices that are compatible
with the original IBM AT computer
standard in which signals on a 40-pin
AT-attached (ATA) ribbon cable followed
the timings and constraints of the
Industry Standard Architecture (ISA)
system bus on the IBM PC AT computer.
Equivalent to integrated drive electronics
(IDE).
Auto Drive Transfer (ADT)
A function that provides automatic
failover in case of controller failure on a
storage subsystem.
ADT
See Auto Drive Transfer.
AWT
See Abstract Windowing Toolkit.
BOOTP
See bootstrap protocol.
Bootstrap Protocol (BOOTP)
A protocol that allows a client to find
both its Internet Protocol (IP) address and
the name of a file from a server on the
network.
command
A statement used to initiate an action or
start a service. A command consists of the
command name abbreviation, and its
parameters and flags if applicable. A
command can be issued by typing it on a
command line or selecting it from a
menu.
community string
The name of a community contained in
each Simple Network Management
Protocol (SNMP) message.
concurrent download
A method of downloading and installing
firmware that does not require the user to
stop I/O to the controllers during the
process.
CRC
See cyclic redundancy check.
CRT
See cathode ray tube.
CRU
See customer replaceable unit.
dac
See disk array controller.
dar
See disk array router.
domain
The most significant byte in the node port
(N_port) identifier for the Fibre Channel
(FC) device. It is not used in the Fibre
Channel-small computer system interface
(FC-SCSI) hardware path ID. It is required
to be the same for all SCSI targets
logically connected to an FC adapter.
DASD
See direct access storage device.
data striping
Storage process in which information is
split into blocks (a fixed amount of data)
and the blocks are written to (or read
from) a series of disks in parallel.
default host group
A logical collection of discovered host
ports, defined host computers, and
defined host groups in the
storage-partition topology that fulfill the
following requirements:
v Are not involved in specific logical
drive-to-LUN mappings
v Share access to logical drives with
default logical drive-to-LUN mappings
device type
Identifier used to place devices in the
physical map, such as the switch, hub, or
storage.
DHCP See Dynamic Host Configuration Protocol.
direct access storage device (DASD)
A device in which access time is
effectively independent of the location of
the data. Information is entered and
retrieved without reference to previously
accessed data. (For example, a disk drive
is a DASD, in contrast with a tape drive,
which stores data as a linear sequence.)
DASDs include both fixed and removable
storage devices.
direct memory access (DMA)
The transfer of data between memory and
an input/output (I/O) device without
processor intervention.
disk array controller (dac)
The device, such as a Redundant Array of
Independent Disks (RAID), that manages
one or more disk arrays and provides
functions. See also disk array router.
disk array router (dar)
A router that represents an entire array,
including current and deferred paths to
all logical unit numbers (LUNs) (hdisks
on AIX). See also disk array controller.
drive channels
The DS4200, DS4700, and DS4800
subsystems use dual-port drive channels
that, from the physical point of view, are
connected in the same way as two drive
loops. However, from the point of view of
the number of drives and enclosures, they
are treated as a single drive loop instead
of two different drive loops. A group of
storage expansion enclosures are
connected to the DS3000 or DS4000
storage subsystems using a drive channel
from each controller. This pair of drive
channels is referred to as a redundant
drive channel pair.
drive loops
A drive loop consists of one channel from
each controller combined to form one pair
of redundant drive channels or a
redundant drive loop. Each drive loop is
associated with two ports. (There are two
drive channels and four associated ports
per controller.) For the DS4800, drive
loops are more commonly referred to as
drive channels. See drive channels.
DRAM
See dynamic random access memory.
Dynamic Host Configuration Protocol (DHCP)
A communications protocol that is used to
centrally manage configuration
information. For example, DHCP
automatically assigns IP addresses to
computers in a network.
dynamic random access memory (DRAM)
Storage in which the cells require
repetitive application of control signals to
retain stored data.
ECC
EEPROM
See electrically erasable programmable
read-only memory.
EISA
See Extended Industry Standard Architecture.
ESM canister
See environmental service module canister.
automatic ESM firmware synchronization
When you install a new ESM into an
existing storage expansion enclosure in a
DS3000 or DS4000 storage subsystem that
supports automatic ESM firmware
synchronization, the firmware in the new
ESM is automatically synchronized with
the firmware in the existing ESM.
EXP
HBA
See host bus adapter.
IDE
See integrated drive electronics.
in-band
Transmission of management protocol
over the Fibre Channel transport.
Industry Standard Architecture (ISA)
Unofficial name for the bus architecture of
the IBM PC/XT personal computer. This
bus design included expansion slots for
plugging in various adapter boards. Early
versions had an 8-bit data path, later
expanded to 16 bits. The "Extended
Industry Standard Architecture" (EISA)
further expanded the data path to 32 bits.
See also Extended Industry Standard
Architecture.
initial program load (IPL)
The process that loads the system
programs from the system auxiliary
storage, checks the system hardware, and
prepares the system for user operations.
Also referred to as a system restart,
system startup, and boot.
integrated circuit (IC)
A microelectronic semiconductor device
that consists of many interconnected
transistors and other components. ICs are
constructed on a small rectangle cut from
a silicon crystal or other semiconductor
material. The small size of these circuits
allows high speed, low power dissipation,
and reduced manufacturing cost
compared with board-level integration.
Also known as a chip.
integrated drive electronics (IDE)
A disk drive interface based on the 16-bit
IBM personal computer Industry Standard
Architecture (ISA) in which the controller
electronics reside on the drive itself,
eliminating the need for a separate
adapter card. Also known as an
Advanced Technology Attachment
Interface (ATA).
IPL
See initial program load.
IRQ
See interrupt request.
ISA
See Industry Standard Architecture.
label
LAN
See local area network.
LBA
See logical block address.
MAC
See Media Access Control.
man page
In UNIX systems, one page of online
documentation. Each UNIX command,
utility, and library function has an
associated man page.
MCA
media scan
A media scan is a background process
that runs on all logical drives in the
storage subsystem for which it has been
enabled, providing error detection on the
drive media. The media scan process
scans all logical drive data to verify that it
can be accessed, and optionally scans the
logical drive redundancy information.
Media Access Control (MAC)
In networking, the lower of two sublayers
of the Open Systems Interconnection
model data link layer. The MAC sublayer
handles access to shared media, such as
whether token passing or contention will
be used.
metro mirror
A function of the remote mirror and copy
feature that constantly updates a
secondary copy of a logical drive to
match changes made to a source logical
drive. See also remote mirroring, Global
Mirroring.
MIB
See management information base.
NMS
node
NVSRAM
Nonvolatile storage random access
memory. See nonvolatile storage.
Object Data Manager (ODM)
An AIX proprietary storage mechanism
for ASCII stanza files that are edited as
part of configuring a drive into the
kernel.
ODM See Object Data Manager.
out-of-band
Transmission of management protocols
outside of the Fibre Channel network,
typically over Ethernet.
partitioning
See storage partition.
parity check
A test to determine whether the number
of ones (or zeros) in an array of binary
digits is odd or even.
A mathematical operation on the
numerical representation of the
information communicated between two
pieces. For example, if parity is odd, any
character represented by an even number
has a bit added to it, making it odd, and
an information receiver checks that each
unit of information has an odd value.
PCI local bus
See peripheral component interconnect local
bus.
PDF
performance event
Event related to thresholds set on storage
area network (SAN) performance.
Peripheral Component Interconnect local bus
(PCI local bus)
A local bus for PCs, from Intel, that
provides a high-speed data path between
the CPU and up to 10 peripherals (video,
disk, network, and so on). The PCI bus
coexists in the PC with the Industry
Standard Architecture (ISA) or Extended
Industry Standard Architecture (EISA)
bus. ISA and EISA boards plug into an IA
or EISA slot, while high-speed PCI
controllers plug into a PCI slot. See also
Industry Standard Architecture, Extended
Industry Standard Architecture.
polling delay
The time in seconds between successive
discovery processes during which
discovery is inactive.
port
SA Identifier
See storage subsystem Identifier.
SAN
See storage area network.
SCSI
See small computer system interface.
SFP
See small form-factor pluggable.
sweep method
A method of sending Simple Network
Management Protocol (SNMP) requests
for information to all the devices on a
subnet by sending the request to every
device in the network.
switch A Fibre Channel device that provides full
bandwidth per port and high-speed
routing of data with link-level addressing.
switch group
A switch and the collection of devices
connected to it that are not in other
groups.
switch zoning
See zoning.
synchronous write mode
In remote mirroring, an option that
requires the primary controller to wait for
the acknowledgment of a write operation
from the secondary controller before
returning a write I/O request completion
to the host. See also asynchronous write
mode, remote mirroring, Metro Mirroring.
system name
Device name assigned by the vendor
third-party software.
TCP
See Transmission Control Protocol.
TCP/IP
See Transmission Control Protocol/Internet
Protocol.
terminate and stay resident program (TSR
program)
A program that installs part of itself as an
extension of DOS when it is executed.
topology
The physical or logical mapping of the
location of networking components or
nodes within a network. Common
network topologies include bus, ring, star,
and tree. The three Fibre Channel
topologies are fabric, arbitrated loop, and
point-to-point. The default topology for
the disk array is arbitrated loop.
TL_port
See translated loop port.
transceiver
In communications, the device that
connects the transceiver cable to the
Ethernet coaxial cable. The transceiver is
used to transmit and receive data.
Transceiver is an abbreviation of
transmitter-receiver.
translated loop port (TL_port)
A port that connects to a private loop and
allows connectivity between the private
loop devices and off loop devices (devices
not connected to that particular TL_port).
Transmission Control Protocol (TCP)
A communication protocol used in the
Internet and in any network that follows
the Internet Engineering Task Force (IETF)
standards for internetwork protocol. TCP
provides a reliable host-to-host protocol in
packet-switched communication networks
and in interconnected systems of such
networks.
Transmission Control Protocol/Internet Protocol
(TCP/IP)
A set of communication protocols that
provide peer-to-peer connectivity
functions for both local and wide-area
networks.
trap
trap recipient
Receiver of a forwarded Simple Network
Management Protocol (SNMP) trap.
Specifically, a trap receiver is defined by
an Internet Protocol (IP) address and port
to which traps are sent. Presumably, the
actual recipient is a software application
running at the IP address and listening to
the port.
TSR program
See terminate and stay resident program.
uninterruptible power supply
A source of power from a battery
installed between the commercial power
and the system that keeps the system
running, if a commercial power failure
occurs, until it can complete an orderly
end to system processing.
user action events
Actions that the user takes, such as
changes in the storage area network
(SAN), changed settings, and so on.
worldwide port name (WWPN)
A unique 64-bit identifier associated with
a switch. The WWPN is assigned in an
implementation-independent and
protocol-independent manner.
worldwide name (WWN)
A 64-bit, unsigned, unique name identifier
that is assigned to each Fibre Channel
port.
WORM
See write-once read-many.
Write Once Read Many (WORM)
Any type of storage medium to which
data can be written only a single time, but
can be read from any number of times.
After the data is recorded, it cannot be
altered.
WWN See worldwide name.
zoning
In Fibre Channel environments, the
grouping of multiple ports to form a
virtual, private, storage network. Ports
that are members of a zone can
communicate with each other, but are
isolated from ports in other zones.
A function that allows segmentation of
nodes by address, name, or physical port
and is provided by fabric switches or
hubs.
Index
A
cache hit
optimizing 83
percentage 83
cache mirroring 229, 253
cache mirroring, disabling 126
cache read-ahead, choosing a
multiplier 83
CHAP 35, 36
cluster services
AIX requirements 253
HACMP ES and ESCRM 251
cluster services, high-availability
AIX 251
AIX requirements 252
HP-UX requirements 259
MC/Service Guard 259
PSSP with GPFS 253
Solaris 260
Solaris requirements 260
system dependencies 251
clustering
VMware ESX Server
configuration 246
command-line interface (CLI) 84
comments, Script Editor 85
components, Storage Manager
software 2, 29
concurrent firmware download 38, 43
configuration 4, 5, 161, 170
device driver, Linux DM-Multipath
driver 97
devices 123
direct-attached 2, 5
DS TKLM Proxy Code server, starting,
stopping, and restarting 161
DS TKLM Proxy Code, for external
security key management 160
FDE drives 166
GPFS, PSSP, and HACMP
clusters 253
HBAs 231
hosts 89
hot-spare drives 67
IBM i 72
iSCSI host ports 36
iSCSI settings 34
MTU 38
network 2, 3
network example 3
network settings for iSCSI host
attachment 37
recovery 63
SAN-attached 2, 5
storage subsystem passwords 32
storage subsystems 5
types 2
configuration types
Storage Manager installation 2
controller cache memory 76
B
background media scan 77
BIOS
settings 231
BOOTP server
sample network 3
BOOTP servers
sample network 6
controller firmware
downloading 38, 40
firmware
downloading 40
Controller Firmware Upgrade Tool
device health, checking 42
firmware, downloading 42
log file, viewing 43
opening 42
overview 41
storage subsystems, adding 42
using 42
controllers
addresses 6
dar 121
disk array 121
IP addresses 6
transfer rate, optimizing 82
copy services 45
cross connections
VMware ESX Server 248
D
dac
and RDAC 121
attributes 265
dar
and RDAC 121
attributes 265
data
collecting before HBA hot swap on
AIX 131
files, defragmenting 84
optimal segment size, choosing 84
redundancy 64
securing with FDE 144
DCE 126
DDC
See Diagnostic Data Capture
default host types, defining and
verifying 68
device drivers
description 91
DMP, installing 118
downloading latest versions xi, xiii
failover 91
HBAs 96
Linux DM-Multipath driver 97
multipath 91
installing 95
RDAC 91, 110
RDAC failover driver on Solaris 116
SCSIport Miniport 96
Storport Miniport 96
Veritas DMP DSM 97
with HACMP cluster 253
Device Specific Module
See DSM
devices
adding 32
devices (continued)
configuring 123
identification 122
identifying 120
setting alert notifications 33
Devices tab
See Enterprise Management window
DHCP server
sample network 3
DHCP servers
sample network 6
DHCP, using 36
Diagnostic Data Capture
MEL events 227
Recovery Guru 225, 227
recovery steps 226
Script Editor 225
direct-attached configuration
setting IP addresses 6
direct-attached configurations
setting up 5
discovery, automatic storage
subsystem 31
disk access, minimizing 84
disk array controller
See dac
disk array router
See dar
disk drives
FDE 144
FDE hot-spare 190
FDE, configuring 166
FDE, erasing 187
FDE, installing 166
FDE, migrating 183
FDE, secure erase 156
FDE, unlocking (external) 156
FDE, unlocking (local and
external) 181
FDE, unlocking (local) 155
hot spares, assigning 67
hot spares, configuring 67
hot spares, restoring data from 67
DMP 110
planning for installation 118
preparing for installation 118
DMP drivers 106
DMP DSM driver 97
documentation
about xi
accessibility 267
documents xiii
FDE best practices 194
notices xvi
related documentation resources xii
statements xvi
Storage Manager xii
Sun Solaris 110
Sun StorEdge 116
Symantec 97, 118
using xv
Veritas 97, 118
VMware 248
websites xii, xiii
drive firmware
downloading 43
levels, determining 39, 40
drivers xiii
See also device drivers
rpaphp 134
drives
See disk drives
DS TKLM Proxy Code server,
restarting 161
DS TKLM Proxy Code server, supported
operating systems 160
DS TKLM Proxy Code, configuring for
external security key management 170
DS TKLM Proxy Code, for external
security key management 165
DSM 96
DVE 126
Dynamic Capacity Expansion
See DCE
Dynamic Logical Drive Expansion
See DVE
Dynamic Multipathing (DMP)
See also DMP
description 110
E
Enterprise Management window
alert notifications 33
Devices tab 12
elements 11
online help xii
Setup tab 14
table view description 12
tree view description 12
Enterprise Management Window
adding devices 32
errors
FCP disk array 227
errors, media scan 78
ESM firmware
automatic ESM firmware
download 43
automatic ESM firmware
synchronization 43
downloading 38, 43
levels, determining 39, 40
Ethernet
Solaris requirements, cluster
services 260
Ethernet MAC addresses
See MAC addresses
events
DDC MEL 227
events, critical
descriptions of 205
numbers 205
required actions 205
solving problems 205
external security key management 145,
149, 156, 160, 161, 165, 170
configuring 170
DS TKLM Proxy Code server 165
F
fabric switch environments
94
failover driver
description 91
MPxIO 111
failure support
cluster services 251
DMP driver 110
MPxIO 110
RDAC driver 110
redistributing logical drives 129, 130
Fast!UTIL 231
FC/SATA Intermix premium feature 45
FCP disk array errors 227
FDE 75, 143
arrays, securing 191
backup and recovery 194
best practices 194
boot support 194
disk drives 144
disk drives, configuring 166
disk drives, erasing 187
disk drives, installing 166
disk drives, migrating 183
disk drives, unlocking (local and
external) 181
enabling 166
external security key
management 145, 193
frequently asked questions 191
hot-spare disk drives 190
hot-spare drives 193
key management method, choosing
a 145
local security key management 145,
192
log files 191
RAID arrays, securing 175
secure drives, unlocking
(external) 156
secure drives, unlocking (local) 155
secure erase 192
secure erase, using 156
securing data against a breach 144
security authorizations 157
security key identifier 149
security key management, FDE 145
security keys
creating 146
obtaining 145, 146
using 146
security keys, changing
(external) 149
security keys, changing (local) 148
security keys, creating 146
security keys, obtaining 145
states, locked and unlocked 194
terminology 159
understanding 144
using with other premium
features 193
feature enable identifier 46
feature key file 46
features
Fast!UTIL 231
features, premium
See premium features
Fibre Channel
HBAs in a switch environment 94
G
General Parallel File System (GPFS)
glossary 273
GPFS 253
253
H
HACMP 251
using 252
hardware
Ethernet address 4, 5
service and support xv
VMware ESX Server
requirements 246
hardware initiators, iSCSI 37
HBAs
advanced settings 232
connecting an a Fibre Channel switch
environment 94
default settings 232
device drivers 96
Fibre Channel switch
environment 94
hot swap, completing 138
hot-swap on Linux, preparing
for 134
hot-swap, replacing 130
hot-swap, replacing for AIX and
Linux 133
hot-swap, replacing on AIX 131
in a direct-attached configuration 5
in a SAN-attached configuration 5
JNI settings 239
JNI settings on Solaris 239
on Linux, replacing 134
overview 93
PCI hotplug, replacing 137
QLogic settings 233, 243
settings 231
using 93
hdisk
attributes 124, 265
queue depth, setting 124
hdisks
verification 122
head-swap, FDE storage subsystems 183
help
obtaining xiv, xv
websites xiii
help, online xii
heterogeneous environment 70
High Availability Cluster
Multi-Processing
See HACMP
host bus adapters
See HBAs
HBAs
setting host ports 49
Solaris
QLogic settings 243
host computers
See hosts
host groups
defining 49, 70
host ports
defining 71
definition 71
host ports, iSCSI 36
host types
defining default 68
verifying 68
host-agent software
stopping and restarting 124
host-agent-managed configuration 5
host-agent-managed, setting up 5
host-agent-management method
UTM device 121
hosts xi
AIX, devices on 121
automatic discovery 31
configuring 89
defining 71
heterogeneous 70
IBM i 72, 73
iSCSI 37
manual discovery 32
pre-installation tasks 4, 5
VMware ESX Server 246
hot_add utility 123
hot-spare
FDE disk drives 190
hot-spare drives 67
hot-swap HBAs
See HBAs, hot-swap
HP-UX
cluster services, high-availability
requirements 259
logical drives, redistributing 129
multipath I/O with PV-links 106, 107
native multipathing 110
PV-links 106, 107
I
I/O
access pattern 83
request rate optimizing 83
size 83
write-caching 83
I/O access pattern and I/O size 83
I/O activity, monitoring 91
I/O data field 81, 82
iSCSI (continued)
software initiator considerations,
Microsoft 38
statistics, viewing 36
supported hardware initiators,
using 37
target authentication, changing 35
target discovery, changing 36
target identification, changing 36
iSNS server, using 36
J
JNI
HBA settings 239
HBA settings on Solaris 239
K
keys, security (FDE)
See FDE
L
least path weight policy 81
least queue depth policy 81
Linux
DCE 126
DVE 126
HBA hot swap, completing the 138
HBAs, preparing for hot swap 134
hot-swap HBA, replacing 133
mapping new WWPNs to storage
subsystems 138
replacing HBAs 134
RHEL 5.3, with Veritas Storage
Foundation 5.0 127
SUSE, with Veritas Storage
Foundation 127
Linux DM-Multipath driver 97
Linux host
support xiv
Linux MPP drivers 104
load balancing 261
load_balancing attribute 261, 262
local security key management 146, 148,
155
LockKeyID, FDE 149
log files 202, 205
major events log 191
security changes 191
logical drives 5, 263
configuration 67
creating 63, 66
creating from free or unconfigured
capacity 63
definition 63
expected usage 67
identifying 120
modification priority setting 84
redistributing 129, 130
Logical tab
See Subsystem Management window
lsslot tool 135
LUNs
adding to an existing partition 71, 72
LUNs (continued)
attributes 124, 265
checking size 128
mapping to a new partition 71
mapping to a partition on VMware
ESX Server 249
M
MAC addresses
identifying 7
Mac OS 105
management stations xi
compatible configuration types 2
description 1, 4
VMware ESX Server 245
manual discovery 32
Mappings tab
See Subsystem Management window
Maximum Transmission Unit
See MTU
MC/Service Guard 259
Media Scan 77
changing settings 77
duration 80
errors reported 78
overview 77
performance impact 78
settings 79
medical imaging applications 65
Medium Access Control (MAC) address
See MAC addresses
MEL
security changes 191
messages
Support Monitor 202
Microsoft iSCSI Software Initiator 38
Microsoft Windows MPIO 96
Microsoft Windows MPIO/DSM 96
Minihubs 5
modifying the proxy configuration file for
external security key management 161
MPIO 122
MPP drivers 104
MPxIO 110
device names, changing 111
devices, verifying 111
driver, disabling 116
failover driver, enabling 111
failover path, configuring 111
failover path, unconfiguring 111
latest driver version, acquiring 111
MTU
settings 38
multi-user environments 65
multimedia applications 65
multipath driver 105
description 91
multipath drivers 97, 104, 120
installing 95
multipathing 38, 95, 110
DMP, installing on Solaris 118
MPxIO, using with Solaris 110
native, on HP-UX 110
PV-links, using on HP-UX 106, 107
RDAC failover driver on Solaris 116
multipathing (continued)
redistributing logical drives on
AIX 129
redistributing logical drives on
HP-UX 129
redistributing logical drives on
Solaris 130
Multiplexed I/O (MPxIO)
See MPxIO
mutual authentication permissions,
entering for iSCSI 36
My Support xvi
N
names, storage subsystem 33
network installation, preparing 3
network-managed configuration 4
network-managed, setting up 4
networks
configuration example 3
general configuration 3
iSCSI settings 37
notes, important 271
notices xvi
general 269
notifications
to alphanumeric pagers 33
to email 33
using SNMP traps 33
NVSRAM firmware
downloading 38, 40
O
Object Data Manager (ODM) attributes
definitions 261
initial device identification 122
operating system
requirements 23
operating systems
booting with SAN boot 89
DS TKLM Proxy Code 160
Solaris 111
supported for Storage Manager 1
Other frequently asked questions 194
out-of-band configuration
See network-managed configuration
P
packages, Storage Manager software 29
Parallel System Support Programs
(PSSP) 253
parity 64
partitioning 49
passwords, setting 32
PCI core 134
PCI Hotplug 135
PCI hotplug HBAs 137
PCI Hotplug tools 134
PCI slot information 135
performance
ODM attribute settings and 124
Performance Monitor 80
Persistent Reservations 77
Physical tab
See Subsystem Management window
policies, load-balancing
least path weight policy 81
least queue depth policy 81
round robin policy 81
premium features 45
configuring 74
descriptions of 45
disabling 47
enabling 46, 47
FDE 143
FDE and FlashCopy 193
FDE and VolumeCopy 193
FDE, enabling 166
feature enable identifier 46
feature key file 46
FlashCopy 74
Full Disk Encryption
See FDE
key 75
Remote Mirror Option 75
storage partitioning 49, 70
using 74
VolumeCopy 75
prerequisites
HP-UX
cluster services,
high-availability 259
prerequisites, Storage Manager client
software 29
problem solving, critical events 205
problems, solving 205
products, developed 269
profile, storage subsystem 48
proxy, installing on AIX or Linux 165
proxy, installing on Windows 165
proxy, uninstalling on Windows 165
PSSP 253
PV-links
See HP-UX
Q
QLogic
HBA settings 231, 233, 239, 243
settings 243
QLogic SANsurfer xii
queue depth
changing for AIX 125
changing for Windows 125
maximum, calculating 125
queue depth, setting 124
R
RAID
application behavior, by level 83
choosing levels 83
data redundancy 64
levels 64
securing arrays with FDE 175
RAID level
application behavior 66
choosing 66
S
SAN boot
configuring hosts 89
requirements 89
SAN-attached configuration
setting up 5
schedule support bundle collection 199
Script Editor
Diagnostic Data Capture 225
using 85
window 85
SCSIport Miniport 96, 97
secure erase, FDE 156
security authorizations, FDE 157
security keys
changing (external) 149
changing (local) 148
creating 146
identifier 149
using to unlock FDE drives 181
security keys, FDE
See FDE
service
requesting xiv
Summary tab
See Subsystem Management window
support
multipath driver 91
notifications xvi
obtaining xiv, xv
using Storage Manager to send
support bundles 200
websites xiii, xvi
support bundles
collecting manually 201
scheduling collection 199
sending to IBM Support 200
Support Monitor
configuring 197
console area 198
enterprise status 198
icons 198
installation with a console
window 27
installation wizard 25
interface 198
log window 202
messages 202
sending support bundles 200
solving problems 223
support bundle collection
schedule 199
support bundles 201
troubleshooting 223
uninstalling 30
uninstalling on Linux, AIX, or
Solaris 30
uninstalling on Windows 30
using 197
Support Monitor log window, using 202
support notifications xvi
receiving xvi
Support tab
See Subsystem Management window
switch
technical support website xiv
switch environment 94
switches
in a SAN-attached configuration 5
zoning 5
System p host
support xiv
System Storage Interoperation Center
(SSIC) xiv
System Storage Productivity Center xiii
System Storage Productivity Center
(SSPC) xiv
System x host
support xiv
T
target authentication, changing for
iSCSI 35
target discovery, changing for iSCSI 36
target identification, changing for iSCSI 36
Task Assistant
description 50
shortcuts 50
TCP/IP
IPv6 37
TCP/IP addresses, static
assigning to storage subsystems 8
terminology xi
terminology, FDE 159
Tivoli Key Lifecycle Manager
See IBM Tivoli Key Lifecycle Manager
TKLM
See IBM Tivoli Key Lifecycle Manager
tools
lsslot 135
PCI Hotplug 134
Support Monitor 197
trademarks 271
transfer rate 80
troubleshooting 205
critical events 205
Diagnostic Data Capture 225
Support Monitor 223
U
uninstallation
DS TKLM Proxy Code on Windows,
for external security key
management 165
Storage Manager 30
Support Monitor 30
universal transport mechanism
See UTM device
updates
receiving xvi
updates (product updates) xvi
utilities
hot_add 123
SMdevices 120
SMrepassist 123
UTM device 121
V
Veritas 97
DMP 118
Dynamic Multipathing (DMP) 110
File System 118
Storage Foundation 127
Storage Foundation 5.0 127
VolumeManager 110, 118
Veritas DMP drivers 106
Veritas DMP DSM 97
Veritas Storage Foundation
LVM scan, disabling for SUSE Linux
Enterprise Server 127
RDAC module, enabling on RHEL for
Storage Foundation 5.0 127
Veritas Storage Foundation 5.0
RDAC module, enabling 127
RDAC module, unloading 127
VMware ESX Server 245
cross connections 248
mapping LUNs to a partition 249
VolumeCopy 75
W
Web sites
AIX xiv
SAN support xiv
websites
documentation xii
FDE best practices 194
IBM publications center xiv
IBM System Storage product
information xiv
list xiii
notification xvi
premium feature activation xiv
services xv
Solaris failover driver 118
SSIC xiv
support xv, xvi
switch support xiv
System p xiv
System Storage Productivity Center
(SSPC) xiv
System x xiv
VMware 248
window
Script Editor 85
worldwide port name
See WWPN
write-caching
enabling 83
WWPN
mapping to storage subsystem on AIX
and Linux 138
Z
zoning 94
zoning switches
GA32-0963-03
Printed in USA