PowerHA SystemMirror Version 6.1 for AIX Enterprise Edition: Storage Based High Availability and Disaster Recovery
SC23-4863-14
Note Before using this information and the product it supports, read the information in Notices on page 143.
Fifteenth Edition (September 2010)

This edition applies to PowerHA SystemMirror Enterprise Edition Version 6.1 and to all subsequent releases of this product until otherwise indicated in new editions.

Note to U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

© Copyright IBM Corporation 1998, 2010.
Contents
About this document
    Highlighting
    Case-sensitivity in AIX
    ISO 9000
    PowerHA SystemMirror publications
    PowerHA SystemMirror Enterprise Edition publications
    PowerHA SystemMirror Smart Assist publications
PowerHA SystemMirror Enterprise Edition: Storage based High Availability and Disaster Recovery
    DS8000 PPRC replication resources guide
        Metro Mirror guide
        Global Mirror guide
    SVC replication resources guide
        Overview of SVC management
        Planning for SVC management
        Installing PowerHA SystemMirror Enterprise Edition Metro Mirror for SVC
        Configuring PowerHA SystemMirror Enterprise Edition Metro Mirror for SVC
        Changing a PowerHA SystemMirror Enterprise Edition for Metro Mirror SVC configuration
        Troubleshooting PowerHA SystemMirror Enterprise Edition for Metro Mirror for SVC
    EMC SRDF replicated resources guide
        Overview of the SRDF management system
        Planning for SRDF management
        Installing SRDF-managed filesets
        Configuring SRDF resources
        Changing SRDF-managed replicated resources
        Removing SRDF-managed replicated resources
        Troubleshooting PowerHA SystemMirror Enterprise Edition SRDF-managed replicated resources
    Hitachi Truecopy/HUR replication resources guide
        Overview of Truecopy/HUR management
        Planning for Truecopy/HUR management
        Configuring volume groups and filesystems on Truecopy/HUR protected disks
        Installing Truecopy management filesets
        Configuring Truecopy/HUR resources
        Changing Truecopy/HUR-managed replicated resources
        Removing Truecopy/HUR-managed replicated resources
Notices
Trademarks
Index
About this document
Highlighting
The following highlighting conventions are used in this book:
Bold
    Identifies commands, subroutines, keywords, files, structures, directories, and other items whose names are predefined by the system. Also identifies graphical objects such as buttons, labels, and icons that the user selects.
Italics
    Identifies parameters whose actual names or values are to be supplied by the user.
Monospace
    Identifies examples of specific data values, examples of text similar to what you might see displayed, examples of portions of program code similar to what you might write as a programmer, messages from the system, or information you should actually type.
Case-sensitivity in AIX
Everything in the AIX operating system is case-sensitive, which means that it distinguishes between uppercase and lowercase letters. For example, you can use the ls command to list files. If you type LS, the system responds that the command is not found. Likewise, FILEA, FiLea, and filea are three distinct file names, even if they reside in the same directory. To avoid causing undesirable actions to be performed, always ensure that you use the correct case.
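For example, a session like the following illustrates the point (the file name, prompt, and exact error text are illustrative and depend on your shell and environment):

$ ls /tmp/filea
/tmp/filea
$ LS /tmp/filea
ksh: LS: not found.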
ISO 9000
ISO 9000 registered quality systems were used in the development and manufacturing of this product.
- PowerHA SystemMirror on Linux: Installation and Administration Guide, SC23-5211
- PowerHA SystemMirror for AIX: Smart Assist Developer's Guide, SC23-5210
- IBM International Program License Agreement
PowerHA SystemMirror Enterprise Edition: Storage based High Availability and Disaster Recovery
This guide provides information necessary to plan, install, configure, and maintain Metro Mirror, Global Mirror, SRDF management replicated resources, and Hitachi Truecopy/HUR management.

Note: PowerHA SystemMirror is the new name for HACMP.

To view or download the PDF version of this topic, select Storage based High Availability and Disaster Recovery guide.

Downloading Adobe Reader: You need Adobe Reader installed on your system to view or print this PDF. You can download a free copy from the Adobe website (www.adobe.com/products/acrobat/readstep.html).
Prerequisites
Before using PowerHA SystemMirror Enterprise Edition for Metro Mirror or reading this guide, you should be familiar with:
- PowerHA SystemMirror installation and administration
- General background and Metro Mirror support for the type of implementation you intend to configure:
    - Direct management PPRC (ESS systems)
    - DSCLI PPRC (ESS or DS systems)
    - SAN Volume Controller (SVC) PPRC

For more information, refer to the appropriate published or online documentation. For more information on how these products integrate with PowerHA SystemMirror, refer to the chapters in this book dealing with individual product configuration.

PowerHA SystemMirror Enterprise Edition for Metro Mirror features

PowerHA SystemMirror Enterprise Edition for Metro Mirror extends PowerHA SystemMirror cluster management for highly available applications and servers to support the disaster recovery mechanism supplied by PPRC. PPRC is a hardware mirroring technique that the IBM TotalStorage Enterprise Storage Server uses to replicate data. It allows mirroring to be suspended and restarted without affecting data integrity.

For all types of Metro Mirror support, PowerHA SystemMirror helps to manage PPRC instances. Direct management PPRC is the longest-running type of support through PowerHA SystemMirror Enterprise Edition, designed to provide basic PPRC management for ESS systems. In response to demand for simpler management interfaces, PowerHA SystemMirror Enterprise Edition also provides more automated PPRC management (notably in the area of path and instance (pair) creation) through the following configurations:
- DSCLI management for PowerHA SystemMirror Enterprise Edition Metro Mirror allows additional flexibility by supporting both ESS and DS storage subsystems with the same interface.
- SAN Volume Controller (SVC) management for PowerHA SystemMirror Enterprise Edition Metro Mirror provides storage virtualization and an additional layer of disaster recovery via the SVC cluster and hardware configuration.

In general, the styles of Metro Mirror support mentioned above (DSCLI and SVC) are intended to make Metro Mirror management easier to install and manage. Direct management PPRC has the advantage of not having "middle man" software and hardware to maintain, but, in exchange, it places a relatively heavy configuration and maintenance burden on the administrator.

High Availability and Disaster Recovery Support Features

PowerHA SystemMirror Enterprise Edition for Metro Mirror provides high availability and disaster recovery with:
- Automatic fallover of PPRC-protected volume pairs between nodes within a site
- Automatic fallover of PPRC-protected volume pairs between sites
- Automatic recovery/reintegration of PPRC-protected volume pairs between sites
- Support for user-defined policy-based resource groups
- Support for the following inter-site management policies for resource groups:
    - Prefer Primary Site
    - Online on Either Site
- Support for the Subsystem Device Driver (SDD)
- Support for cluster verification and synchronization
- Support for C-SPOC for cluster administration.

PPRC mirroring

PPRC is a mirroring technique used to maintain consistent copies of data across two ESS systems that are at different locations. It mirrors at the disk subsystem level, making it transparent to hosts. PPRC is also called Remote Copy and Mirror. The three types of remote copy and mirror are:
- Metro Mirror (synchronous mirroring)
- Global Mirror (asynchronous mirroring)
- Global Copy (eXtended Distance mirroring)

In this topic collection, the term PPRC is synonymous with synchronous mirroring.

Disk subsystems

IBM TotalStorage disk subsystems (Enterprise Storage Server - ESS, or DS) are large disk subsystems that are typically configured to provide RAID-5 and RAID-10 volumes. They contain two System p servers connected to the system backplane to manage and monitor the system. The disk drives are configured to provide groups of logical disks that are divided into partitions. A data partition in a disk group is referred to as a volume. Each volume is associated with a logical unit number (LUN) that is a subdivision of the physical disk on a storage system.

The disk subsystems have connections to:
- System p (or other) host servers over SCSI or Fibre Channel links
- Other disk subsystems over ESCON or Fibre Channel links. (PPRC uses these physical links to mirror data.)
- ESSNet, the local management Ethernet network for the ESS. Commands sent over this network configure and manage PPRC through ESS Copy Services.
- Nodes that access stored data.

Note: PowerHA SystemMirror Enterprise Edition for Metro Mirror requires that inter-storage links be available to carry data in both directions at the same time. If ESCON links are used, a minimum of two ESCON links is required because each link can carry data in only one direction at a time. You should have at least four links to improve throughput and to provide redundancy of the ESCON cables and adapters.

PPRC overview

When PPRC is activated, it establishes a mirror between specified volumes on two storage systems. The PPRC mirrored volumes are referred to as a PPRC pair, instance, or as a PPRC-protected volume.

PPRC can mirror data in one of two ways:
- Synchronous mirroring provides concurrent copy over ESCON links. This is supported by PowerHA SystemMirror Enterprise Edition. Note that ESCON links transmit data in one direction and have distance limitations.
- Non-synchronous mirroring, also called Global Mirror, provides copying over longer distances. Extended distance mirroring ensures that volumes are consistent with each other at a point in time. This method is not supported in PowerHA SystemMirror Enterprise Edition.

Note: FlashCopy is an optional feature on the ESS. It is an instant (point-in-time) copy of the data.

Hardware configuration for PPRC mirroring

The following illustrations show sample hardware configurations that support PPRC mirroring. The illustrations also show the connections from the ESS to:
- Nodes over SCSI or Fibre Channel connections
- Fibre Channel or ESCON links that connect the two ESS systems to mirror data
- The ESS Specialist and ESS Copy Services Specialist over ESSNet and a wide-area network (WAN) that joins the two ESSNets.
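On the AIX nodes, the storage that the ESS presents over these SCSI or Fibre Channel connections appears as hdisk (or vpath) devices. As a rough, informal sketch (the device name hdisk4 is a placeholder, and output varies by configuration and driver level), standard AIX commands can be used to identify them:

lsdev -Cc disk        # list all configured disk devices on the node
lscfg -vl hdisk4      # show vital product data (model and serial number) for one disk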
Optional Metro Mirror configurations

These topics are a brief overview of each of the optional PowerHA SystemMirror Enterprise Edition for Metro Mirror configurations available.

DSCLI management

DSCLI management is one type of optional configuration of PowerHA SystemMirror Enterprise Edition for Metro Mirror. It provides a simplified PPRC interface for both the ESS and DS storage hardware.

The DSCLI interface provides simplified management of PPRC paths and instances in the following ways:
- Provides a simplified interface to IBM TotalStorage PPRC services on ESS or DS storage systems in order to allow management and reporting on PPRC instances and/or paths.
- Monitors the status of PPRC relationships and consistency groups between the volumes being mirrored. It reports any change in status, such as a volume moving to an offline state.

The DSCLI client software interfaces with the ESSNI server on the HMC, SMC, or controller connected to the storage to which the DSCLI client is connected.
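As an informal illustration of the kind of query the DSCLI client sends through the ESSNI server, a command of the following general form lists the state of a range of PPRC volume pairs. The HMC address, user ID, password, storage image ID, and volume ID range shown here are placeholders, not values taken from this guide:

dscli -hmc1 hmc1.example.com -user admin -passwd xxxxxx lspprc -dev IBM.2107-75ABCD1 0100-010F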
The following list highlights a few of the specific types of functions that you can perform with the DS command-line interface:
- Check and verify your storage unit configuration.
- Check the current Copy Services configuration that is used by the storage unit.
- Create new logical storage and Copy Services configuration settings.
- Modify or delete logical storage and Copy Services configuration settings.

For more information on DSCLI configuration, refer to the current DSCLI online or otherwise published documentation: (http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp?topic=/com.ibm.storage.ssic.help.doc/f2c_cliesscli_1kx2so.html)

Related concepts
PowerHA SystemMirror Enterprise Edition for Metro Mirror with DSCLI management on page 43
These topics describe the planning, installation and configuration tasks for PowerHA SystemMirror Enterprise Edition for Metro Mirror with DSCLI management, from here on referred to as DSCLI management. DSCLI management simplifies how you manage PPRC replicated resources on IBM TotalStorage systems and how you can integrate PPRC replicated resources into a PowerHA SystemMirror configuration.

Related information
dscli

SVC management

SVC management is another optional configuration of PowerHA SystemMirror Enterprise Edition for Metro Mirror. SVC provides both storage virtualization and PPRC management services in the following ways:
- Provides virtualized storage that interfaces with the TotalStorage PPRC services to provide management services and reporting for PPRC relationships (instances) and/or consistency groups.
- Monitors the status of PPRC relationships and consistency groups between the volumes being mirrored. It reports any change in status, such as a volume moving to an offline state.
- Responds to a site failure (or possible site failure) by suspending all mirroring and, if necessary, activating the mirror copy at the backup site for host access. All data changes are tracked until the mirrors can be synchronized again.
- Synchronizes the volumes.

A brief command sketch for viewing this status manually appears below.

Related concepts
SVC replication resources guide on page 99
These topics present information for planning, installing, and configuring a PowerHA SystemMirror Enterprise Edition Metro Mirror for SAN Volume Controller (SVC) cluster.

PowerHA SystemMirror Enterprise Edition for Metro Mirror in a PowerHA SystemMirror cluster

PowerHA SystemMirror Enterprise Edition for Metro Mirror allows you to include PPRC-mirrored volumes in a PowerHA SystemMirror cluster. This requires two PowerHA SystemMirror sites, a PowerHA SystemMirror component to which you assign nodes. Cluster nodes access the same shared volume groups, but the nodes at each site access them from different physical volumes: the two volumes in a single PPRC pair. This is different from a single-site PowerHA SystemMirror environment, in which all cluster nodes sharing volume groups have physical connections to the same set of disks.
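For the SVC management style described above, the relationship and consistency group status that PowerHA SystemMirror monitors can also be checked by hand from the SVC CLI over SSH. The cluster alias, user ID, and delimiter choice below are placeholders for illustration only:

ssh admin@svc_cluster1 svcinfo lsrcrelationship -delim :
ssh admin@svc_cluster1 svcinfo lsrcconsistgrp -delim :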
PPRC replicated resources

A PPRC replicated resource is a PowerHA SystemMirror resource that manages a PPRC pair (it has a primary and a secondary instance that is copied from one site to another). The managing resource group definition includes the volume groups and filesystems built on top of the volume groups defined to a PPRC replicated resource.

PowerHA SystemMirror Enterprise Edition for Metro Mirror supports three types of replicated resources:
- Direct management PPRC replicated resources. A PPRC replicated resource is a PPRC pair associated with a PowerHA SystemMirror site that is included in a PowerHA SystemMirror cluster. The definition for a PPRC replicated resource contains a volume identifier and the name of the associated ESS. PowerHA SystemMirror has knowledge of which volumes are mirrors of each other for each PPRC replicated resource. Synchronous mirroring is supported.
- DSCLI-managed PPRC replicated resources. A DSCLI-managed PPRC replicated resource is a definition of a set of volume pairs and the paths needed to communicate between them. The resource group definition includes the volume groups built on top of the PPRC replicated volumes. Synchronous mirroring is supported.
- SVC-managed PPRC replicated resources. An SVC-managed PPRC replicated resource contains the SVC virtual disk (vDisk) volume pair information combined with the SVC cluster name, bundled in PPRC relationships and consistency groups. Synchronous mirroring is supported.

Resource groups that include PPRC replicated resources

A PowerHA SystemMirror resource group is a collection of resources that comprise the operating environment for an application. Applications, as resources in a resource group, are made highly available. Resource group management policies direct which node hosts the resource group during normal operation and when the host node fails or goes offline.

With PowerHA SystemMirror Enterprise Edition for Metro Mirror, resource group configuration is the same as for other resource groups. In addition, the resource group includes:
- A shared volume group and the PPRC replicated resources associated with the individual volumes in the volume group
- An intersite management policy to handle a resource group during site recovery.

For information about PowerHA SystemMirror resource groups, see the Planning Guide.

Limitations and restrictions on resource groups that include PPRC replicated resources

Refer to the section for the PowerHA SystemMirror Enterprise Edition Metro Mirror configuration you are implementing for additional limitations. In general, the following restrictions apply to PowerHA SystemMirror resource groups that will manage PPRC replicated resources, because of the way that PPRC instances are managed (source site nodes have I/O access; target site nodes do not). The outcome of this is that any PowerHA SystemMirror policy that allows a resource group to come online on more than one site at a time is not supported:
- The inter-site management policy Online Both Sides is not supported.
- The startup policies Online Using Distribution Policy and Online on All Available Nodes are not supported.
- The fallover policy Fallover Using Dynamic Node Priority is not supported.

Related information
Planning guide

PowerHA SystemMirror sites
PowerHA SystemMirror Enterprise Edition for Metro Mirror support requires the use of sites. PowerHA SystemMirror supports two sites. The primary site is the active site, and the secondary site is the standby site.

The Inter-Site Management Policy for a resource group directs how a resource group and its resources fall over in response to an outage, and how they fall back if configured to do so. For each resource group, one site is an active production site and the other a backup site. If the nodes at the active production site become unavailable, the backup site becomes the active production site.

For non-direct-management types of PowerHA SystemMirror Enterprise Edition for Metro Mirror support (DSCLI, SVC PPRC), each site contains at least one storage system and the nodes attached to it. For PowerHA SystemMirror Enterprise Edition Direct management environments, each site contains only one ESS.

Resource groups have two types of management policies:
- Resource group management policies determine fallover behavior if a node becomes unavailable.
- Site management policies determine fallover behavior if all of the nodes at a site are not available.

Related tasks
Configuring PowerHA SystemMirror Enterprise Edition for Metro Mirror using DSCLI management on page 56
These topics explain how to configure DSCLI management with PowerHA SystemMirror.

Fallover and fallback

PowerHA SystemMirror Enterprise Edition for Metro Mirror handles the automation of fallover from one site to another in response to an outage at a production site, minimizing recovery time. When a site fails, the resource group configuration determines whether source volumes are accessible from the secondary site.

PowerHA SystemMirror automates application recovery by managing replicated resources defined to PowerHA SystemMirror resource groups in the following manner:
- The fallover of nodes within a site based on node priority (as identified in the nodelist for a resource group)
- The fallover between sites (as specified by the site management policy for a resource group)
- The fallback of a resource group or site as configured.

When an application is running on an active production site:
- Updates to the application data are made to the disks associated with the active production site.
- Data is mirrored using PPRC to the backup disks.

If the node or the disks at the production site become unavailable:
- The application moves to a server at the backup site.
- The application continues operation using the mirrored copy of the data.

When the initial production site becomes active again, resource group and site management policies determine whether or not the application moves back to the previous site:
- The direction of mirroring may be reversed.
- The application may be stopped and restarted on another node.
- Manual intervention may be required to bring the application back to a nominal functioning state.
The Metro Mirror management types and the mirroring they support are summarized below:
- DSCLI management, via the ESSNI Server on either storage controller or Hardware Management Console (HMC): Synchronous
- SVC management of PPRC services on SVC-specific hardware: Synchronous
Coexistence of two solutions

PowerHA SystemMirror Enterprise Edition for Metro Mirror solutions can coexist on the same PowerHA SystemMirror cluster only if the PPRC pairs are managed by one of the PPRC solutions at a time. Please refer to the latest support information for which PPRC solutions can successfully coexist on a single PowerHA SystemMirror cluster.

Planning overview

Once you have decided what type of PowerHA SystemMirror Enterprise Edition for Metro Mirror implementation to set up, you can begin planning the configuration. At this stage, you should be familiar with the planning tasks for base PowerHA SystemMirror. For information about planning PowerHA SystemMirror clusters, see the Planning Guide.
The following planning is required for all PowerHA SystemMirror Enterprise Edition for Metro Mirror implementations:
- Identify sites for PowerHA SystemMirror Enterprise Edition for Metro Mirror.
- Identify the resource groups needed to manage the PPRC replicated resources (optional at this point; this can be done at a later stage).
- Identify storage systems to be used in the configuration.
- Plan for connections to the storage units.
- Plan the PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration. Identify the type of PowerHA SystemMirror Enterprise Edition for Metro Mirror support to be used.

Related concepts
General PowerHA SystemMirror planning worksheets on page 81
You should print and use the paper planning worksheets from the PDF version of this guide instead of the HTML version. The PDF version has each worksheet aligned to start at the top of a page. You may need more than one copy of some of the worksheets.

Related information
Planning guide

Planning PowerHA SystemMirror sites

Within a resource group, the nodes at one site may handle the PPRC replicated resource differently than the nodes at the other site, especially in cases where the states of the volumes (suspended or full duplex) are different at the two sites.

PowerHA SystemMirror Enterprise Edition for Metro Mirror requires two PowerHA SystemMirror sites for use within a resource group to control which volume in a PPRC pair a node can access. Although nodes at both sites can access a volume group, access is permitted to only one volume in a PPRC pair at a time - the source volume. This prevents nodes at different sites from accessing the same volume group at the same time. Typically, a number of volumes are mirrored through PPRC from one site to the other.

Completing the PowerHA SystemMirror Enterprise Edition for Metro Mirror site worksheet

A PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration requires two sites. Each site contains one ESS and the nodes attached to it. Complete a PowerHA SystemMirror Site Worksheet for each site.

To define a PowerHA SystemMirror Enterprise Edition for Metro Mirror site, enter the following information on the worksheet:
1. Record the Site Name. Use no more than 20 alphanumeric characters and underscores. Do not begin the name with a number.
2. Record the names of the Site Nodes. The nodes must have the same names as those you define to the PowerHA SystemMirror cluster. A site can have 1 - 7 nodes. A node can belong to only one site.
3. Record the Site Backup Communication Method as None.

Related reference
PowerHA SystemMirror Site Worksheet on page 81
Use the PowerHA SystemMirror Site Worksheet to record PowerHA SystemMirror sites to support PPRC replicated resources.

Planning PowerHA SystemMirror Enterprise Edition for Metro Mirror resource groups

In addition to basic resource group planning, you need to plan for resource group attributes such as sites and PPRC replicated resources.
Plan your resource groups using the guidelines in Planning resource groups in the Planning Guide.

Note: A PowerHA SystemMirror resource group can contain multiple volume groups, file systems, PPRC replicated resources, applications, and so on.

When configuring a resource group to use a PPRC replicated resource, the resource group includes nodes in both cluster sites. For the resource group to manage nodes and resources across sites, you assign one of the following inter-site management policies:
- Prefer Primary Site. In a two-site configuration, replicated resources at startup are on the site with the highest priority, fall over to the other site, and then fall back to the site with the highest priority.
- Online on Either Site. Replicated resources are on either site at startup, fall over to the other site, and remain on that site after fallover. This selection simplifies resource group takeover rules, which is helpful if you have a number of resource groups.

Note: Each PPRC pair will be included in a resource group whose home node is located at the site that is primary for that PPRC pair.

If you want to set up a mutual recovery configuration for ESS volumes, you configure one resource group in which one site is the active site and another resource group in which the other site is the active site. For an example of a mutual recovery configuration, see Sample configuration for Direct management.

Resource group limitations

In PowerHA SystemMirror Enterprise Edition for Metro Mirror configurations, all volume groups in a resource group must contain only PPRC-protected disks. If PowerHA SystemMirror Enterprise Edition for Metro Mirror is not being used, all volume groups in a resource group must contain only non-PPRC-protected disks.

Each PPRC pair will be included in a resource group whose home node is located at the site that is primary for that PPRC pair.

Related reference
Sample configuration for Direct management on page 20
You can set up a mutual recovery configuration in which each site acts as a production site with the other site acting as an associated backup site.

Related information
Planning resource groups

Completing the PPRC resource group worksheet

Complete a Resource Groups for PPRC Replicated Resources Worksheet for each resource group that contains PPRC replicated resources.

To identify information required to include PPRC replicated resources in a resource group, enter the following information on the PPRC Replicated Resources Worksheet:
1. Record the Resource Group Name for the resource group that will contain the PPRC replicated resource. The name can contain up to 32 characters and underscores. Do not begin the name with a number.
2. Record the startup, fallover, and fallback policies for the resource group. Resource groups that include PPRC replicated resources do not support the startup policy Online on All Available Nodes.
3. Record the Intersite Management Policy: Prefer Primary Site or Online on Either Site.
Prefer Primary Site is recommended for use with PPRC.
4. Record the names of the Nodes in the resource group, listing the nodes in the prioritized order in which they will appear in the nodelist for the resource group.
5. Plan and record the name of each PPRC replicated resource that will be managed (and, if you will be using Direct management PPRC, specify whether it is primary or secondary). You will use this name for the resources and assign each as primary or secondary on the PPRC-Mirrored Volumes Worksheet appropriate for the PowerHA SystemMirror Enterprise Edition PPRC configuration you are working on.
6. Record the names of any related volume groups (and file systems).
Related information
Installation guide

Installation prerequisites for PowerHA SystemMirror Enterprise Edition Metro Mirror

Before installing PowerHA SystemMirror Enterprise Edition Metro Mirror, be sure to have the necessary base PowerHA SystemMirror filesets installed.

PowerHA SystemMirror Enterprise Edition requires base PowerHA SystemMirror. Base PowerHA SystemMirror can be installed at the same time as PowerHA SystemMirror Enterprise Edition. Installation requisites will ensure the proper version(s) of base PowerHA SystemMirror filesets are installed before installing PowerHA SystemMirror Enterprise Edition. At least the following base PowerHA SystemMirror filesets should be installed on all cluster nodes:

Note: Fileset versions must reflect the version of PowerHA SystemMirror you are installing.

In addition, refer to the following table for a list of filesets to install for each type of PPRC management support:
Direct management
    cluster.es.pprc.cmds
    cluster.es.pprc.rte
    cluster.msg.en_US.pprc (and/or other appropriate language message sets)
DSCLI
    cluster.es.pprc.cmds
    cluster.es.pprc.rte
    cluster.es.spprc.cmds
    cluster.es.spprc.rte
    cluster.msg.en_US.pprc (and/or other appropriate language message sets)
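To confirm which of these filesets (if any) are already present on a node, a query along the following lines can be run; the fileset patterns simply match the names in the table above:

lslpp -L "cluster.es.pprc*" "cluster.es.spprc*" "cluster.msg.en_US.pprc"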
Since each type of PPRC management has different prerequisites, information on installing the particular filesets for specific support types (for example, cluster.es.pprc filesets) is deferred to the section specific to that PPRC management type.

Note: For the latest information about PowerHA SystemMirror Enterprise Edition software, see the Release Notes in the /usr/es/sbin/cluster/release_notes_xd file.

Related concepts
PowerHA SystemMirror Enterprise Edition for Metro Mirror with Direct management on page 18
These topics describe the planning, installation, and configuration tasks for setting up PowerHA SystemMirror Enterprise Edition for Metro Mirror with Direct management. Using this method, PowerHA SystemMirror directly manages PPRC pairs by communicating with the ESS Copy Services Server.
PowerHA SystemMirror Enterprise Edition for Metro Mirror with DSCLI management on page 43
These topics describe the planning, installation and configuration tasks for PowerHA SystemMirror Enterprise Edition for Metro Mirror with DSCLI management, from here on referred to as DSCLI management. DSCLI management simplifies how you manage PPRC replicated resources on IBM TotalStorage systems and how you can integrate PPRC replicated resources into a PowerHA SystemMirror configuration.
SVC replication resources guide on page 99
These topics present information for planning, installing, and configuring a PowerHA SystemMirror Enterprise Edition Metro Mirror for SAN Volume Controller (SVC) cluster.

Contents of the installation media

The PowerHA SystemMirror Enterprise Edition for Metro Mirror installation media provides the images for installation on each node in the cluster that can take over a PPRC mirrored volume group. These images include:

Filesets for base install
- cluster.es.pprc
- cluster.es.spprc
- cluster.es.svcpprc

Direct management and DSCLI message sets
- cluster.msg.ja_JP.pprc
- cluster.msg.en_US.pprc
- cluster.msg.Ja_JP.pprc
- cluster.msg.En_US.pprc

SVC PPRC message sets
- cluster.msg.ja_JP.svcpprc
- cluster.msg.en_US.svcpprc
- cluster.msg.Ja_JP.svcpprc
- cluster.msg.En_US.svcpprc

User Documentation for all Management Types
- cluster.doc.en_US.pprc

PowerHA SystemMirror Enterprise Edition for Metro Mirror installation choices

Install the PowerHA SystemMirror Enterprise Edition for Metro Mirror software on each cluster node (server).

Installing from an installation server

To install the PowerHA SystemMirror Enterprise Edition for Metro Mirror software in a cluster environment, you can create a PowerHA SystemMirror Enterprise Edition for Metro Mirror installation server (containing the PowerHA SystemMirror Enterprise Edition for Metro Mirror software images for installation) on one node and then load the images onto the remaining cluster nodes. This is the fastest way to install PowerHA SystemMirror Enterprise Edition for Metro Mirror.

PowerHA SystemMirror Enterprise Edition for Metro Mirror supports the Network Installation Management program and Alternate Disk Migration. For instructions on creating an installation server, see the AIX Installation and migration guide or the Network Installation Management documentation.

Related information
AIX installation and migration
Network installation management

Installing from a hard disk

To install PowerHA SystemMirror Enterprise Edition for Metro Mirror software from your hard disk, you copy the software from the installation media to the hard disk prior to installation.

To copy the PowerHA SystemMirror Enterprise Edition for Metro Mirror software to your hard disk:
1. Place the PowerHA SystemMirror Enterprise Edition for Metro Mirror CD into the CD-ROM drive.
2. Enter smit bffcreate
   The Copy Software to Hard Disk for Future Installation panel appears.
3. Enter the name of the CD-ROM drive in the INPUT device / directory for software field and press Enter. If you are unsure of the input device name, press F4 to list available devices. Select the correct drive and press Enter. That drive name appears in the INPUT device / directory field as the valid input device.
4. Press Enter to display the Copy Software to Hard Disk for Future Installation panel.
5. Enter field values as follows:
SOFTWARE name
    Press F4 for a software listing. Install the images for PowerHA SystemMirror Enterprise Edition for Metro Mirror. For a list of the PowerHA SystemMirror Enterprise Edition for Metro Mirror images, see the section Contents of the installation media.
DIRECTORY for storing software package
    Change the value to the storage directory that all nodes using PowerHA SystemMirror Enterprise Edition for Metro Mirror can access.
6. Enter values for the other fields as appropriate for your site.
7. When you are satisfied with the entries, press Enter. SMIT responds: Are you sure?
8. Press Enter again to copy the software.

Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror from the Hard Disk

After the PowerHA SystemMirror Enterprise Edition for Metro Mirror software is on your system, follow the instructions in the section Installing the software from the installation media to install the software.

Related tasks
Installing the software from the installation media
If you install the PowerHA SystemMirror Enterprise Edition for Metro Mirror software from the CD-ROM, install the software directly onto each cluster node.

Related reference
Contents of the installation media on page 12
The PowerHA SystemMirror Enterprise Edition for Metro Mirror installation media provides the images for installation on each node in the cluster that can take over a PPRC mirrored volume group.

Installing the software from the installation media

If you install the PowerHA SystemMirror Enterprise Edition for Metro Mirror software from the CD-ROM, install the software directly onto each cluster node.

To install the PowerHA SystemMirror Enterprise Edition for Metro Mirror software on a server node:
1. If you are installing directly from the installation media, insert the CD into the CD-ROM drive.
2. Enter smit install_all
   SMIT displays the first Install and Update from ALL Available Software panel.
3. Enter the device name of the installation media or install directory in the INPUT device / directory for software field and press Enter. If you are unsure about the input device name or about the install directory, press F4 to list the available devices. Then select the correct device or directory and press Enter. The correct value is entered into the INPUT device / directory field as the valid input device.
4. Enter field values as follows. Press F1 for help on any field.
   Note: You should use F4 to list the software before proceeding with the installation. This way you can install either the English or the Japanese message catalogs.
INPUT device / directory for software
    This field shows the device or directory you specified earlier.
SOFTWARE to install
    Press F4 for a software listing. In the software list, use the arrow keys to locate all software filesets associated with an image. Next press F7 to select either an image or a fileset. Then press Enter after making all selections. Your selections appear in this field.
PREVIEW only?
    If set to yes, the preview option checks and verifies that installation prerequisites are met, for instance that required software is installed and sufficient disk space is available. Press F1 for details. When you are ready to perform the actual installation, set this field to no.
COMMIT software updates?
    This field applies only when you install software updates (PTFs). See the F1 help for details.
SAVE replaced files?
    This field applies only when you install software updates (PTFs). If you select no in response to commit software updates?, select yes for this field. See F1 help for details.
AUTOMATICALLY install requisite software?
    Set this field to no if the prerequisite software for the latest version is already installed or if the OVERWRITE same or newer versions? field is set to yes; otherwise, set this field to yes to install required software. See F1 help for details.
EXTEND file systems if space needed?
    Select yes if you have adequate hard disk space, no if you have limited space. See F1 help for details.
OVERWRITE same or newer versions?
    For new installations, leave this field set to no. Set it to yes if you are reinstalling the software. If you set this field to yes, set the Automatically install requisite software field to no. See the F1 help for details.
VERIFY install and check file sizes?
    Select yes if you want the system to perform some checks on the software you installed. See F1 help for details.
DETAILED output?
    Select yes if you want a detailed log of all installation messages.
Process multiple volumes?
    Select this option if you want to enable the processing of multiple-volume CDs. See F1 for information.
ACCEPT new license agreements?
    Select yes for this item to proceed with installation. If you choose no, installation may stop with a warning that one or more filesets require software license agreements. You accept the license agreement only once for each node.
Preview new LICENSE agreements?
    Select yes to view the text of the license agreements. The text appears in the current window in the language defined on your system.
5. When you are satisfied with the entries, press Enter. SMIT responds: Are you sure?
6. Press Enter to install the software.
7. After you install the software on a node, reboot that node.

Read the PowerHA SystemMirror Enterprise Edition for Metro Mirror Release Notes in the /usr/es/sbin/cluster/release_notes_xd file for information that does not appear in the product documentation.

After you successfully install the software on each cluster node, you are ready to configure PowerHA SystemMirror Enterprise Edition for Metro Mirror for one of the following management types:
- Direct management
- DSCLI
- SVC PPRC
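As a quick sanity check before moving on to configuration, the newly installed filesets and the release notes can be reviewed on each node, and the node rebooted, along these lines (an informal sketch; adjust the fileset pattern to the management type you installed):

lslpp -L "cluster.es*pprc*"                   # confirm the PPRC filesets installed cleanly
more /usr/es/sbin/cluster/release_notes_xd    # review the Enterprise Edition release notes
shutdown -Fr                                  # reboot the node, as the last step requires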
Related concepts
PowerHA SystemMirror Enterprise Edition for Metro Mirror with Direct management on page 18
These topics describe the planning, installation, and configuration tasks for setting up PowerHA SystemMirror Enterprise Edition for Metro Mirror with Direct management. Using this method, PowerHA SystemMirror directly manages PPRC pairs by communicating with the ESS Copy Services Server.
PowerHA SystemMirror Enterprise Edition for Metro Mirror with DSCLI management on page 43
These topics describe the planning, installation and configuration tasks for PowerHA SystemMirror Enterprise Edition for Metro Mirror with DSCLI management, from here on referred to as DSCLI management. DSCLI management simplifies how you manage PPRC replicated resources on IBM TotalStorage systems and how you can integrate PPRC replicated resources into a PowerHA SystemMirror configuration.
SVC replication resources guide on page 99
These topics present information for planning, installing, and configuring a PowerHA SystemMirror Enterprise Edition Metro Mirror for SAN Volume Controller (SVC) cluster.

Related reference
Contents of the installation media on page 12
The PowerHA SystemMirror Enterprise Edition for Metro Mirror installation media provides the images for installation on each node in the cluster that can take over a PPRC mirrored volume group.

Upgrading to the latest release of PowerHA SystemMirror Enterprise Edition

These topics discuss how to upgrade to the latest release of the PowerHA SystemMirror Enterprise Edition for Metro Mirror software.

Upgrading the PowerHA SystemMirror Enterprise Edition software

Before you upgrade to a new release of PowerHA SystemMirror Enterprise Edition for Metro Mirror, make sure you are familiar with the process for installing and configuring PowerHA SystemMirror Enterprise Edition. See the section Installing the software from the installation media.
- Make sure that your system meets the installation prerequisites. See the section Installation prerequisites for PowerHA SystemMirror Enterprise Edition Metro Mirror.
- If you have not completed planning worksheets and diagrams, do so before continuing. You should transfer all information about your existing installation, plus any changes you plan to make after the upgrade, to these new worksheets.
- Ensure that each cluster node has its own PowerHA SystemMirror Enterprise Edition license.
- The PowerHA SystemMirror Enterprise Edition software uses 1 MB of disk space.
- Perform the installation process as the root user.

Before upgrading your cluster:
1. Archive any localized script and configuration files to prevent losing them during an upgrade.
2. Commit the installation (if it is applied but not committed) so that the PowerHA SystemMirror software can be installed over the existing version. To see if your configuration is already committed, enter:
   lslpp -h cluster.*
3. If the word COMMIT is displayed under the Action header, continue to the next step. If not, run the smit install_commit utility before installing the latest version software. SMIT displays the Commit Applied Software Updates (Remove Saved Files) panel.
4. Enter field values as follows:
SOFTWARE name
    Press F4 for a software listing, and select all of the cluster.* filesets.
COMMIT old version if above version used it?
    Set this field to yes.
EXTEND filesystem if space needed?
    Set this field to yes.
5. Make a mksysb backup of each node's configuration. If you restore a mksysb backup onto your system, you need to reset the SCSI IDs on the system.
6. Save any customized event information.

Note: When upgrading, do not leave the cluster at mixed versions for long periods. Mixed versions of the software in the cluster can impact availability within the cluster.

For information about installing the latest software, see the section Installing the software from the installation media. The PPRC filesets must be upgraded at the same time as the other cluster filesets.

Related tasks
Installing the software from the installation media on page 14
If you install the PowerHA SystemMirror Enterprise Edition for Metro Mirror software from the CD-ROM, install the software directly onto each cluster node.

Related reference
Installation prerequisites for PowerHA SystemMirror Enterprise Edition Metro Mirror on page 11
Before installing PowerHA SystemMirror Enterprise Edition Metro Mirror, be sure to have the necessary base PowerHA SystemMirror filesets installed.

Verifying the upgraded cluster definition

After the PowerHA SystemMirror Enterprise Edition for Metro Mirror software is installed on all nodes, and all nodes have been rebooted, you should verify the configuration. Verification provides errors or warnings to ensure that the cluster definition is the same on all nodes.

To verify the cluster:
1. Enter smit hacmp
2. In SMIT, select Extended Configuration > Extended Verification and Synchronization and press Enter.
3. Set the option Verify, Synchronize, or Both to Verify and press Enter.

Verification verifies both the PowerHA SystemMirror and PowerHA SystemMirror Enterprise Edition for Metro Mirror configurations.

Note: You cannot synchronize a mixed-version cluster. New functionality is available only when all nodes have been upgraded and the cluster has been synchronized. Do not expect commands like clfindres to supply correct information in a mixed cluster.

Recovering from a failed installation

When you install PowerHA SystemMirror Enterprise Edition, the cl_convert command runs automatically to convert the PowerHA SystemMirror configuration database from a previous release to that of the current release. If the installation fails, run cl_convert from the command line to convert the database. In a failed conversion, run cl_convert using the -F flag.

Running a conversion utility requires:
- Root user privileges
- The PowerHA SystemMirror Enterprise Edition version from which you are converting
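For example, a manual conversion after a failed installation might look like the following; the path and the version value (the release being converted from) are shown only as an assumed illustration and should be confirmed against your installed level:

/usr/es/sbin/cluster/conversion/cl_convert -F -v 5.5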
The cl_convert utility logs conversion progress to the /tmp/clconvert.log file so that you can gauge conversion success. This log file is regenerated each time cl_convert or clconvert_snapshot runs.

For more information about cl_convert and clconvert_snapshot, see the respective man pages or see PowerHA SystemMirror for AIX Commands in the Administration Guide.

Related information
HACMP for AIX commands

Modifying previous cluster snapshots

After you upgrade your PowerHA SystemMirror Enterprise Edition software to the latest version, you may want to restore one or more of the previous version cluster snapshots you created using the Cluster Snapshot utility.

The default directory path for storage and retrieval of a snapshot is /usr/es/sbin/cluster/snapshots; however, you may have specified an alternate path using the SNAPSHOTPATH environment variable. Check in these locations before using the /usr/es/sbin/cluster/conversion/clconvert_snapshot utility to convert the snapshot.

The snapshot is based on your full PowerHA SystemMirror configuration, including the configuration to include PPRC replicated resources in a cluster. The clconvert_snapshot utility updates PowerHA SystemMirror configuration data with new information for the latest version.

To convert and apply a cluster snapshot, enter:
clconvert_snapshot -v version# -s snapshot_file_name
Where the -s flag is used with the snapshot file name you want to update or apply, and the -v flag is used with the version of the saved snapshot.

Related information
Saving and restoring cluster configurations

Addressing problems during the installation

If you experience problems during the installation, the installation program usually performs a cleanup process automatically. If, for some reason, the cleanup is not performed after an unsuccessful installation:
1. Enter smit install to display the Installation and Maintenance menu.
2. Select Install and Update Software.
3. Select Clean Up After an Interrupted Installation.
4. Review the SMIT output (or examine the /smit.log file) for the cause of the interruption.
PowerHA SystemMirror Enterprise Edition for Metro Mirror with Direct management
These topics describe the planning, installation, and configuration tasks for setting up PowerHA SystemMirror Enterprise Edition for Metro Mirror with Direct management. Using this method, PowerHA SystemMirror directly manages PPRC pairs by communicating with the ESS Copy Services Server. PowerHA SystemMirror Enterprise Edition for Metro Mirror with Direct management is the oldest type of PPRC support PowerHA SystemMirror provides. It has the simplest hardware configuration, but requires more work on the administrator's part to set up.
Briefly, PowerHA SystemMirror provides support for direct management of ESS PPRC by managing specified pair and path tasks as defined by the user on the ESS storage system's Copy Services Server (CSS). PowerHA SystemMirror provides monitoring, fallover, and fallback support for PPRC by issuing commands directly to the CSS via the ESS CLI interface. Refer to the general overview in Chapter 1 of this book for a full description of this type of configuration.

Direct management can be used to support PPRC between ESS 800 storage systems.

Planning for Direct management

You should be familiar with the planning tasks for PowerHA SystemMirror. You should have completed the planning steps in Metro Mirror general planning for PowerHA SystemMirror sites, at least.

To continue the plan for a PowerHA SystemMirror Enterprise Edition for Metro Mirror with Direct management environment:
- Plan for connections to the ESS
- Plan for ESS Copy Services
- Plan the PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration:
    - Identify ESS systems to be included
    - Identify which resource groups will contain the PPRC replicated resources
- Identify PPRC replicated resources and provide information about the associated volumes
- (Optional) Plan for any user-specific PPRC tasks.

Related concepts
PowerHA SystemMirror Enterprise Edition for Metro Mirror general planning on page 8
These topics describe planning tasks required for all the types of PowerHA SystemMirror Enterprise Edition Metro Mirror support.

Related information
Planning guide

Planning prerequisites for Direct management

PowerHA SystemMirror Enterprise Edition PPRC with Direct management manages PPRC resources by communicating with the Copy Services Server (CSS) on ESS systems via the ESS CLI. Accordingly, prior to configuring Direct management, ensure the following:
- The version of the ESS CLI shipped with the microcode for your storage system has been installed on all PowerHA SystemMirror cluster nodes.
- You have access to the ESS Copy Services Web Interface for the storage systems involved in your configuration.

ESS Copy Services

The ESS Copy Services Web Interface provides a configuration interface for setting up Direct management PPRC. For complete information on PPRC and the ESS Copy Services, see the ESS Web Interface User's Guide on the IBM Web site at the following URL: (http://publibfp.boulder.ibm.com/epubs/pdf/f2bui04.pdf)

For more information on Direct management PPRC, refer to the appropriate sections in this topic collection.
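One informal way to confirm the first prerequisite on each cluster node is to look for the installed ESS CLI filesets and commands; the exact fileset name depends on the ESS CLI release, so the grep pattern below is only an assumption:

lslpp -L | grep -i 2105        # look for installed ESS (2105) CLI filesets
which rsList2105.sh            # confirm that the ESS CLI commands are in the PATH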
See also Installation Prerequisites later in this chapter for more information on software and installation prerequisites.

Limitations
- You can only use non-concurrent volume groups with PPRC.
- Disk heartbeat networks are not supported in PowerHA SystemMirror for physical volumes that are mirrored using PPRC.

Related reference
Installation prerequisites for Direct management on page 28
There are some installation prerequisites for Direct management.

Related information
IBM TotalStorage Enterprise Storage Server: Web Interface User's Guide

Sample configuration for Direct management

You can set up a mutual recovery configuration in which each site acts as a production site with the other site acting as an associated backup site.

Implementing a mutual recovery configuration requires:
- Two PowerHA SystemMirror sites (the same as for a single recovery configuration)
- Two resource groups.

The following figure shows a mutual recovery configuration with two resource groups in which Site_2 is the production site with Site_1 as the recovery site for Resource Group 2, and Site_1 is the production site with Site_2 as the recovery site for Resource Group 1. In Resource Group 1 the nodes n1 and n2 have a higher priority, and in Resource Group 2 the nodes n3 and n4 have a higher priority. The volume groups in both resource groups are included in PPRC replicated resources.
The order of the nodes in each resource group's nodelist indicates which site is considered to be the production site and which site is considered to be the backup site for that resource group. The site that includes the nodes with the highest priority is considered the production site for the resource group. In the preceding example, the configuration for resource group 1 would be:
Ordered sitelist: Site_1, Site_2
Primary volume: volume100@ESS_1 (mirrored volume on ESS_1)
Secondary volume: volume200@ESS_2 (mirrored volume on ESS_2)
Ordered nodelist: n1, n2, n3, n4
In the preceding example, the configuration for resource group 2 would be:
Ordered sitelist: Site_2, Site_1
Primary volume: volume210@ESS_2 (mirrored volume on ESS_2)
Secondary volume: volume110@ESS_1 (mirrored volume on ESS_1)
Ordered nodelist: n3, n4, n1, n2
Planning integration with PPRC

Before you configure PowerHA SystemMirror Enterprise Edition for Metro Mirror, you configure PPRC on the ESS.

To configure PPRC, you set up:
- PPRC paths, ESCON links that transfer mirrored data from one ESS to another
- PPRC pairs, volumes mirrored from one ESS to another. The PPRC paths connect PPRC pairs.

For information about configuring PPRC, see the ESS Web Interface User's Guide on the IBM Web site at the following URL: http://publibfp.boulder.ibm.com/epubs/pdf/f2bui04.pdf

Related information
IBM TotalStorage Enterprise Storage Server: Web Interface User's Guide

Planning connections to the ESS

PowerHA SystemMirror Enterprise Edition for Metro Mirror communication with the ESS requires connections to each of the ESS subsystems through the ESSNet administrative network. This access lets PowerHA SystemMirror Enterprise Edition for Metro Mirror use the ESS Command Line Interface (CLI) to control the ESS subsystems during event processing. Commands are sent to the ESS Copy Services Server software that runs on one of the ESS subsystems in the environment.

The PowerHA SystemMirror cluster nodes can either connect directly to the ESSNet or connect over a router or bridge. The interfaces that communicate with the ESSNet can be defined to PowerHA SystemMirror, but IP address takeover (IPAT) should not be used on those interfaces. If the interfaces are defined to PowerHA SystemMirror, ensure that the nodes can gain access to ESSNet at any time.

Note: The reliability of the connection between a node and the ESSNet directly affects the performance of PowerHA SystemMirror Enterprise Edition for Metro Mirror. A slow or unreliable network connection to ESSNet results in commands to initiate a fallover being processed slowly or unreliably.

Planning Copy Services Server on ESS

Considerations for using Copy Services Server differ depending on the PPRC version.
PPRC Versions before 2.2 When you set up ESS Copy Services, you define one ESS cluster processor complex as the active Copy Services Server and one ESS cluster processor complex on another ESS as the backup Copy Services Server. If the active Copy Services Server fails, a notification message is sent. Because PPRC versions prior to 2.2 do not support dual active Copy Services Servers, you manually reset the Copy Services Server to make the backup server assume the active role. PowerHA SystemMirror cannot restart the Copy Services Server. Set up the active Copy Services Server at the recovery site. PPRC Version 2.2 or Later When you set up ESS Copy Services, you define one ESS cluster processor complex in each ESS as a Copy Services Server. Because both Copy Services Servers are active at all times, you do not need to manually restart a Copy Service Server if one fails. Configuring PPRC paths: When configuring PPRC paths that connect PPRC pairs to be included in a PPRC replicated resource, keep in mind that each ESCON link can carry up to 64 paths at the same time. It also carries data in one direction at a time. PowerHA SystemMirror Enterprise Edition for Metro Mirror requires that ESCON links be available to carry data in both directions at the same time; therefore, you need a minimum of two physical links. To improve throughput and to provide redundancy for the ESCON cables and adapters, you should have at least four links. Note that the ESS manages the paths. Planning for PPRC replicated resources: PPRC pairs are defined so that one volume in the pair resides on the ESS at one site, and the other volume in the pair resides on the ESS at the other site. In a PPRC pair, the volume that data is being written to is the source volume, and the volume that contains a mirrored copy of the data is the target volume. The definition for a PPRC replicated resource contains the volume identifier and the name of the ESS for the source and the target volumes. PowerHA SystemMirror has knowledge of which volumes are mirrors of each other for each PPRC replicated resource. To plan PPRC replicated resources for PowerHA SystemMirror Enterprise Edition for Metro Mirror, you should have a good understanding of the volume groups and volumes involved. When you complete the PPRC-Mirrored Volumes Worksheet, you provide information about all of the PPRC pairs that will be configured as PPRC replicated resources. The terms primary and secondary are specific to each pair of PPRC-mirrored volumes as determined by the pair definition. In configurations that designate one site as a production site and the other as a backup site, the production site holds the primary volumes for the PPRC pairs, and the backup site holds the secondary volumes. Each PPRC pair is included in a resource group whose home node is located at the site that is primary for the PPRC pair. In a mutual recovery configuration, in which nodes from both sites are active, each site contains primary volumes for some PPRC pairs and secondary volumes for others.
Note: PPRC copies the volume information, including the PVID, from one volume in a PPRC pair to the other. The volumes at both sites contain the same logical volumes and must therefore be imported with the same volume group name. This also allows single-name entries in a resource group definition.
Use ESS Copy Services to obtain information about the ESS configuration and the PPRC pairs that have been established. For information about using ESS Copy Services, see the ESS Web Interface User's Guide on the IBM Web site at the following URL: http://publibfp.boulder.ibm.com/epubs/pdf/f2bui04.pdf
Related reference
PPRC-Mirrored Volumes Worksheet on page 83
Use the PPRC-Mirrored Volumes Worksheet to record PPRC pairs and the associated PPRC replicated resources
Related information
IBM TotalStorage Enterprise Storage Server: Web Interface User's Guide

Identifying volume groups:

You use the volume group name when configuring PPRC tasks in ESS Copy Services and when you identify the volume group to a PowerHA SystemMirror node. Make sure that the name is the same in both places. When completing the PPRC-Mirrored Volumes Worksheet, you specify the volume group. For information about defining shared volume groups, see the section Importing Mirrored Volume Groups at the Secondary Site.
Note: The volume group and logical volume names must be the same on all nodes in each site where the volume group can be brought online.
To identify volume group information on the ESS:
1. Run the lspv command to view which volume group is associated with which hdisk.
2. Run the rsList2105.sh command to see which hdisks are associated with which serial numbers. Or, if you are using vpaths (as provided by an SDD driver), run the lsvpcfg command to see the same information.
You can also use the PPRC rsPrimeServer command to assign names that are then visible in ESS Copy Services. For information about the rsPrimeServer command, see the ESS Web Interface User's Guide on the IBM website at the following URL: http://www.storage.ibm.com/disk/ess/
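A minimal sequence for collecting this information on a cluster node might look like the following sketch (output formats vary by AIX and ESS CLI level, and rsList2105.sh is available only where the ESS CLI is installed):
    # Show each hdisk with its PVID and owning volume group
    lspv
    # With SDD vpaths, show which hdisks and ESS serial numbers back each vpath
    lsvpcfg
    # ESS CLI utility: list the ESS serial numbers associated with the 2105 hdisks
    rsList2105.sh
Match the serial numbers reported by these commands against the volume IDs recorded on the PPRC-Mirrored Volumes Worksheet.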
Related tasks Importing mirrored volume groups at the secondary site on page 34 If you have already created volume groups and are sure that volumes are mirroring correctly, skip this section. If you have not created the volume groups, complete the procedure in this section. Related reference PPRC-Mirrored Volumes Worksheet on page 83 Use the PPRC-Mirrored Volumes Worksheet to record PPRC pairs and the associated PPRC replicated resources Related information Enterprise Storage Server family Completing the PPRC-Mirrored Volumes Worksheet: Complete the PPRC-Mirrored Volumes Worksheet to record information that you use to define PPRC replicated resources. Note: On the PPRC-Mirrored Volumes Worksheet, enter the volume information so that the information for each volume in a PPRC pair is on the same line. To identify the information needed to define PPRC replicated resources, enter the following information on the PPRC-Mirrored Volumes Worksheet: 1. Record the Site Name for each of the PowerHA SystemMirror sites in the cluster. You can copy this information from the information on the PowerHA SystemMirror Site Worksheet. 2. Record the ESS Serial Number for the ESS at each site. You can copy this information from the information on the PowerHA SystemMirror Site Worksheet. 3. For each pair of PPRC-mirrored volumes, provide the information in the following list for each volume in a PPRC pair.
Volume ID
    The identifier, or serial number, for the volume. To determine the volume ID, run the following command: lscfg -vl diskname where diskname is the name of the hdisk that is presented to AIX. The Serial Number field contains the volume ID.
Logical Subsystem ID
    The identifier for the logical subsystem (LSS). You can view this information from the Modify Volume Assignments panel available in the ESS Copy Services Web Interface. Note: This number is used when setting up the paths on the ESS. You provide the information on the worksheet for reference only.
4. Specify Primary or Secondary for each volume. Assign the volume on one ESS as primary and the volume on the other ESS as secondary. For information about primary and secondary volumes, see the section Planning for PPRC Replicated Resources. Note: Each PPRC pair will be included in a resource group whose home node is located at the site that is primary for that PPRC pair. 5. Record the Volume Group name for each group that contains PPRC-protected volumes. Volumes that are part of the same volume group should be located in the same LSS. For information about identifying the volume group, see the section Identifying Volume Groups. 6. Record the PPRC Replicated Resource Name, which is a management name to identify the resource. Use a name that is meaningful for your configuration. For example, a PPRC replicated resource could be named pprc4.1, with pprc signifying the type of replicated resource, 4 identifying the volume group to which the PPRC-protected volume belongs, and 1 enumerating a PPRC-protected volume in the volume group. Another PPRC replicated resource that includes a different volume in the same volume group could be named pprc4.2.
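As a quick check of the volume ID described in step 3, you can filter the lscfg output for the Serial Number field; the hdisk name below is only an illustration:
    lscfg -vl hdisk4 | grep -i "serial number"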
Related tasks
Identifying volume groups on page 23
You use the volume group name when configuring PPRC tasks in ESS Copy Services and when you identify the volume group to a PowerHA SystemMirror node.
Related reference
PPRC-Mirrored Volumes Worksheet on page 83
Use the PPRC-Mirrored Volumes Worksheet to record PPRC pairs and the associated PPRC replicated resources
PowerHA SystemMirror Site Worksheet on page 81
Use the PowerHA SystemMirror Site Worksheet to record PowerHA SystemMirror sites to support PPRC replicated resources.

Planning tasks for the ESS:

Tasks are a set of actions to be executed on an ESS. They allow you to automate a series of steps that otherwise would be performed through the ESS Copy Services Web Interface. In a direct management environment, PowerHA SystemMirror Enterprise Edition for Metro Mirror relies on PPRC tasks for managing PPRC volume pairs and the associated PPRC paths. These tasks are used to establish and terminate PPRC pairs during fallover and fallback of volumes. To access these tasks, nodes in a PowerHA SystemMirror cluster that supports PowerHA SystemMirror Enterprise Edition for Metro Mirror require access to the ESSNet to execute commands.

Using user-specific task names:

If you need to use user-specific task names, specify the names on the User-Specific Tasks Worksheets. Unless you have a configuration conflict (for example, if you already have a task named with one of the recommended task names), you should name these tasks as listed in the section Configuring PPRC tasks. Task names can be up to 16 characters long. If possible, base the task name on the naming conventions described in the section Understanding task names.
Note: When you specify a user-specific task name for any task, you need to specify task names for all of the tasks (whether or not they are different from the recommended names) for each volume group. You configure these tasks in ESS Copy Services and define them to PowerHA SystemMirror Enterprise Edition for Metro Mirror. For information about defining a group of user-specific task names to PowerHA SystemMirror, see the section Defining PPRC Tasks to PowerHA SystemMirror Enterprise Edition for Metro Mirror for Direct management.
Note: You should use the recommended task names unless there is a very strong reason to do otherwise. If you do change names, ensure that the names are entered correctly in ESS Copy Services and in PowerHA SystemMirror Enterprise Edition for Metro Mirror.
Related tasks
Configuring PPRC tasks on page 31
You create 24 PPRC tasks for each PPRC-protected volume group in the cluster. This section lists each of these tasks and lists the options in ESS Remote Copy Services that you use to set up the task.
Related reference
User-Specific Tasks Worksheets on page 84
Use the User-Specific Tasks Worksheet to record User-specific task names (not needed for most configurations). Use this worksheet only if you use task names which are different from the recommended ones.
Understanding task names
The recommended names for the PPRC tasks specify the volume group and PPRC tasks to manipulate the volume pairs in the volume group.
Defining the PPRC tasks to PowerHA SystemMirror Enterprise Edition for Metro Mirror for Direct management on page 37
You define PPRC tasks to PowerHA SystemMirror Enterprise Edition for Metro Mirror only in cases where you named the PPRC tasks differently than recommended. If you used the recommended naming convention, you can skip this section.

Understanding task names:

The recommended names for the PPRC tasks specify the volume group and PPRC tasks to manipulate the volume pairs in the volume group. The recommended task names use the following naming convention:
volume_group_action_object_direction_modifier_system
A task name may contain some or all of the components listed in the preceding syntax line. The following table shows the values for the parts of the task name:

volume_group
    The name of the volume group (less than 7 characters long)
_action
    The action to be executed: Est (Establish), Sus (Suspend), Ter (Terminate), Del (Delete)
_object
    The pair or path for the volume group: Pr (Pair), Pt (Path)
_direction
    The direction of the operation: PS (Primary to secondary), SP (Secondary to primary)
_modifier
    The type of copy: NC (No copy), FC (Full copy), SC (Sync changes), FO (Failover), FB (Failback), F (Forced; only for the delete action)
_system
    P (Primary), S (Secondary)
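As a worked example of this convention, assuming a volume group named vg1 and that the volume group name replaces $vgname directly with no separator, the task name vg1EstPrPSFC breaks down as vg1 (volume group), Est (Establish), Pr (Pair), PS (Primary to secondary), and FC (Full copy). Similarly, vg1SusPrPSS is the task that suspends the pairs in the primary-to-secondary direction and is executed on the secondary ESS.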
Completing the ESS Disk Subsystem Worksheet: Complete an ESS Disk Subsystem Worksheet for each ESS system that contains PPRC replicated resources. To define an ESS Disk Subsystem, enter the following information on the ESS Disk Subsystem Worksheet: 1. Record the Site Name, which is the name of the site where the system is located. Select the name of one of the sites identified in the section Planning a PPRC Resource Group. 2. Record the ESS Disk Subsystem Name. Enter a name with 30 or fewer characters. 3. Record the ESS IP Address in dotted decimal format. The Copy Services Server for an ESS uses this IP address. Note: The IP address specified here is the ESSNet address of the ESS at this site. 4. Record the User ID, which is the administrative user ID that authenticates on this ESS. 5. Record the ESS Password, which is the password associated with the user ID. Related reference ESS Disk Subsystem Worksheet on page 86 Use the ESS Disk Subsystem Worksheet to record information about ESS disk subsystems that contains PPRC pairs Planning a PPRC resource group If you have not done so, plan your resource groups. Planning a PPRC resource group: If you have not done so, plan your resource groups. Use the guidelines in PowerHA SystemMirror Enterprise Edition for Metro Mirror general planning as well as consulting Planning resource groups in the PowerHA SystemMirror Planning Guide. Related concepts PowerHA SystemMirror Enterprise Edition for Metro Mirror general planning on page 8 These topics describe planning tasks required for all the types of PowerHA SystemMirror Enterprise Edition Metro Mirror support. Related information Planning resource groups Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror for Direct management: These topics describe how to install PowerHA SystemMirror Enterprise Edition for Metro Mirror for Direct management. Details for installing base PowerHA SystemMirror Enterprise Edition from the installation media are included in Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror. Note: At runtime, make sure you have at least 9 MB of free space in the PowerHA SystemMirror log directory. PPRC commands use this directory.
Related reference Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror on page 11 These topics describe how to install base PowerHA SystemMirror Enterprise Edition filesets. Details for installing filesets and prerequisites for specific types of PPRC support are contained in subsequent sections. Installation prerequisites for Direct management: There are some installation prerequisites for Direct management. Before you install PowerHA SystemMirror Enterprise Edition Metro Mirror for Direct management, make sure that the following is true for each cluster node: v The PowerHA SystemMirror Enterprise Edition software uses 1 MB of disk space in the /usr filesystem. v You have root access to each node. Related reference Planning prerequisites for Direct management on page 19 PowerHA SystemMirror Enterprise Edition PPRC with Direct management manages PPRC resources by communicating with the Copy Services Server (CSS) on ESS systems via the ESS CLI. Software requirements for Direct management: There are some software requirements for PowerHA SystemMirror Enterprise Edition Metro Mirror. The following software is required: v AIX level as stated in the support flash v Minimum PowerHA SystemMirror version plus all current APARs v Java Runtime Environment version appropriate for the ESS CLI version installed. (This will be tied to and should be available with the microcode level on the ESS systems.) v ESS CLI as appropriate for your storage microcode (LIC) level: The ESS Copy Services CLI software is closely tied to the ESS microcode level. CLI code is found on the MegaCDR that is included in the Customer Software packet. IBM 2105 Command Line Interface (ibm2105cli.rte ) or IBM 2105 Command Line Interface (ibm2105esscli.rte) (also included on the MegaCDR). Depending on which version of the ibm2105esscli fileset ships with your level of ESS microcode, the files will be installed in one of three locations: 1. /usr/opt/ibm2105cli 2. /usr/opt/ibm/ibm2105cli 3. /opt/ibm/ESScli PowerHA SystemMirror Enterprise Edition assumes a default directory for the ESS CLI executables. Therefore, once the CLI executables are installed in one of the directories listed, you must create the following link: /usr/opt/ibm2105cli -> < ESS cli installation location > so that PowerHA SystemMirror Enterprise Edition can find the executables. See the IBM TotalStorage support web pages for updated information on ESS CLI levels and installation site. http://www.ibm.com/servers/storage/support/software/cscli/ v (Optional but recommended) ESS microcode level vrmf 2.4.x.x. This version provides support for a dual active Copy Services Server.
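The ESS CLI link described in the preceding software requirements can be created with a symbolic link. For example, if the CLI executables were installed in /opt/ibm/ESScli (adjust the source path to match the actual installation location on your system), the following command creates the default location that PowerHA SystemMirror Enterprise Edition expects:
    ln -s /opt/ibm/ESScli /usr/opt/ibm2105cli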
Note: Install microcode vrmf 2.4.x.x on the ESS system to make support for a dual active Copy Services Server available. Otherwise, the Copy Services Server must be manually started on a backup Copy Services Server if the primary Copy Services Server is unavailable. Related information Support for Copy Services CLI (Command Line Interface) Installing filesets for Direct management: You need to install the necessary filesets for Direct management. If you have not already done so, install the filesets listed in Contents of the installation media for Direct management. Related reference Contents of the installation media on page 12 The PowerHA SystemMirror Enterprise Edition for Metro Mirror installation media provides the images for installation on each node in the cluster that can take over a PPRC mirrored volume group. Addressing problems during the installation: If you experience problems during the installation, the installation program usually performs a cleanup process automatically. If, for some reason, the cleanup is not performed after an unsuccessful installation, there are some steps that you can take. These steps include: 1. Enter smit install to display the Installation and Maintenance menu. 2. Select Install and Update Software. 3. Select Clean Up After an Interrupted Installation. 4. Review the SMIT output (or examine the /smit.log file) for the cause of the interruption. 5. Fix any problems and repeat the installation process. Configuring in a Direct management environment: These topics describe how to configure the ESS system to support PowerHA SystemMirror Enterprise Edition for Metro Mirror, and how to configure an PowerHA SystemMirror Enterprise Edition direct management (ESS CLI) environment. Configuration prerequisites for Direct management: Before configuring Direct management, there are some prerequisite steps you need to take. Before configuring PowerHA SystemMirror Enterprise Edition for Metro Mirror, ensure that: v PPRC is configured and running on the ESS systems. PowerHA SystemMirror Enterprise Edition for Metro Mirror supports System p servers that can support ESS systems. v You have a good understanding of PowerHA SystemMirror sites for PPRC replicated resources. For more information about sites, see the section PowerHA SystemMirror sites. v The PowerHA SystemMirror Enterprise Edition for Metro Mirror software is installed on each cluster node that will support PowerHA SystemMirror Enterprise Edition for Metro Mirror. For information about installing PowerHA SystemMirror Enterprise Edition for Metro Mirror, see Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror. v The PowerHA SystemMirror cluster is configured for: Nodes
Networks and network interfaces
Service labels
Initial resource groups. You can modify the attributes for a resource group later to accommodate PPRC replicated resources.
Related reference
PowerHA SystemMirror sites on page 6
PowerHA SystemMirror Enterprise Edition for Metro Mirror support requires the use of sites. PowerHA SystemMirror supports two sites. The primary site is the active site, and the secondary site is the standby site.
Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror on page 11
These topics describe how to install base PowerHA SystemMirror Enterprise Edition filesets. Details for installing filesets and prerequisites for specific types of PPRC support are contained in subsequent sections.

Configuration overview:

Once you have installed PowerHA SystemMirror Enterprise Edition for Metro Mirror, you need to set up the configuration. To set up a PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration:
1. On the ESS, configure PPRC support for PowerHA SystemMirror Enterprise Edition for Metro Mirror:
a. Configure PPRC tasks in ESS Copy Services.
b. Import the volume groups on the nodes at the secondary site.
2. On cluster nodes, configure support for PowerHA SystemMirror Enterprise Edition for Metro Mirror:
a. Define PowerHA SystemMirror sites for PowerHA SystemMirror Enterprise Edition for Metro Mirror.
b. Define ESS systems to be included.
c. Define PPRC replicated resources.
d. Configure resource groups to include PPRC replicated resources.
e. (Optional) If user-specific task names are used, define them to PowerHA SystemMirror.
f. (Optional) If your cluster has a large number of volume groups, create LUN ID mapping files before startup as described in Improving the performance of volume groups.
Related reference
Improving the performance of volume groups on page 39
During resource acquisition, PPRC automatically creates LUN ID mappings. If your cluster contains a large number of volume groups, you can save time by creating these mapping files manually before bringing up the cluster.

Configuring PowerHA SystemMirror Enterprise Edition for Metro Mirror for Direct management support:

Direct management configuration relies on tasks configured through ESS Copy Services and activated on the ESS system. For information about tasks, see the section Planning tasks for the ESS. Configure the ESS system in preparation for PowerHA SystemMirror Enterprise Edition for Metro Mirror before making configuration changes in PowerHA SystemMirror Enterprise Edition for Metro Mirror.
Related reference
Planning tasks for the ESS on page 25
Tasks are a set of actions to be executed on an ESS. They allow you to automate a series of steps that otherwise would be performed through the ESS Copy Services Web Interface.

Configuring PPRC tasks:

You create 24 PPRC tasks for each PPRC-protected volume group in the cluster. This section lists each of these tasks and lists the options in ESS Remote Copy Services that you use to set up the task. When you create tasks, use the ESS Copy Services Web Interface. In this interface, serial numbers identify volumes. You select disks by serial number, and then create tasks that start with the volume group name. Consult the PPRC-Mirrored Volumes Worksheet for the volume group name. For more information on using the ESS Copy Services Web Interface, see the ESS Web Interface User's Guide on the IBM website at the following URL: http://publibfp.boulder.ibm.com/epubs/pdf/f2bui04.pdf
Note: Use the recommended task names unless there is a very strong reason to do otherwise. If you do change names, ensure that these names are entered correctly in ESS Copy Services and in PowerHA SystemMirror Enterprise Edition for Metro Mirror. If you name a task other than the recommended name, see the section Defining the PPRC tasks to PowerHA SystemMirror Enterprise Edition for Metro Mirror for Direct management.
Note: Enter the task names carefully. Direct management does not work if any of the task names are inaccurate.
To configure tasks for a volume group to use PowerHA SystemMirror Enterprise Edition for Metro Mirror:
1. Log into ESS Copy Services as a user with administrative privileges.
2. Use the ESS Copy Services Web Interface to define the ESS tasks needed to manage the paths (ESCON links) that PPRC uses for the volume group. Create the tasks as shown in the following list. Use the task names shown here, substituting the name of the volume group for $vgname. This name can be up to seven characters long, for example vg1. Note: Task names are case-sensitive. Consult the PPRC-Mirrored Volumes Worksheet for the volume group name. For more information about volume group names, see Identifying Volume Groups.
Task Name: $vgname EstPtPS
    Action: Establish path(s) from the Logical Subsystem (LSS) in the primary ESS to the LSS in the secondary ESS.
    Option in ESS Copy Services to Define Task: Do not establish paths if they already exist
Task Name: $vgname DelPtPS
    Action: Remove paths from the LSS in the primary ESS to the LSS in the secondary ESS.
Task Name: $vgname DelPtPSF
    Action: Remove paths from the LSS in the primary ESS to the LSS in the secondary ESS.
    Option in ESS Copy Services to Define Task: Force removal of PPRC path even if pairs exist
Task Name: $vgname EstPtSP
    Action: Establish paths from the LSS in the secondary ESS to the LSS in the primary ESS.
    Option in ESS Copy Services to Define Task: Do not establish paths if they already exist
Task Name: $vgname DelPtSP
    Action: Remove paths from the LSS in the secondary ESS to the LSS in the primary ESS.
Task Name: $vgname DelPtSPF
    Action: Remove paths from the LSS in the secondary ESS to the LSS in the primary ESS.
    Option in ESS Copy Services to Define Task: Force removal of PPRC path even if pairs exist
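For instance, assuming a volume group named vg1 and direct substitution of the volume group name for $vgname, the six path tasks created in this step would be named vg1EstPtPS, vg1DelPtPS, vg1DelPtPSF, vg1EstPtSP, vg1DelPtSP, and vg1DelPtSPF.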
3. Use the ESS Copy Services Web Interface to define the tasks needed to manage the pairs of volumes that are PPRC-protected. If you define multiple volumes in the volume group, you can use the Multiple Selection option on the ESS Copy Services Web Interface to define a single set of tasks to manage the entire volume group. Create the tasks as shown in the following list. Use the task names shown here, substituting the name of the volume group for $vgname. Consult the PPRC-Mirrored Volumes Worksheet for the volume group name. For more information about volume group names, see Identifying volume groups.
Task Name: $vgname EstPrPSFC
    Action: Establish a PPRC relationship from the source volume(s) in the primary ESS to the target volume(s) in the secondary ESS.
    Option in ESS Copy Services to Define Task: Copy Entire Volume; Permit Read from Secondary
Task Name: $vgname EstPrPSNC
    Action: Establish a PPRC relationship from the source volume(s) in the primary ESS to the target volume(s) in the secondary ESS.
    Option in ESS Copy Services to Define Task: Do Not Copy Volume; Permit Read from Secondary
Task Name: $vgname EstPrPSSC
    Action: Establish a PPRC relationship from the source volume(s) in the primary ESS to the target volume(s) in the secondary ESS.
    Option in ESS Copy Services to Define Task: Copy Out-of-Sync Cylinders Only; Permit Read from Secondary
Task Name: $vgname EstPrPSFO
    Action: Establish a PPRC relationship from the target volume(s) in the primary ESS to the source volume(s) in the secondary ESS.
    Option in ESS Copy Services to Define Task: PPRC Failover
Task Name: $vgname EstPrPSFB
    Action: Establish a PPRC relationship from the target volume(s) in the primary ESS to the source volume(s) in the secondary ESS.
    Option in ESS Copy Services to Define Task: PPRC Failback; Permit Read from Secondary
Task Name: $vgname SusPrPSP
    Action: Suspend a PPRC relationship between the source volume in the primary ESS and the target volume in the secondary ESS. This task is executed on the primary ESS.
Task Name: $vgname SusPrPSS
    Action: Suspend a PPRC relationship between the source volume in the primary ESS and the target volume in the secondary ESS. This task is executed on the secondary ESS.
Task Name: $vgname TerPrPSP
    Action: Terminate a PPRC relationship from the source volume in the primary ESS to the target volume in the secondary ESS. This task is executed on the primary ESS.
Task Name: $vgname TerPrPSS
    Action: Terminate a PPRC relationship from the source volume in the primary ESS to the target volume in the secondary ESS. This task is executed on the secondary ESS.
Task Name: $vgname EstPrSPFC
    Action: Establish a PPRC relationship from the source volume(s) in the secondary ESS to the target volume(s) in the primary ESS.
    Option in ESS Copy Services to Define Task: Copy Entire Volume; Permit Read from Secondary
Task Name: $vgname EstPrSPNC
    Action: Establish a PPRC relationship from the source volume(s) in the secondary ESS to the target volume(s) in the primary ESS.
    Option in ESS Copy Services to Define Task: Do Not Copy Volume; Permit Read from Secondary
Task Name: $vgname EstPrSPSC
    Action: Establish a PPRC relationship from the source volume(s) in the secondary ESS to the target volume(s) in the primary ESS.
    Option in ESS Copy Services to Define Task: Copy Out-of-Sync Cylinders Only; Permit Read from Secondary
Task Name: $vgname EstPrSPFO
    Action: Establish a PPRC relationship from the target volume(s) in the secondary ESS to the source volume(s) in the primary ESS.
    Option in ESS Copy Services to Define Task: PPRC Failover
Task Name: $vgname EstPrSPFB
    Action: Establish a PPRC relationship from the target volume(s) in the secondary ESS to the source volume(s) in the primary ESS.
    Option in ESS Copy Services to Define Task: PPRC Failback; Permit Read from Secondary
Task Name: $vgname TerPrSPS
    Action: Terminate a PPRC relationship from the source volume(s) in the secondary ESS to the target volume(s) in the primary ESS. This task is executed on the secondary ESS.
Task Name: $vgname TerPrSPP
    Action: Terminate a PPRC relationship from the source volume(s) in the secondary ESS to the target volume(s) in the primary ESS. This task is executed on the primary ESS.
Task Name: $vgname SusPrSPS
    Action: Suspend a PPRC relationship between the source volume in the secondary ESS and the target volume in the primary ESS. This task is executed on the secondary ESS.
Task Name: $vgname SusPrSPP
    Action: Suspend a PPRC relationship between the source volume in the secondary ESS and the target volume in the primary ESS. This task is executed on the primary ESS.
4. Run the ESS rsExecuteTask.sh command to verify that you can execute the tasks created in the previous step. Note that the command name is case-sensitive. The rsExecuteTask.sh command should return to a command prompt. If it does not, PowerHA SystemMirror Enterprise Edition for Metro Mirror does not work. If the command does not return to a command prompt: v Review task configuration and make corrections as needed. v Run the rsExecuteTask.sh command again. Note: A host server IP address not being defined to the ESS causes a known configuration problem with the ESS. If this is present, the rsExecuteTask.sh command executes the ESS task, but the program does not return. For information about the rsExecuteTask.sh command, see the ESS (Command-Line Interface User's Guide) on the IBM website at the following URL: http://www.storage.ibm.com/disk/ess/ 5. Repeat steps 1 - 4 for each volume group that will use PowerHA SystemMirror Enterprise Edition for Metro Mirror.
Related tasks Identifying volume groups on page 23 You use the volume group name when configuring PPRC tasks in ESS Copy Services and when you identify the volume group to an PowerHA SystemMirror node. Related reference PPRC-Mirrored Volumes Worksheet on page 83 Use the PPRC-Mirrored Volumes Worksheet to record PPRC pairs and the associated PPRC replicated resources Defining the PPRC tasks to PowerHA SystemMirror Enterprise Edition for Metro Mirror for Direct management on page 37 You define PPRC tasks to PowerHA SystemMirror Enterprise Edition for Metro Mirror only in cases where you named the PPRC tasks differently than recommended. If you used the recommended naming convention, you can skip this section. Related information IBM TotalStorage Enterprise Storage Server: Web Interface User's Guide Support for Copy Services CLI (Command Line Interface) Importing mirrored volume groups at the secondary site: If you have already created volume groups and are sure that volumes are mirroring correctly, skip this section. If you have not created the volume groups, complete the procedure in this section. Refer to the PPRC-Mirrored Volumes Worksheet for the names of the volume groups. The secondary volume in a PPRC pair is not visible to the nodes at the secondary site when the PPRC relationship is active. As a result, use the following procedure to define the disks and volume groups for PowerHA SystemMirror. To define disks and volume groups to PowerHA SystemMirror: 1. Log into ESS Copy Services as a user with administrative privileges. 2. Run the $vgname EstPtPS task, where $vgname is the name of the volume group. This establishes the path(s) from the primary volume to the secondary volume. 3. Run the $vgname EstPrPSFC task, where $vgname is the name of the volume group. This establishes the PPRC pair(s) from the primary volume to the secondary volume and synchronizes the two copies. This may take awhile. 4. Create the volume group using the AIX Logical Volume Manager (LVM) on nodes on the active site. 5. Run the $vgname TerPrPSP task, where $vgname is the name of the volume group. This terminates the PPRC pair(s). 6. Run the $vgname DelPtPS task, where $vgname is the name of the volume group. This terminates the path(s). 7. Run Configuration Manager ( cfgmgr ) on the backup host server to move ESS hdisks to the available state. 8. Import the volume group to all nodes on the secondary site. The volume group can now be defined to PowerHA SystemMirror Enterprise Edition for Metro Mirror. Related reference PPRC-Mirrored Volumes Worksheet on page 83 Use the PPRC-Mirrored Volumes Worksheet to record PPRC pairs and the associated PPRC replicated resources Configuring the PowerHA SystemMirror Enterprise Edition Metro Mirror Cluster for Direct management:
Configure PowerHA SystemMirror Enterprise Edition for Metro Mirror after you configure tasks for PPRC on the ESS systems and define sites to support PPRC replicated resources in PowerHA SystemMirror. Defining PPRC replicated resources: Most of the configuration options for PowerHA SystemMirror Enterprise Edition for Metro Mirror are available from the Define PPRC Replicated Resources panel. To define PPRC replicated resources: 1. Enter smit hacmp 2. In SMIT, select Extended Configuration > Extended Resource Configuration > PowerHA SystemMirror Extended Resources Configuration > Configure PPRC Replicated Resources > Define ESS PPRC Replicated Resources and press Enter. From this panel you can: v Define ESS disk subsystems. v Define a PPRC replicated resource. v Define PPRC tasks. v Synchronize the PPRC configuration. v Verify the PPRC configuration. Defining the ESS Disk Subsystems to PowerHA SystemMirror: You define the ESS subsystems included in the sites that support PowerHA SystemMirror Enterprise Edition for Metro Mirror to enable PowerHA SystemMirror to process fallovers for PPRC replicated resources. Consult your completed ESS Disk Subsystem Worksheet for the values planned for your cluster. To define an ESS system to PowerHA SystemMirror: 1. Enter smit hacmp 2. In SMIT, select Extended Configuration > Extended Resource Configuration > PowerHA SystemMirror Extended Resource Configuration > Define PPRC Replicated Resources > Define ESS Disk Subsystem > Add an ESS Disk Subsystem and press Enter. 3. In the Add an ESS Disk Subsystem panel, enter field values as follows:
ESS Subsystem Name
    The name that identifies the ESS subsystem to PowerHA SystemMirror. The name may consist of up to 32 alphanumeric characters, and may include underscores.
ESS site name
    The name of the site where the ESS resides. The site is already defined in PowerHA SystemMirror for the site name to be available from the picklist.
ESS IP Address
    The IP address, in dotted decimal notation, that the Copy Services Server for an ESS uses. Note: The IP address specified here is the ESSNet address of the ESS at this site.
ESS User ID
    The user ID used to authenticate logging into the ESS.
ESS password
    The password associated with the specified ESS User ID.
4. Press Enter. 5. Review the settings for the ESS subsystem. From the Define ESS Disk Subsystem panel, select Change/Show an ESS Disk Subsystem and select an ESS system. 6. Make changes to the configuration settings as needed.
Related reference ESS Disk Subsystem Worksheet on page 86 Use the ESS Disk Subsystem Worksheet to record information about ESS disk subsystems that contains PPRC pairs Defining the PPRC pairs to PowerHA SystemMirror: You define the PPRC pairs, the primary volume to secondary volume mappings, to allow the PowerHA SystemMirror Enterprise Edition for Metro Mirror to manage them. A volume group with multiple disks should have been defined with a single set of tasks in the section Configuring PPRC tasks. Define one PPRC pair for each mirrored volume in the volume group. Consult your completed PPRC-Mirrored Volumes Worksheet for the values planned for your cluster. To define a PPRC Pair to PowerHA SystemMirror Enterprise Edition for Metro Mirror: 1. Enter smit hacmp 2. In SMIT, select Extended Configuration > Extended Resource Configuration > PowerHA SystemMirror Extended Resource Configuration > Define PPRC Replicated Resources > Define a PPRC Replicated Resource > Add a PPRC Resource and press Enter. 3. In the Add a PPRC Resource panel, enter field values as follows:
PPRC Resource Name
    The name of the PPRC volume pair. The name may consist of up to 32 alphanumeric characters, and may include underscores. For example, a PPRC replicated resource could be named pprc4.1, with pprc signifying the type of replicated resource, 4 identifying the volume group to which the PPRC-protected volume belongs, and 1 enumerating a PPRC-protected volume in the volume group. Another PPRC replicated resource that includes a different volume in the same volume group could be named pprc4.2.
PowerHA SystemMirror Sites
    The two PowerHA SystemMirror sites, with the primary site for this PPRC pair listed first. The sites are already defined in PowerHA SystemMirror.
Primary Site ESS Info
    The identifier for the volume in the primary ESS in the form volume_id@ESS_name, where volume_id is the ESS volume ID of the disk, and ESS_name is the name of the ESS subsystem containing this volume. The primary ESS is the ESS at the site that you chose as primary for this PPRC replicated resource. Consult the PPRC-Mirrored Volumes Worksheet for the volume ID, and the ESS Disk Subsystem Worksheet for the ESS name. For information about determining a volume ID, see the section Completing the PPRC-Mirrored Volumes Worksheet.
Secondary Site ESS Info
    The identifier for a volume in the secondary ESS in the form volume_id@ESS_name, where volume_id is the ESS volume ID of the disk, and ESS_name is the name of the ESS subsystem containing this volume. The secondary ESS is the ESS at the site that you chose as secondary for this PPRC replicated resource. Consult the PPRC-Mirrored Volumes Worksheet for the volume ID, and the ESS Disk Subsystem Worksheet for the ESS name. For information about determining a volume ID, see the section Completing the PPRC-Mirrored Volumes Worksheet.
4. Press Enter. 5. Review the settings for the PPRC pairs. From the Define a PPRC Replicated Resource panel, select Change/Show a PPRC Resource and select a PPRC pair. 6. Make changes to the configuration settings as needed. Related tasks Configuring PPRC tasks on page 31 You create 24 PPRC tasks for each PPRC-protected volume group in the cluster. This section lists each of these tasks and lists the options in ESS Remote Copy Services that you use to set up the task. Completing the PPRC-Mirrored Volumes Worksheet on page 24 Complete the PPRC-Mirrored Volumes Worksheet to record information that you use to define PPRC replicated resources. Related reference PPRC-Mirrored Volumes Worksheet on page 83 Use the PPRC-Mirrored Volumes Worksheet to record PPRC pairs and the associated PPRC replicated resources ESS Disk Subsystem Worksheet on page 86 Use the ESS Disk Subsystem Worksheet to record information about ESS disk subsystems that contains PPRC pairs Defining the PPRC tasks to PowerHA SystemMirror Enterprise Edition for Metro Mirror for Direct management: You define PPRC tasks to PowerHA SystemMirror Enterprise Edition for Metro Mirror only in cases where you named the PPRC tasks differently than recommended. If you used the recommended naming convention, you can skip this section. See Configuring PPRC tasks. Related tasks Configuring PPRC tasks on page 31 You create 24 PPRC tasks for each PPRC-protected volume group in the cluster. This section lists each of these tasks and lists the options in ESS Remote Copy Services that you use to set up the task. Defining PPRC path tasks: If you specified another name for a PPRC path task in ESS Remote Copy Services, identify that task to PowerHA SystemMirror Enterprise Edition for Metro Mirror. Consult your completed User-Specific Tasks Worksheets for the values planned for your cluster. To define a user-specific PPRC path task name to PowerHA SystemMirror Enterprise Edition for Metro Mirror: 1. Enter smit hacmp 2. In SMIT, select Extended Configuration > Extended Resource Configuration > PowerHA SystemMirror Extended Resource Configuration > Define PPRC Replicated Resource > Define PPRC Tasks > Define PPRC Path Tasks > Add a Group of PPRC Path Tasks and press Enter. 3. In the Add a Group of PPRC Path Tasks panel, enter field values as follows:
Volume Group Name
    The name of the volume group associated with the PPRC path task.
Establish Path Pri - Sec
    The name of the task that establishes a PPRC path from the primary ESS to the secondary ESS as defined in ESS Copy Services.
Delete Path Pri - Sec
    The name of the task that deletes a PPRC path from the primary ESS to the secondary ESS as defined in ESS Copy Services.
Delete Path Pri - Sec FORCED
    The name of the task that deletes a PPRC path from the primary ESS to the secondary ESS with the Forced option as defined in ESS Copy Services.
Establish Path Sec - Pri
    The name of the task that establishes a PPRC path from the secondary ESS to the primary ESS as defined in ESS Copy Services.
Delete Path Sec - Pri
    The name of the task that deletes a PPRC path from the secondary ESS to the primary ESS as defined in ESS Copy Services.
Delete Path Sec - Pri FORCED
    The name of the task that deletes a PPRC path from the secondary ESS to the primary ESS with the Forced option as defined in ESS Copy Services.
4. Press Enter. 5. Review the settings for the PPRC path tasks, from the Define PPRC Path Tasks panel, select Change/Show a Group of PPRC Path Tasks . 6. Make changes to the configuration settings as needed. Related reference User-Specific Tasks Worksheets on page 84 Use the User-Specific Tasks Worksheet to record User-specific task names (not needed for most configurations). Use this worksheet only if you use task names which are different from the recommended ones. Defining PPRC pair tasks: If you specified another name for a PPRC pair task in ESS Remote Copy Services, identify that task to PowerHA SystemMirror Enterprise Edition for Metro Mirror. Consult your completed User-Specific Tasks Worksheets for the values planned for your cluster. Note: When you specify a user-specific task name, specify task names for all eighteen tasks (whether or not they are different from the recommended names) for each volume group. To define a group of user-specific PPRC pair task names to PowerHA SystemMirror Enterprise Edition for Metro Mirror: 1. Enter smit hacmp 2. In SMIT, select Extended Configuration > Extended Resource Configuration > Define PPRC Tasks > Define PPRC Pair Tasks > Add a Group of PPRC Pair Tasks and press Enter. 3. In the Add a Group of PPRC Pair Tasks panel, enter field values as follows:
Volume Group Name
    The name of the volume group associated with the PPRC pair task.

Direction of the operation: Primary to Secondary

Establish Pair Pri - Sec NO COPY
    The name of the task that establishes a PPRC pair from the primary ESS to the secondary ESS with the No Copy option set as defined in ESS Copy Services.
Establish Pair Pri - Sec FULL COPY
    The name of the task that establishes a PPRC pair from the primary ESS to the secondary ESS with the Full Copy option set as defined in ESS Copy Services.
Establish Pair Pri - Sec RESYNC
    The name of the task that establishes a PPRC pair from the primary ESS to the secondary ESS with the Copy Out-of-sync Cylinders Only option set as defined in ESS Copy Services.
Establish Pair Pri - Sec FAILOVER
    The name of the task that establishes a PPRC pair from the primary ESS to the secondary ESS with the Failover option set as defined in ESS Copy Services.
Establish Pair Pri - Sec FAILBACK
    The name of the task that establishes a PPRC pair from the primary ESS to the secondary ESS with the Failback option set as defined in ESS Copy Services.
Suspend Pair Pri - Sec on Pri
    The name of the task executed on the primary ESS that suspends PPRC mirroring from the primary ESS to the secondary ESS as defined in ESS Copy Services.
Suspend Pair Pri - Sec on Sec
    The name of the task executed on the secondary ESS that suspends PPRC mirroring from the primary ESS to the secondary ESS as defined in ESS Copy Services.
Terminate Pair Pri - Sec on Pri
    The name of the task executed on the primary ESS that ends PPRC mirroring from primary ESS to secondary ESS as defined in ESS Copy Services.
Terminate Pair Pri - Sec on Sec
    The name of the task executed on the secondary ESS that ends PPRC mirroring from primary ESS to secondary ESS as defined in ESS Copy Services.

Direction of the operation: Secondary to Primary

Establish Pair Sec - Pri NO COPY
    The name of the task that establishes a PPRC pair from the secondary ESS to the primary ESS with the No Copy option set as defined in ESS Copy Services.
Establish Pair Sec - Pri FULL COPY
    The name of the task that establishes a PPRC pair from the secondary ESS to the primary ESS with the Full Copy option set as defined in ESS Copy Services.
Establish Pair Sec - Pri RESYNC
    The name of the task that establishes a PPRC pair from the secondary ESS to the primary ESS with the Copy Out-of-sync Cylinders Only option set as defined in ESS Copy Services.
Establish Pair Sec - Pri FAILOVER
    The name of the task that establishes a PPRC pair from the secondary ESS to the primary ESS with the Failover option set as defined in ESS Copy Services.
Establish Pair Sec - Pri FAILBACK
    The name of the task that establishes a PPRC pair from the secondary ESS to the primary ESS with the Failback option set as defined in ESS Copy Services.
Suspend Pair Sec - Pri on Sec
    The name of the task executed on the secondary ESS that suspends PPRC mirroring from the secondary ESS to the primary ESS as defined in ESS Copy Services.
Suspend Pair Sec - Pri on Pri
    The name of the task executed on the primary ESS that suspends PPRC mirroring from the secondary ESS to the primary ESS as defined in ESS Copy Services.
Terminate Pair Sec - Pri on Sec
    The name of the task executed on the secondary ESS that ends PPRC mirroring from the secondary ESS to the primary ESS as defined in ESS Copy Services.
Terminate Pair Sec - Pri on Pri
    The name of the task executed on the primary ESS that ends PPRC mirroring from the secondary ESS to the primary ESS as defined in ESS Copy Services.
4. Press Enter. 5. Review the settings for the PPRC pair tasks, from the Define PPRC Pair Tasks panel, select Change/Show a Group of PPRC Pair Tasks. 6. Make changes to the configuration settings as needed. Related reference User-Specific Tasks Worksheets on page 84 Use the User-Specific Tasks Worksheet to record User-specific task names (not needed for most configurations). Use this worksheet only if you use task names which are different from the recommended ones. Improving the performance of volume groups: During resource acquisition, PPRC automatically creates LUN ID mappings. If your cluster contains a large number of volume groups, you can save time by creating these mapping files manually before bringing up the cluster. To create these mapping files, run the /usr/es/sbin/cluster/pprc/utils/cl_store_LUNPairs command on each cluster node, and pass the names of your volume groups as parameters as shown in the following example:
cl_store_LUNPairs MyVg1 MyVg2 MyVg3
This populates the /tmp directory with files named VolumePairs.VolumeGroupName , where VolumeGroupName corresponds with the volume group names you specified as parameters (for example, MyVg1, MyVg2, and so on). Each file contains pairs of LUN IDs that resemble the following:
Verifying and synchronizing the PPRC configuration:

The configuration changes that you have completed to this point need to be synchronized to the other cluster nodes. Verifying the configuration for PPRC replicated resources checks the configuration and reports on the following issues:
v PowerHA SystemMirror Enterprise Edition for Metro Mirror classes in the PowerHA SystemMirror Configuration Database are identical on all nodes.
v The PPRC Command Line Interface is installed correctly on each node.
v The PPRC-protected volume groups are not concurrent volume groups that span nodes on different sites.
v Sites defined for the ESS disk subsystems exist as PowerHA SystemMirror sites.
v The IP addresses of the ESS systems exist and are reachable.
v The ESS systems for the PPRC replicated resources are defined as ESS disk subsystems to PowerHA SystemMirror.
v The two volumes of a PPRC pair are on different ESS systems and on different PowerHA SystemMirror sites.
v The volume IDs correspond to physical volumes defined to PowerHA SystemMirror cluster nodes.
v The PVIDs of the disks in a volume group at each end of a pair are the same.
Typically, you synchronize the PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration with the PowerHA SystemMirror cluster configuration. For information about verifying and synchronizing a cluster, see Verifying and synchronizing cluster configuration in the Administration Guide. You can also verify and synchronize only the PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration.
To verify and synchronize the PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration:
1. Enter smit hacmp
2. In SMIT, select Extended Configuration > Extended Resource Configuration > PowerHA SystemMirror Extended Resource Configuration > Define PPRC Replicated Resources and press Enter.
3. Select Verify PPRC Configuration, and then Synchronize PPRC Configuration.
To synchronize PowerHA SystemMirror Enterprise Edition for Metro Mirror changes from the command line interface, you can use the cl_sync_pprc_config command. To verify PowerHA SystemMirror Enterprise Edition for Metro Mirror changes from the command line interface, you can use the cl_verify_pprc_config command.
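A minimal command line sequence, assuming these utilities are installed alongside the other PPRC helper scripts under /usr/es/sbin/cluster/pprc/utils (verify the actual location and usage on your system), might look like this:
    # Check the PPRC definitions before propagating them
    /usr/es/sbin/cluster/pprc/utils/cl_verify_pprc_config
    # Propagate the verified PPRC configuration to the other cluster nodes
    /usr/es/sbin/cluster/pprc/utils/cl_sync_pprc_config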
Related information Verifying and synchronizing an HACMP cluster Configuring resource groups: After defining PPRC replicated resources, you can add them to a resource group. Configure resource groups following the procedures in the chapter on Configuring PowerHA SystemMirror resource groups (Extended) in the Administration Guide. Make sure that you understand site support for PowerHA SystemMirror resource groups, see Planning resource groups in the Planning Guide. When you configure a resource group ensure that: v The site policy is set to Prefer Primary Site or Online on Either Site. v A Startup policy other than Online on All Available Nodes is specified. v The Resource Group Processing Ordering is set to serial. Consult your completed Resource Groups for PPRC Replicated Resources Worksheet for the values planned for your cluster. To add a PPRC replicated resource to a resource group: 1. Enter smit hacmp 2. In SMIT, select Extended Configuration > Extended Resource Configuration > PowerHA SystemMirror Extended Resource Group Configuration > Change/Show Resources and Attributes for a Resource Group and press Enter. 3. In the Change/Show Resources and Attributes for a Resource Group panel specify: v The name of the PPRC replicated resource in the PPRC Replicated Resources field. v The volume groups associated with the individual PPRC replicated resources. 4. Verify and synchronize the cluster. For information about synchronizing a cluster, see Verifying and synchronizing a cluster configuration in the Administration Guide. Related reference Resource Groups for PPRC Replicated Resources Worksheet on page 82 Use the Resource Groups for PPRC Replicated Resources Worksheet to record resource groups that contain PPRC replicated resources. Related information Configuring HACMP resource groups (extended) Verifying and synchronizing an HACMP cluster Planning resource groups Starting the cluster: After verifying and synchronizing the PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration changes, start the PowerHA SystemMirror cluster. All PPRC pairs must be in a simplex-none-simplex-none state at the initial cluster startup. This means that no relationship exists between the disk volumes at cluster startup. To view and modify the state of the PPRC pairs, use ESS Copy Services. For information about ESS Copy Services, see the ESS Web Interface User's Guide on the IBM Web site at the following URL:
http://publibfp.boulder.ibm.com/epubs/pdf/f2bui04.pdf
Related information
IBM TotalStorage Enterprise Storage Server: Web Interface User's Guide

Changing PPRC replicated resources configuration:

Use SMIT to change the configuration for PPRC replicated resources.
Note: Changing resource configuration requires that PowerHA SystemMirror services be stopped on all nodes at both sites in the cluster.
Any configuration change you make to any of the following components affects the others listed:
v Sites
v PPRC replicated resources
v Volumes
v Resource groups.
After you make a configuration change, verify and synchronize the configuration.

Changing the configuration for sites:

You can change the configuration for sites. To change the site configuration in PowerHA SystemMirror:
1. Enter smit hacmp
2. In SMIT, select Extended Configuration > Extended Topology Configuration > Configure PowerHA SystemMirror Sites > Change/Show a Site and press Enter.
For information about field values, see the section Configuring the PowerHA SystemMirror Enterprise Edition Metro Mirror Cluster for Direct management.
Related reference
Configuring the PowerHA SystemMirror Enterprise Edition Metro Mirror Cluster for Direct management on page 34
Configure PowerHA SystemMirror Enterprise Edition for Metro Mirror after you configure tasks for PPRC on the ESS systems and define sites to support PPRC replicated resources in PowerHA SystemMirror.

Changing the configuration for PPRC replicated resources:

You can change the configuration for PPRC replicated resources. To change the configuration for PPRC replicated resources:
1. Enter smit hacmp
2. In SMIT, select Extended Configuration > Extended Resource Configuration > PowerHA SystemMirror Extended Resource Configuration > Define PPRC Replicated Resources and press Enter.
From this panel, select:
v Define ESS Disk Subsystem
For information about field values, see the section Defining the ESS Disk Subsystems to PowerHA SystemMirror.
v Define a PPRC Replicated Resource
For information about field values, see the section Defining the PPRC pairs to PowerHA SystemMirror. v Define PPRC Tasks For information about field values, see the section Defining the PPRC Tasks to PowerHA SystemMirror Enterprise Edition for Metro Mirror for Direct management. v Synchronize PPRC Configuration v Verify PPRC Configuration 3. After selecting a configuration option, select the Change/Show option for the value you want to change. Related tasks Defining the PPRC pairs to PowerHA SystemMirror on page 36 You define the PPRC pairs, the primary volume to secondary volume mappings, to allow the PowerHA SystemMirror Enterprise Edition for Metro Mirror to manage them. Defining the ESS Disk Subsystems to PowerHA SystemMirror on page 35 You define the ESS subsystems included in the sites that support PowerHA SystemMirror Enterprise Edition for Metro Mirror to enable PowerHA SystemMirror to process fallovers for PPRC replicated resources. Related reference Defining the PPRC tasks to PowerHA SystemMirror Enterprise Edition for Metro Mirror for Direct management on page 37 You define PPRC tasks to PowerHA SystemMirror Enterprise Edition for Metro Mirror only in cases where you named the PPRC tasks differently than recommended. If you used the recommended naming convention, you can skip this section.
PowerHA SystemMirror Enterprise Edition for Metro Mirror with DSCLI management
These topics describe the planning, installation, and configuration tasks for PowerHA SystemMirror Enterprise Edition for Metro Mirror with DSCLI management, from here on referred to as DSCLI management. DSCLI management simplifies how you manage PPRC replicated resources on IBM TotalStorage systems and how you can integrate PPRC replicated resources into a PowerHA SystemMirror configuration. You are not required to define tasks on the ESS Web Interface when you use this management system. Plan which resource groups will contain the DSCLI-managed PPRC replicated resources (if not already done in Chapter 2).
Overview of the DSCLI management system:
DSCLI management provides the ability to execute Copy Services operations directly without relying on previously saved GUI tasks. The software dynamically manages DSCLI PPRC-controlled disks, providing a fully automated, highly available disaster recovery management solution. The PowerHA SystemMirror interface is designed to communicate with the DSCLI so that once the basic PPRC environment is configured, PPRC relationships are created automatically and there is no need for manual access to the DSCLI.
Integration of DSCLI and PowerHA SystemMirror provides:
v Support for either Prefer Primary Site or Online on Either Site inter-site management policies.
v Flexible user-customizable resource group policies.
v Support for cluster verification and synchronization.
v Limited support for the PowerHA SystemMirror Cluster Single Point Of Control (C-SPOC). See Installing the DSCLI management for PPRC Filesets.
v Automatic fallover/reintegration of server nodes attached to pairs of PPRC-protected disk subsystems within and across sites. See Installing the DSCLI management for PPRC Filesets.
v Management of PPRC for:
Fallover/failback of PPRC paths and instances for automatic movement of PPRC-protected disks between PowerHA SystemMirror sites.
Automatic fallover of PPRC-protected volume groups between nodes within a site. See Installing the DSCLI management for PPRC Filesets.
Using DSCLI allows PowerHA SystemMirror to:
v Automatically set up PPRC paths and instances that PowerHA SystemMirror will manage.
v Manage switching the direction of the PPRC relationships when a site failure occurs, so that the backup site is able to take control of the PowerHA SystemMirror-managed resource groups from the primary site.
Related reference
Installing the DSCLI management for PPRC Filesets on page 55
These topics describe how to install DSCLI management for PPRC Filesets. You must be logged in as root to perform installation tasks.
Planning for DSCLI management:
You should be familiar with the planning tasks for PowerHA SystemMirror. For information about planning PowerHA SystemMirror clusters, see the Planning Guide.
It is assumed at this point that:
v PowerHA SystemMirror sites have been planned.
v Basic DSCLI (and ESS CLI, if required for ESS storage) support has been completely configured. Refer to the appropriate documentation on how to install and configure each.
To plan for DSCLI management in a PowerHA SystemMirror cluster, complete the following tasks:
v Identify the Copy Services Servers (CSS) to be used.
v Identify the disk subsystems to be used in the cluster.
v Identify the vpaths to be used in the configuration, including the volume IDs for each that correspond to the storage unit and LUN.
v Identify the PPRC Replicated Resources to be used.
v Identify the Port Pairs to be used for PPRC Paths.
v Identify Volume Pairs (LUNs).
v Identify the volume groups to be managed by PPRC Replicated Resources.
v Plan which resource groups will contain the DSCLI-managed PPRC replicated resources (if not already done in the General Planning section).
Related information
Planning guide
Limitations and restrictions for DSCLI management:
The current release of PowerHA SystemMirror Enterprise Edition for Metro Mirror with DSCLI management has some limitations and restrictions. Check the IBM Web site for the latest information on TotalStorage models and PowerHA SystemMirror support:
http://www.ibm.com/servers/storage/disk/index.html
Refer to the README packaged with the DSCLI management filesets for the most up-to-date limitations and restrictions.
Volume group limitations
1. A volume group must have the same volume major number across all cluster nodes. (A mismatched major number has been known to cause problems at cluster run time, and it is not guaranteed to be corrected during cluster verification.)
2. Resource groups to be managed by PowerHA SystemMirror cannot contain volume groups with both PPRC-protected and non-PPRC-protected disks. For example:
v VALID: RG1 contains VG1 and VG2, both PPRC-protected disks.
v INVALID: RG2 contains VG3 and VG4; VG3 is PPRC-protected, and VG4 is not.
v INVALID: RG3 contains VG5, which includes both PPRC-protected and non-protected disks within the same volume group.
3. Only non-concurrent volume groups can be used with PPRC.
Managed resource limitations
Resource groups cannot manage both DSCLI-managed and Direct management (ESS CLI)-managed PPRC resources simultaneously.
Note: ESS storage resources (LSS and LUNs) are considered DSCLI PPRC resources in this type of configuration because they are managed via the DSCLI interface and not the ESS CLI.
IBM TotalStorage Copy Services functions limitations
Only the IBM TotalStorage Copy Services function Synchronous PPRC (Metro Mirror) is supported (Global Copy and Global Mirror are not supported).
C-SPOC limitations
C-SPOC operations on nodes at the same site as the source volumes successfully perform all tasks supported in PowerHA SystemMirror. C-SPOC operations will not succeed on nodes at the remote site (that contain the target volumes) for the following LVM operations:
v Creating or extending a volume group.
v Operations that require nodes at the target site to write to the target volumes (for example, changing filesystem size, changing mount point, adding LVM mirrors) cause an error message in C-SPOC. However, nodes on the same site as the source volumes can successfully perform these tasks. The changes are subsequently propagated to the other site via lazy update.
v For C-SPOC operations to work on all other LVM operations, it is highly recommended that you perform all C-SPOC operations when the cluster is active on all PowerHA SystemMirror nodes and the underlying SVC consistency groups are in a consistent_synchronized state.
Related information
IBM Disk Storage Systems: Overview
Sample configuration for DSCLI management:
You can set up a mutual recovery configuration in which each site acts as a production site with the other site acting as an associated backup site. Implementing a mutual recovery configuration requires:
v Two PowerHA SystemMirror sites (the same as a single recovery configuration)
v Two resource groups. A standard configuration includes two PowerHA SystemMirror sites, each comprised of nodes attached to two PPRC-managed groups of ESSes spread across both sites. A CLI client (ESSNI client) must be installed on the PowerHA SystemMirror hosts. The ESSNI client is the interface between the application that intends to invoke PPRC commands and the ESSNI or HMC; it must be installed on all the ESSNI server nodes. PPRC services are invoked using the DSCLI within the ESSNI clients. The ESSNI client communicates with the ESSNI server. The ESSNI Server runs in an HMC for the ESS 2107 and on the management server for ESS 1750. It runs directly on the ESS clusters for the 2105. The ESS Server, in turn, communicates the CLI commands to the ESS disk controllers. A PPRC replicated resource contains the ESS disk volume pairs information. The PowerHA SystemMirror resource group definition includes the volume groups built on top of the PPRC replicated volumes. PowerHA SystemMirror manages PPRC processing by dynamically executing the DSCLI commands. You no longer have to define tasks on the ESS Web Interface. The example shows a typical implementation of two ESS model 2107s with PPRC in a four-node PowerHA SystemMirror geographic cluster. The cluster consists of four System p nodes. Each ESS is connected to each node (server) via a SCSI or Fibre Channel connection. Two PPRC links (ESCON or FC) between the ESSes provide the basic level of redundancy. One link carries data from the source Logical Subsystem (LSS) on the Primary site to the target LSS on the Secondary site, and the other link carries data in the other direction (source is on the Secondary site, target is on the Primary site). The configuration also includes point-to-point networks for heartbeating to connect the cluster nodes.
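Before filling in the example listing that follows, it can help to confirm each storage unit's fully qualified storage image ID and its WWNN from a cluster node, because the remote unit's WWNN is needed later when the PPRC paths are created. A minimal sketch, assuming the DSCLI client is installed in the default /opt/ibm/dscli location; m222h and the sluggo/batterup credentials are the sample values used later in this chapter, and the exact output columns depend on your DSCLI level:
/opt/ibm/dscli/dscli -user sluggo -passwd batterup -hmc1 m222h lssi
The lssi output includes the storage image ID (for example, IBM.2107-2222222) and the WWNN value that is later supplied to mkpprcpath through the -remotewwnn flag when the path is defined from the other site.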
Example listing for DSCLI-managed configuration for mutual takeover: Here is an example of the configuration information for a DSCLI-managed mutual takeover configuration.
DS Subsystems
-------------
m222:
Cluster1 IP: 9.22.22.22
Cluster2 IP: 9.44.44.44
ESS Storage ID: IBM.2107-2222222
Associated CS Server: m222h
m555:
Cluster1 IP: 9.55.55.55
Cluster2 IP: 9.77.77.77
ESS Storage ID: IBM.2107-5555555
Associated CS Server: m555h
Copy Services Servers
---------------------
m222h: IP address 9.112.112.2
m555h: IP address 9.115.115.2
Available IO ports
------------------
m222: I0002, I0012
m555: I0005, I0015
Volumes (LUNs) to be used
-------------------------
m222: 1200, 1201
m555: 1200, 1201
LSS to be used
--------------
m222: 12
m555: 12
DSCLI-Managed PPRC Replicated Resource for Resource Group RG1
--------------------------------------------------------------
PPRC Resource Name: sample_res1
PowerHA SystemMirror Sites: SiteA SiteB
Volume Pairs: 1200->1200
ESS Pair: m222 m555
LSS Pair: 12 12
PPRC Type: mmir
PRI-SEC PortPairIDs: I0002->I0005
SEC-PRI PortPairIDs: I0015->I0012
PPRC Link Type: fcp
PPRC Recovery Action: AUTOMATED
Volume Group: sample_VG1
DSCLI-Managed PPRC Replicated Resource for Resource Group RG2
--------------------------------------------------------------
PPRC Resource Name: sample_res2
PowerHA SystemMirror Sites: SiteB SiteA
Volume Pairs: 1201->1201
ESS Pair: m555 m222
LSS Pair: 12 12
PPRC Type: mmir
PRI-SEC PortPairIDs: I0005->I0002
SEC-PRI PortPairIDs: I0012->I0015
PPRC Link Type: fcp
PPRC Recovery Action: AUTOMATED
Volume Group: sample_VG2
Note that the RG2 definitions of Volume Pairs, ESS Pair, LSS Pair, and PortPairIDs are listed in reference to the source of the PPRC instance that will be used by the resource: RG2 is intended to come ONLINE with the PowerHA SystemMirror secondary site as the "source." RG1 is intended to come ONLINE with the PowerHA SystemMirror primary site as "source." From this point, resource groups RG1 and RG2 will be configured to include the DSCLI-Managed PPRC Replicated Resources sample_res1 and sample_res2, respectively.
Setting up volume groups and file systems on DSCLI-protected disks:
Although not required, you should complete these steps prior to planning. These steps must be completed prior to the initial PowerHA SystemMirror verification to avoid verification errors.
1. Make sure that the hdisks and corresponding vpaths made available to your nodes are visible at those nodes. If they are not, and you can verify that the nodes have been cabled and configured correctly to make the vpaths available, reboot the node and run cfgmgr to make the disks visible.
2. Based on the LUNs you have selected for a given PPRC relationship, determine which vpaths and hdisks correspond. Use the following utility on a node at both the primary and backup PowerHA SystemMirror sites:
/usr/sbin/lsvpcfg
Note: It is likely (although not required) that the LUNS will be different at each site, and it is also likely (although not required) that the vpaths will be different on each node in each site.
The output will look something like this (this output would be from a PowerHA SystemMirror node in siteA of the example configuration; assume more than one type of storage unit is connected to this node):
smithers) /usr/sbin/lsvpcfg
vpath12 (Avail ) 13AABKK1602 = hdisk14 (Avail ) hdisk44 (Avail ) hdisk74 (Avail ) hdisk104 (Avail )
vpath13 (Avail ) 13AABKK1603 = hdisk15 (Avail ) hdisk45 (Avail ) hdisk75 (Avail ) hdisk105 (Avail )
vpath14 (Avail ) 13AABKK1604 = hdisk16 (Avail ) hdisk46 (Avail ) hdisk76 (Avail ) hdisk106 (Avail )
vpath15 (Avail ) 22222221100 = hdisk17 (Avail ) hdisk47 (Avail ) hdisk77 (Avail ) hdisk107 (Avail )
vpath16 (Avail pv sample_VG1) 22222221200 = hdisk18 (Avail ) hdisk48 (Avail ) hdisk78 (Avail ) hdisk108 (Avail )
vpath17 (Avail pv sample_VG2) 22222221201 = hdisk19 (Avail ) hdisk49 (Avail ) hdisk79 (Avail ) hdisk109 (Avail )
The third column in this output corresponds to the storage system ID and LUN associated with the vpath listed in column one. Examples: vpath12 above (which has no volume group created on it yet) is on storage system IBM.XXXX-13AABKK, LSS 16, LUN 002. vpath17 (which has volume group sample_VG2 created on it) is on a different system, with a different LSS/LUN: IBM.XXXX-2222222, LSS 12, LUN 001. To fill in the 'XXXX' portion of the system ID above, either consult your own documentation or use
lsdev -Ccdisk | grep <hdisk associated with the vpath in question>
to display the underlying disk type: IBM FC 1750 or IBM FC 2107. Example:
smithers) /usr/sbin/lsdev -Ccdisk | grep hdisk14 hdisk14 Available 2A-08-02 IBM FC 1750
You see that vpath12 is on IBM.1750-13AABKK, LSS 16, LUN 002.
3. Create volume groups and file systems at the Primary PowerHA SystemMirror site.
a. On one of the nodes at the Primary PowerHA SystemMirror site, on the vpaths that correspond to the volume pairs for a given PPRC relationship, set up the volume group(s) and file system(s) to be managed by PowerHA SystemMirror. Ensure that the Volume Major Numbers for the volume groups can be used on all PowerHA SystemMirror cluster nodes, and that the physical volume name for the filesystem can also be used across all PowerHA SystemMirror cluster nodes.
Note: Use the /usr/sbin/lvlstmajor command on each node in the cluster to list the available volume major numbers and select a number that is free on all nodes.
b. After successfully creating all necessary volume groups and file systems on the first node, import the data to all other nodes at the same site.
4. Use PPRC to mirror the disks to the backup PowerHA SystemMirror site.
5. Create a temporary PPRC relationship in order to copy the volume group/file system information to the remote disks. Run these commands to set up the PPRC path and instance, and to copy over the local disk information (refer to the DSCLI documentation, or run dscli help <command> for more details):
a. mkpprcpath
/opt/ibm/dscli/dscli -user <userid> -passwd <password> -hmc1 <local hmc name> mkpprcpath -dev <local storage device ID> -remotedev <remote storage device ID> -srclss <source LSS> -tgtlss <target LSS> -remotewwnn <remote WWNN> <local port>:<remote port>
Example, where IBM.2107-2222222 is at the Primary PowerHA SystemMirror site, IBM.2107-5555555 is at the Backup PowerHA SystemMirror site:
/opt/ibm/dscli/dscli -user sluggo -passwd batterup -hmc1 m222h mkpprcpath -dev IBM.2107-2222222 -remotedev IBM.2107-5555555 -srclss 12 -tgtlss 13 -remotewwnn 6005076303FFC354 I0002:I0005
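Before or after creating the path, you can confirm which I/O port pairs are valid candidates for PPRC between the two LSSs with the DSCLI lsavailpprcport command. A minimal sketch reusing the sample storage IDs, HMC, credentials, and remote WWNN from the mkpprcpath example above (all illustrative values):
/opt/ibm/dscli/dscli -user sluggo -passwd batterup -hmc1 m222h lsavailpprcport -dev IBM.2107-2222222 -remotedev IBM.2107-5555555 -remotewwnn 6005076303FFC354 12:13
Only port pairs reported by this command should normally be used as the PRI-SEC and SEC-PRI PortPairIDs in the PPRC replicated resource definitions.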
b. mkpprc
/opt/ibm/dscli/dscli -user <userid> -passwd <password> -hmc1 <local hmc name> mkpprc -dev <local storage device ID> -remotedev <remote storage device ID> -type <mmir|gcp> -mode full <local LUN>:<remote LUN>
Example:
/opt/ibm/dscli/dscli -user sluggo -passwd batterup -hmc1 m222h mkpprc -dev IBM.2107-2222222 -remotedev IBM.2107-5555555 -type mmir -mode full 1002:1002
At this point, you should have a PPRC instance available, and in a copying state:
/opt/ibm/dscli/dscli -user <userid> -passwd <password> -hmc1 <local hmc name> lspprc -dev <local storage device ID> -remotedev <remote storage device ID> <local LUN>:<remote LUN>
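For example, using the same sample values as the mkpprc command above (volume IDs and credentials are illustrative), the following checks the state of the new pair; wait until the reported state moves from a copy-pending state to a full-duplex state before continuing to the next step. The exact state strings can vary slightly between DSCLI levels:
/opt/ibm/dscli/dscli -user sluggo -passwd batterup -hmc1 m222h lspprc -dev IBM.2107-2222222 -remotedev IBM.2107-5555555 1002:1002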
c. rmpprc
Once the PPRC relationship has completed copying, delete the relationship (if you do not, the Backup PowerHA SystemMirror site nodes will not have write access to the LUNs, and you will not be able to import the new volume groups):
/opt/ibm/dscli/dscli -user <userid> -passwd <password> -hmc1 <local hmc name> rmpprc -quiet -dev <local storage device ID> -remotedev <remote storage device ID> <local LUN>:<remote LUN>
This step is necessary in order for the next LVM operations to complete successfully.
d. Using SMIT or the command line on the backup PowerHA SystemMirror site (the site that is connected to the remote disk subsystem), import the volume group(s) created in step 3a.
At this point, the volume groups and filesystems necessary to configure PowerHA SystemMirror have been created.
Planning Copy Services Servers (CSS):
The first step in configuring DSCLI management is to plan which CSS systems will interact with which PowerHA SystemMirror sites, and record some basic information about the CSS systems. It is assumed at this point that PowerHA SystemMirror sites have already been planned and/or defined.
Use the Copy Services Server Worksheet to complete this next section. The table is constructed so that each CSS can be easily matched with the PowerHA SystemMirror site it will primarily communicate with. Record the following information, one entry per CSS (or HMC or SMC, whichever is appropriate for your configuration):
CSS site name: Name of the PowerHA SystemMirror site where the CSS resides. The site should already be defined in PowerHA SystemMirror.
CSS Subsystem Name: Short name for the system hosting CSS. This is the host name of the CSS or controller the services reside on.
CLI Type: For DSCLI management configurations, this entry must be DSCLI. Note: This field toggles, but the second entry, ESSCLI, is not applicable for this configuration; it is used for Direct management configurations.
CSS IP Address: IP address, in dotted decimal notation, of the system hosting CSS; it is used by PowerHA SystemMirror to submit DSCLI commands.
CSS User ID: User ID used to authenticate logging into the CSS; necessary to complete DSCLI commands.
CSS password: Password associated with the specified CSS User ID; necessary to complete DSCLI commands.
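Once the CSS host name, IP address, and credentials have been recorded, it is worth confirming from each cluster node that the entry actually works before continuing. A minimal sketch, assuming the DSCLI client is installed in /opt/ibm/dscli; substitute the values from your worksheet for the placeholders:
/opt/ibm/dscli/dscli -user <CSS user ID> -passwd <CSS password> -hmc1 <CSS IP address> lssi
If the command returns the storage image ID of the disk subsystem associated with that CSS, the node can reach the CSS with the recorded credentials.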
Related reference
Copy Services Server Worksheet on page 86
Use the Copy Services Server Worksheet to record Copy Services Server information to support PPRC DSCLI Management.
Planning DS/ESS Disk Subsystems:
In the worksheet, plan which disk subsystems will interact with the PowerHA SystemMirror sites. It is assumed that PowerHA SystemMirror sites have already been planned and/or defined. It is suggested that you do this step prior to setting up any volume groups, as it may assist in keeping things straight during that process.
Use the DS/ESS Disk Subsystem Worksheet (in Worksheets for DSCLI management) to complete this section.
1. Record the PowerHA SystemMirror Sites on a copy of the worksheet.
2. Record the following information on a copy of the worksheet, one entry per storage system to be configured. The worksheet is set up so that storage systems sharing PPRC connections can be written up together.
Note: The term ESS found on the worksheets also refers to DS systems in this context:
ESS site name: Name of the PowerHA SystemMirror site where this storage system will be directly connected.
ESS Cluster 1 IP Address: IP address, in dotted decimal notation, of the first controller on the storage system.
ESS Cluster 2 IP Address: IP address, in dotted decimal notation, of the second controller on the storage system.
ESS User ID: User ID used to authenticate logging into the ESS, if available.
ESS password: Password associated with the specified ESS User ID, if available.
Full ESS Storage ID: Enter the fully qualified ESS storage image ID. This includes the manufacturer, device type, model, and serial numbers (MTMS). The format is: manufacturer.type-model-serial number. For example: IBM.2107-921-75FA120
List of CS Servers: List the two CSS servers that will communicate with this disk subsystem. This information was recorded in the Copy Services Server worksheets in the previous step.
3. Repeat for all disk subsystems to be included in the configuration. Related reference DS ESS Disk Subsystem Worksheet on page 88 Use the DS ESS Disk Subsystem Worksheet to record information about the ESS disk subsystems that contain PPRC pairs Planning Primary and Secondary site layout for resource groups: In order for a resource group to come ONLINE correctly at cluster startup time, the definition of the Primary site must be the same for the PPRC replicated resource and the resource group that contains that resource. Defining the Primary Site for a DSCLI-Managed PPRC replicated resource The order that data is entered into the SMIT panel fields when defining a PPRC Replicated Resource indicates which site will be primary. The following entries are order-sensitive. In all cases (unless otherwise specified), the first entry corresponds to information for the primary site. v PowerHA SystemMirror Sites v PPRC Volume Pairs
Primary site LUNs are 1200 & 1201. Secondary site LUNs are 1300 & 1301.
v ESS Pair
v LSS Pair
v Pri-Sec Port Pair IDs
For both Pri-Sec and Sec-Pri, the following format is correct when entering data in the SMIT panels:
[I0022->I0052 I0023->I0053]
where I0022 & I0023 correspond to Ports on the storage system directly connected to the site that will be primary for the PPRC Replicated Resource, and I0052 & I0053 are Ports on the storage system directly connected to the secondary site. v Sec-Pri Port Pair IDs If you were going to use the same Port pairs as in the example above, the Sec-Pri listing would look like this:
[I0052->I0022 I0053->I0023]
where the Ports are on the same systems as described above. For more examples and information, see the Sample Configuration section of this chapter.
Defining the Primary Site for a PowerHA SystemMirror resource group
When defining the Primary and Secondary site nodes while creating a PowerHA SystemMirror resource group, select nodes from the site where you want the resource group to come ONLINE during cluster startup as the Primary Site nodes. Note that combining the Online on Either Site inter-site management policy with the Online on First Available Node startup policy will allow the resource group to come ONLINE on a node other than those defined as belonging to its defined Primary site.
Example:
PowerHA SystemMirror Site 1: node11, node12
PowerHA SystemMirror Site 2: node21, node22
To define a PowerHA SystemMirror Resource Group that will come up on Site 1:
Participating Nodes from Primary Site [node11, node12]
Participating Nodes from Secondary Site [node21, node22]
To define a PowerHA SystemMirror Resource Group that will come up on Site 2:
Participating Nodes from Primary Site [node21, node22]
Participating Nodes from Secondary Site [node11, node12]
For more examples and information, see the Sample configuration section.
Related reference
Sample configuration for DSCLI management on page 45
You can set up a mutual recovery configuration in which each site acts as a production site with the other site acting as an associated backup site.
Planning DSCLI-Managed PPRC replicated resources:
For this step, complete a DSCLI Replicated Resource Worksheet. Also on this sheet, you will find a column for entering the PowerHA SystemMirror Resource Group that will manage this Replicated Resource. This is to serve as a cross-reference for the next step in Planning.
Note: When configuring DSCLI-managed PPRC Replicated Resources, the layout of the information is pertinent to where (which PowerHA SystemMirror site) a resource will come ONLINE during cluster startup. See Planning Primary and Secondary site layout for resource groups.
For each PPRC Replicated Resource to be configured, enter the following:
PPRC Resource Name: Name that will be used by PowerHA SystemMirror to create the DSCLI Volume Group. Use no more than 64 alphanumeric characters and underscores.
PowerHA SystemMirror Sites: Enter the name of the PowerHA SystemMirror primary site followed by the name of the PowerHA SystemMirror secondary site. The primary site is the one where the PPRC Replicated Resource will come ONLINE initially during cluster startup.
PPRC Volume Pairs: LUN pairs that correspond to the vpaths that will support managed volume groups. The format is: Primary Volume ID:Secondary Volume ID
v All PPRC volume pairs in a PPRC replicated resource consist of volumes from the same LSS pair.
v Volumes in a PPRC replicated resource must be from the same volume group.
v A volume group can span more than one LSS.
v A volume group can span more than one disk subsystem.
ESS Pair: Names of the DS or ESS storage systems that contain the LUNs the volume pairs correspond to. The first name in the list is the primary and the second is the secondary.
LSS Pair: Names of the DS or ESS storage system Logical Subsystems that the LUN pairs exist on. The first name in the list is the primary LSS and the second is the secondary LSS.
PPRC Type: Indicates whether the PPRC volume relationships will be Metro Mirror (Synchronous) mmir or Global Copy (Extended Distance) gcp relationships. (Global Copy is not yet supported for PowerHA SystemMirror Enterprise Edition.)
v Metro Mirror (Synchronous) maintains the PPRC relationship in a consistent manner. I/O write completion status is returned to the application once the updates are committed to the target ESS.
v Global Copy (Extended Distance) maintains the PPRC relationship in a non-synchronous manner. I/O write completion status is returned to the application once the updates are committed to the source ESS. Updates to the target volume are performed at a later point in time. The original order of updates is not strictly maintained.
Pri-Sec Port Pair IDs: I/O Ports that will be used for PPRC communication between the primary and secondary storage systems named in the ESS Pair entry. The source and target port must be a Fibre Channel / ESCON I/O port that is configured for point-to-point or switch fabric topology. You can have up to 8 PPRC Path Port Pair IDs defined for each pair of LSSs. Example of three Port pairs: I1A10:I2A20<space>I1A11:I2A21<space>I1A12:I2A22
Sec-Pri Port Pair IDs: I/O Ports that will be used for PPRC communication between the secondary and primary storage systems named in the ESS Pair entry. The example here shows the reverse source and target order of the Pri-Sec pair IDs (I2 > I1 instead of I1 > I2): I2A10:I1A20<space>I2A11:I1A21<space>I2A12:I1A22
PPRC Link Type: Select ESCON or FCP depending on the physical connection you are using between the two storage systems listed in the ESS Pair entry.
PPRC Critical Mode: This option is used to write-protect the source volume. If the last path fails between the pairs, resulting in the inability to send information to the target, the source becomes write-protected. Current updates and subsequent attempts to update the source will fail, with a check condition on SCSI. Values are ON (set critmod) or OFF (do not set critmod). The default is OFF.
PPRC Recovery Action: Select the PowerHA SystemMirror recovery action for a site failure for PPRC-XD type volume pairs. MANUAL: User action required. AUTOMATED: No user action required.
Volume Group: Volume group that will exist on this PPRC instance (replicated resource). The volume group can include volume pairs from different LSSes as well as from different ESSes.
Related reference Planning Primary and Secondary site layout for resource groups on page 51 In order for a resource group to come ONLINE correctly at cluster startup time, the definition of the Primary site must be the same for the PPRC replicated resource and the resource group that contains that resource. PPRC Replicated Resources Worksheet on page 88 Use the PPRC Replicated Resources Worksheet to record information about the PPRC replicated resources that go in PowerHA SystemMirror resource groups Completing the Resource Group Worksheet: Complete a Resource Groups Worksheet for each resource group that includes DSCLI-managed PPRC replicated resources. To identify information required to include DSCLI-managed PPRC replicated resources in a resource group, enter the following information on the Resource Groups Worksheet: 1. Record the Resource Group Name for the resource group that will contain DSCLI-managed PPRC replicated resources. The name can contain up to 64 characters and underscores, but without a leading numeric character. 2. Record the startup, fallover, and fallback policies for the resource group. Note: PPRC enforces that disks at the secondary site are not I/O accessible. Because of this, the following Resource Group policies are not supported: v Inter-Site Management Policies: Online on Both Sites v Startup Policy: Online Using Node Distribution Policy, Online On All Available Nodes v Fallover Policy: Fallover Using Dynamic Node Priority 3. Record the Inter-Site Management Policy: Prefer Primary Site or Online on Either Site. 4. Record the names of the nodes in the resource group, listing the nodes in the prioritized order in which they will appear in the node list for the resource group. 5. Record the name of each DSCLI-managed PPRC replicated resource (volume set) to be included in the resource group. 6. Record the names of the volume groups associated with the volume sets to include in the resource group.
Related reference
Resource Groups for PPRC Replicated Resources Worksheet on page 82
Use the Resource Groups for PPRC Replicated Resources Worksheet to record resource groups that contain PPRC replicated resources.
Installing the DSCLI management for PPRC Filesets:
These topics describe how to install DSCLI management for PPRC Filesets. You must be logged in as root to perform installation tasks.
See Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror for general installation instructions.
Related reference
Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror on page 11
These topics describe how to install base PowerHA SystemMirror Enterprise Edition filesets. Details for installing filesets and prerequisites for specific types of PPRC support are contained in subsequent sections.
Installing prerequisite software:
Before installing PowerHA SystemMirror Enterprise Edition DSCLI management for PPRC (spprc filesets), the following prerequisite software must be installed on cluster nodes.
Note: Check the README for updates on versions supported after the first release of this software.
1. The latest PowerHA SystemMirror Enterprise Edition version. (Refer to Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror for more information.)
2. IBM Subsystem Device Driver (SDD) for the storage system you are using. Check the documentation and/or Web site for the currently approved SDD version to use with a given microcode version. Make sure the following filesets are installed:
a. devices.fcp.disk.ibm.rte (1.0.0.0)
Note: It may not be clear during installation of the SDD filesets that this first fileset is necessary, but it is critical for correct fallover behavior.
b. devices.sdd.**.rte (latest version)
c. devices.ibm2105.rte (latest version)
Note: This fileset provides connection scripts for all of the ESS and DS disk types, including ESS 800, DS 8000, and DS 6000. Refer to the storage website, www.storage.ibm.com, for more information on current levels of this driver.
3. DSCLI client software and other configuration-specific prerequisites, as shipped with the microcode for the storage systems to be used in this cluster. Refer to the DSCLI documentation for more information on how to install and configure this software.
4. (Optional) If ESS storage systems are going to be included in your cluster, ESS CLI software is required; it is shipped with the microcode for the storage hardware. DSCLI management code expects the ESS CLI to be installed in the following (non-standard) directory:
/opt/ibm/ibm2105cli
so you may have to create a link from the actual installation location to this location. Be mindful of this: if the ESS CLI is not reachable at this path, it will cause problems during both verification and run time for your cluster.
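For example, a minimal sketch of creating such a link; the source directory shown is only an assumed ESS CLI installation path, so replace it with the location where the ESS CLI was actually installed:
# create the non-standard directory expected by DSCLI management as a link
# to the real ESS CLI installation directory (source path is an assumption)
ln -s /opt/ibm/ESScli /opt/ibm/ibm2105cli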
Related reference
Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror on page 11
These topics describe how to install base PowerHA SystemMirror Enterprise Edition filesets. Details for installing filesets and prerequisites for specific types of PPRC support are contained in subsequent sections.
Installing the DSCLI management filesets:
You need to install the necessary filesets for DSCLI management. If you have not already done so, install the filesets listed in Contents of the installation media for DSCLI management.
Related reference
Contents of the installation media on page 12
The PowerHA SystemMirror Enterprise Edition for Metro Mirror installation media provides the images for installation on each node in the cluster that can take over a PPRC mirrored volume group.
Installation directories:
All PowerHA SystemMirror-PPRC programs and scripts are located in specific directories and subdirectories. These include:
/usr/es/sbin/cluster/pprc /usr/es/sbin/cluster/pprc/spprc
All DSCLI programs and scripts are located in the following directory and sub-directories:
/opt/ibm/dscli
All ESSCLI programs and scripts are expected to be located in the following directory:
/opt/ibm/ibm2105cli
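To check what is already installed on a node, you can list the relevant filesets and confirm that the DSCLI client is where PowerHA SystemMirror expects it. A minimal sketch; cluster.es.spprc is named in the next section, while the other fileset pattern is an assumption that may differ on your installation media:
# list the base PPRC and SPPRC (DSCLI management) filesets, if present
lslpp -l "cluster.es.pprc*" "cluster.es.spprc*"
# confirm the DSCLI client is installed in its expected location
ls -l /opt/ibm/dscli/dscli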
Upgrading to the latest version of PowerHA SystemMirror Enterprise Edition for Metro Mirror:
When upgrading from a previous version of PowerHA SystemMirror Enterprise Edition for Metro Mirror, you can choose to upgrade the base pprc filesets only, or you can add the cluster.es.spprc filesets (to add DSCLI management).
Note: Currently there is no migration path from eRCMF management or SVC management to DSCLI management. The statements here apply only to Direct management.
You can install the cluster.es.spprc filesets in your current base (Direct management) PPRC environment and continue to operate the current environment. This is possible because the base pprc and spprc configuration information is stored in different configuration databases (ODMs). The spprc filesets are installed automatically if you already have a previous version of them on your system when you install via smitty update_all.
Configuring PowerHA SystemMirror Enterprise Edition for Metro Mirror using DSCLI management:
These topics explain how to configure DSCLI management with PowerHA SystemMirror.
Configuration requirements
Before configuring PowerHA SystemMirror Enterprise Edition for Metro Mirror with the DSCLI management interface, ensure that: v PPRC is configured and running on the storage systems. v ESSNI client and server software is installed (DSCLI software on all PowerHA SystemMirror cluster nodes, for example). v You have a good understanding of PowerHA SystemMirror sites for PPRC replicated resources. For more information about sites, see the section PowerHA SystemMirror Sites. v Both the base PowerHA SystemMirror Enterprise Edition for Metro Mirror and DSCLI management filesets are installed on each cluster node. v The PowerHA SystemMirror cluster is configured for: Nodes Sites Networks and network interfaces Service labels, application monitors, etc. Initial resource groups
You can modify the attributes for a resource group later to accommodate PPRC replicated resources. Steps for setting up the DSCLI management interface: 1. Configure PPRC-Managed Replicated Resources (use the SMIT panels at the bottom of the main PowerHA SystemMirror PPRC-Managed Replicated Resources menu): a. Configure Copy Services Servers b. Configure disk systems to be included c. Configure DSCLI-Managed PPRC replicated resources 2. Configure PowerHA SystemMirror resource groups to include PPRC-Managed replicated resources. Configuring DSCLI-managed PPRC replicated resources: You should configure DSCLI-managed PPRC replicated resources using SMIT panel. To define DSCLI-Managed PPRC replicated resources: 1. Enter smit hacmp 2. In SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Configure PPRC Replicated Resources and press Enter. From this panel you can: v Configure Copy Services Server v Configure DS ESS disk subsystems v Configure DSCLI-Managed PPRC replicated resources. Configuring Copy Services Server: Configure Copy Service Server using the SMIT panel. To configure the Copy Services Server: 1. Enter smit hacmp 2. In SMIT, select Extended Configuration > HACMP Extended Resources Configuration > Configure PPRC Replicated Resources > Configure Copy Services Server > Add a Copy Services Server and press Enter. 3. In the Add a Copy Services Server panel, enter field values as follows:
CSS Subsystem Name: Name that identifies the Copy Services Server. The name may consist of up to 64 alphanumeric characters, and may include underscores.
CSS site name: Name of the PowerHA SystemMirror site where the CSS resides. The site must already be defined in PowerHA SystemMirror for the site name to be available from the picklist.
CLI Type: Select DSCLI if you are using the ESS 2107. Select ESSCLI if you are using the ESS 2105.
CSS IP Address: IP address, in dotted decimal notation, that the Copy Services Server uses. (This is different from the ESS IP address.)
CSS User ID: User ID used to authenticate logging into the CSS.
CSS password: Password associated with the specified CSS User ID.
4. Press Enter.
5. Repeat these steps for the CSS on the other site.
Defining the DS ESS Disk Subsystems to PowerHA SystemMirror:
You define the DS ESS subsystems included in the sites that support PowerHA SystemMirror Enterprise Edition for Metro Mirror to enable PowerHA SystemMirror to process fallovers for PPRC replicated resources.
Consult your completed ESS Disk Subsystem Worksheet for the values planned for your cluster.
To define a DS ESS system to PowerHA SystemMirror:
1. Enter smit hacmp
2. In SMIT, select Extended Configuration > Extended Resource Configuration > Configure PPRC Replicated Resources > Configure DS ESS Disk Subsystem > Add an ESS Disk Subsystem and press Enter.
3. In the Add an ESS Disk Subsystem panel, enter field values as follows:
ESS Subsystem Name: Name that identifies the ESS subsystem to PowerHA SystemMirror. The name may consist of up to 64 alphanumeric characters, and may include underscores.
ESS site name: Name of the site where the ESS resides. The site must already be defined in PowerHA SystemMirror for the site name to be available from the picklist.
ESS Cluster 1 IP Address: IP address, in dotted decimal notation, of the ESS disk subsystem in cluster 1. Note: The IP address specified here is the ESSNet address of the ESS at this site.
ESS Cluster 2 IP Address: IP address, in dotted decimal notation, of the ESS disk subsystem in cluster 2. Note: The IP address specified here is the ESSNet address of the ESS at this site.
ESS User ID: User ID used to authenticate logging into the ESS, if available.
ESS password: Password associated with the specified ESS User ID, if available.
Full ESS Storage ID: Enter the fully qualified ESS storage image ID. This includes the manufacturer, device type, model, and serial numbers (MTMS). The format is: manufacturer.type-model-serial number. For example: IBM.2107-921-75FA120
List of CS Servers: From the list, select the CSS that will manage the PPRC of this Disk Subsystem.
4. Press Enter. 5. Review the settings for the ESS subsystem. From the Configure an ESS Disk Subsystem panel, select Change/Show an ESS Disk Subsystem and select an ESS system to view. Make changes if necessary and press Enter. 6. Repeat these steps to enter the information for the DS ESS at the second site.
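After both disk subsystems are defined, it can be useful to confirm from a cluster node that the LUNs you plan to mirror actually exist on each storage image. A minimal sketch using the DSCLI lsfbvol command with the sample IDs from this chapter; the volume range, HMC name, and credentials are illustrative:
/opt/ibm/dscli/dscli -user sluggo -passwd batterup -hmc1 m222h lsfbvol -dev IBM.2107-2222222 1200-1201
Repeat against the HMC for the second site (m555h and IBM.2107-5555555 in the example) to confirm the corresponding target volumes.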
Related reference DS ESS Disk Subsystem Worksheet on page 88 Use the DS ESS Disk Subsystem Worksheet to record information about the ESS disk subsystems that contain PPRC pairs Adding a DSCLI-managed PPRC replicated resource: You can add a DSCLI-managed PPRC replicated resource to your configuration. To add a PPRC replicated resource: 1. Enter smit hacmp 2. In SMIT, select Extended Configuration > Extended Resource Configuration > Configure PPRC Replicated Resources > DSCLI-Managed PPRC Replicated Resource Configuration > Add a PPRC Replicated Resource and press Enter. 3. In the Add a Replicated Resource panel, enter field values as follows:
PPRC Resource Name: Enter a name for the set of PPRC volume pairs that make up the PPRC replicated resource. Use no more than 64 alphanumeric characters and underscores.
PowerHA SystemMirror Sites: Enter the names of the PowerHA SystemMirror sites (already defined to PowerHA SystemMirror). Enter the name of the primary site followed by the secondary site.
PPRC Volume Pairs: List of PPRC volume pairs that are contained in this PPRC replicated resource. The format is: Primary Volume ID:Secondary Volume ID
v All PPRC volume pairs in a PPRC replicated resource consist of volumes from the same LSS pair.
v Volumes in a PPRC replicated resource must be from the same volume group.
v A volume group can span more than one LSS.
v A volume group can span more than one ESS disk subsystem.
ESS Pair: Set of ESSs associated with this PPRC resource. The first name in the list is the primary ESS and the second is the secondary ESS.
LSS Pair: Set of LSSs associated with this PPRC resource. The first name in the list is the primary LSS and the second is the secondary LSS.
PPRC Type: Indicates whether the PPRC volume relationships will be Metro Mirror (Synchronous) mmir or Global Copy (Extended Distance) gcp relationships.
v Metro Mirror (Synchronous) maintains the PPRC relationship in a consistent manner. I/O write completion status is returned to the application once the updates are committed to the target ESS.
v Global Copy (Extended Distance) maintains the PPRC relationship in a non-synchronous manner. I/O write completion status is returned to the application once the updates are committed to the source ESS. Updates to the target volume are performed at a later point in time. The original order of updates is not strictly maintained.
Pri-Sec Port Pair IDs: List of PPRC Path Port Pair IDs of PPRC Links between a primary LSS and a secondary LSS. The source and target port must be a Fibre Channel / ESCON I/O port that is configured for point-to-point or switch fabric topology. A PPRC Path Port Pair ID consists of two Port IDs, one designated as the source port and the other as the target port for a PPRC path. The first Port ID is the designated source Port. The second Port ID is the designated target Port. Separate the two Port IDs of a PPRC Path Port Pair ID with a colon and no whitespace. You can have up to 8 PPRC Path Port Pair IDs defined for each pair of LSSs. Use a white space to separate multiple PPRC Path Port Pair IDs. Example of three Port pairs: I1A10:I2A20<space>I1A11:I2A21<space>I1A12:I2A22
Sec-Pri Port Pair IDs: Same as above, but for communication from the secondary to the primary storage system. The example here shows the reverse source and target order of the Pri-Sec pair IDs (I2 > I1 instead of I1 > I2): I2A10:I1A20<space>I2A11:I1A21<space>I2A12:I1A22
PPRC Link Type: Select ESCON or FCP depending on the connection you are using for the PPRC path.
PPRC Critical Mode: This option is used to write-protect the source volume. If the last path fails between the pairs, resulting in the inability to send information to the target, the source becomes write-protected. Current updates and subsequent attempts to update the source will fail, with a unit check on S/390 or a check condition on SCSI. Values are ON (set critmod) or OFF (do not set critmod). The default is OFF.
PPRC Recovery Action: Select the recovery action for PPRC-XD volume pairs. MANUAL: User action required. AUTOMATED: No user action required.
Volume Group: Volume group that contains the PPRC volume pairs included in this PPRC replicated resource. The volume group can include volume pairs from different LSSes as well as from different ESSes.
4. Press Enter. 5. Repeat as necessary to define more DSCLI-Managed PPRC replicated resources. Configuring resource groups: After defining PPRC replicated resources, you can add them to a resource group. Configure resource groups following the procedures in Configuring HACMP resource groups (Extended) in the Administration Guide. Make sure that you understand site support for PowerHA SystemMirror resource groups, see Planning resource groups in the Planning Guide. When you configure a resource group ensure that: v The site policy is set to Prefer Primary Site or Online on Either Site. v A Startup policy other than Online on All Available Nodes is specified. Consult your completed Resource Groups for PPRC Replicated Resources Worksheet for the values planned for your cluster. To add a PPRC replicated resource to a resource group: 1. Enter smit hacmp 2. In SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Group Configuration > Change/Show Resources and Attributes for a Resource Group and press Enter. 3. In the Change/Show Resources and Attributes for a Resource Group panel specify: v The name of the PPRC replicated resource in the PPRC Replicated Resources field. v The volume groups associated with the individual PPRC replicated resources. The PPRC Replicated Resources entry is a picklist that displays the resource names created in the previous step. Make sure that the volume groups selected on the Resource Group configuration screen match the volume groups used in the PPRC Replicated Resource. 4. You must verify before synchronizing the cluster.
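Before verifying, one quick cross-check on a node at the primary site is to confirm that each volume group named in the resource group sits on a vpath whose volume ID belongs to the PPRC replicated resource. A minimal sketch using the sample volume group names from this chapter:
# vpath16 should report "pv sample_VG1" and a volume ID ending in 1200 on the primary storage image
/usr/sbin/lsvpcfg | grep -E "sample_VG1|sample_VG2"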
Related tasks
Verifying the DSCLI-managed PPRC configuration
Verifying the configuration for DSCLI-managed PPRC replicated resources checks the configuration.
Related reference
Resource Groups for PPRC Replicated Resources Worksheet on page 82
Use the Resource Groups for PPRC Replicated Resources Worksheet to record resource groups that contain PPRC replicated resources.
Related information
Configuring HACMP resource groups (extended)
Planning resource groups
Verifying the DSCLI-managed PPRC configuration:
Verifying the configuration for DSCLI-managed PPRC replicated resources checks the configuration. It also reports on the following issues:
v SPPRC information in the PowerHA SystemMirror Configuration Database (ODM) is identical on all nodes.
v The DSCLI Command Line Interface is installed correctly on each node.
v The PPRC volume groups are not in concurrent volume groups that span nodes on different sites.
v Sites are properly defined for the PowerHA SystemMirror and PPRC configuration.
v The IP addresses of the ESS systems exist and are reachable.
v The ESS systems for the PPRC replicated resources are defined as ESS disk subsystems to PowerHA SystemMirror.
v The two volumes of a PPRC pair are on different ESS systems and on different PowerHA SystemMirror sites.
v The volume IDs correspond to physical volumes defined to PowerHA SystemMirror cluster nodes.
v The PVIDs of the disks in a volume group at each end of a pair are the same.
v All PPRC volume pairs in a PPRC replicated resource consist only of volumes from the same LSS pair.
v Volumes in a PPRC replicated resource are from the same volume group.
v Correct PPRC links and their Port IDs have been defined.
v The defined CLI path and the ESSNI client jar file exist on all the PowerHA SystemMirror servers.
v The volume pairs have volumes that do exist on the ESSs defined to PowerHA SystemMirror.
To verify the PowerHA SystemMirror Enterprise Edition DSCLI-managed PPRC configuration:
1. Enter smit hacmp
2. In SMIT, select Extended Configuration > Extended Verification and Synchronization and press Enter. Since the cluster is inactive, the following options appear. Make sure to select Verify in the first field (not both; you do not synchronize the configuration yet):
Verify, Synchronize or Both: Select Verify only.
Automatically correct errors found during verification?: No is the default. If you select Yes, PowerHA SystemMirror performs corrective actions without prompting you to perform any action. If you select Interactively, during verification you will be prompted when PowerHA SystemMirror finds a problem it can correct related to the following, for example:
v Importing a volume group
v Exporting and re-importing shared volume groups (mount points and filesystems issues)
You then choose to have the action taken or not. For more information, see Verifying and synchronizing a PowerHA SystemMirror cluster in the Administration Guide.
Force synchronization if verification fails?: No is the default. If you select Yes, cluster verification runs but verification errors are ignored and the cluster is synchronized. Use the default.
Verify changes only?: No is the default. (Run the full check on resource and topology configuration.) Use the default.
Logging: Standard is the default. You can also select Verbose. Verification messages are logged to /var/hacmp/clverify/clverify.log.
3. Press Enter. The verification output appears in the SMIT Command Status window. 4. If any error messages appear, make the necessary changes and run the verification procedure again. You may see Warnings if the configuration has a limitation on its availability; for example, only one interface per node per network is configured. Related information Verifying and synchronizing an HACMP cluster Synchronizing the cluster: Typically, you synchronize the PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration with PowerHA SystemMirror cluster configuration. To synchronize the PowerHA SystemMirror Enterprise Edition DSCLI-managed PPRC configuration: 1. Enter smit hacmp 2. In SMIT, select Extended Configuration > Extended Verification and Synchronization and press Enter. 3. The PowerHA SystemMirror Verification and Synchronization panel appears. Select either both or Synchronize in the first field and press Enter. The cluster is synchronized. Starting the cluster: Verification runs automatically at cluster startup unless you turn this option off. After completing the steps above that set volume groups to show up only as vpaths, cluster verification will fail. To avoid this issue, set the ignore verification errors? field to true on the Start Cluster Services SMIT panel. 1. Enter the fastpath smit cl_admin 2. In SMIT, select Manage HACMP Services > Start Cluster Services. 3. Make the selections for the fields on this panel, setting the field Ignore verification errors? to true. 4. Press Enter to start cluster services. PowerHA SystemMirror will start up and manage the PPRC resources. See Starting and stopping cluster services in the Administration Guide for more details about starting a cluster.
Related information
Starting and stopping cluster services
Changing the configuration for sites:
Use SMIT to change the configuration for PPRC replicated resources.
Note: Changing resource configuration requires that PowerHA SystemMirror services be stopped on all nodes at both sites in the cluster.
Configuration changes you make to any of the following components affect the others listed:
v Sites
v PPRC replicated resources
v Volumes
v Resource groups.
After you make a configuration change, verify and synchronize the configuration.
To change site configuration in PowerHA SystemMirror:
1. Enter smit hacmp
2. In SMIT, select Extended Configuration > Extended Topology Configuration > Configure HACMP Sites > Change/Show a Site and press Enter.
For information about field values, see the section Changing the configuration of DSCLI-managed PPRC replicated resources.
Related tasks
Changing the configuration of DSCLI-managed PPRC replicated resources
You can change the configuration of DSCLI-managed PPRC replicated resources using the SMIT panel.
Changing the configuration of DSCLI-managed PPRC replicated resources:
You can change the configuration of DSCLI-managed PPRC replicated resources using the SMIT panel.
To change or remove the configuration for DSCLI-Managed PPRC replicated resources:
1. Enter smit hacmp
2. In SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Configure PPRC Replicated Resources and press Enter. From this panel, select:
Configure Copy Services Server: For information about field values, see the section Configuring Copy Services Server.
Configure DS ESS Disk Subsystem: For information about field values, see the section Defining the DS ESS Disk Subsystems to PowerHA SystemMirror.
Configure a DSCLI-Managed PPRC Replicated Resource: For information about field values, see the section Adding a DSCLI-Managed PPRC Replicated Resource.
3. After selecting a configuration option, select the Change/Show option for the value you want to change, or the Remove option for the value you want to remove.
Related tasks
Configuring Copy Services Server on page 57
Configure the Copy Services Server using the SMIT panel.
Defining the DS ESS Disk Subsystems to PowerHA SystemMirror on page 58
You define the DS ESS subsystems included in the sites that support PowerHA SystemMirror Enterprise Edition for Metro Mirror to enable PowerHA SystemMirror to process fallovers for PPRC replicated resources.
Adding a DSCLI-managed PPRC replicated resource on page 59
You can add a DSCLI-managed PPRC replicated resource to your configuration.
Configuring PPRC consistency groups with PowerHA SystemMirror Enterprise Edition for Metro Mirror with DSCLI management:
These topics describe the planning, installation, and configuration tasks for maintaining the consistency of disk volumes as PPRC consistency groups within PowerHA SystemMirror resource groups.
Related reference
Installation prerequisites for PowerHA SystemMirror Enterprise Edition Metro Mirror on page 11
Before installing PowerHA SystemMirror Enterprise Edition Metro Mirror, be sure to have the necessary base PowerHA SystemMirror filesets installed.
Overview of consistency groups:
When applications have one write that is dependent on the completion of another write, these applications are said to have dependent writes. Using dependent writes, these applications can manage the consistency of their data, so that a consistent state of the application data on disk is maintained if a failure occurs in the host machine, software, or the storage subsystem. Common examples of application dependent writes are databases and their associated log files. Database data sets are related, with values and pointers from indexes to data. Databases have pointers inside the data sets, the catalog and directory data sets, and in the logs. Therefore, data integrity must always be kept across these components of the database.
In disaster situations, it is unlikely that the entire complex will fail at the same moment. Failures tend to be intermittent and gradual, and disaster can occur over many seconds, even minutes. Because some data might have been processed, and other data lost in this transition, data integrity on the secondary volumes is exposed. The mirrored data at the recovery site must be managed so that data consistency across all the volumes is preserved during an intermittent or gradual failure. Consider, for example, a situation where a fire starts in the data center. As the fire spreads, perhaps the adapters or the connectivity that is responsible for mirroring some of the data is damaged. If the storage system is able to continue operating, some transactions can continue mirroring while others cannot. This situation becomes a serious issue where dependent writes are concerned.
To maintain the consistency of data across multiple disk volumes at a backup location, the IBM TotalStorage disk subsystem Peer-to-Peer Remote Copy function supports the concept of a PPRC consistency group. Disk volumes in a PPRC relationship that are configured into a PPRC Consistency Group are maintained to ensure that a group of dependent updates made to the disk volumes at the primary location are made together as a unit on the disk volumes at the backup location to maintain data consistency. The PPRC Consistency Group attribute changes the behavior of volume pairs when an error occurs that affects any of the volumes in the group.
Without the PPRC Consistency Group option, the DSS causes the volumes on which the error is detected to enter a suspended state, which means that PPRC mirroring is suspended; updates to those volumes are still allowed. If the PPRC Consistency Group option is activated, the volume becomes suspended and additionally enters a "long busy" state, in which updates are not possible.
Rather than depending solely on the affected volume remaining in the long busy state, PowerHA SystemMirror Enterprise Edition initiates the freeze function to quickly suspend all mirroring between all volume pairs within the Consistency Group that is protected by the freeze. As a result, if you place all mirrored pairs into the same Consistency Group, the consistency of dependent writes is protected on all volumes, LSSs, and disk systems: if some disk volumes at the backup location cannot be updated, all are barred from update.

PowerHA SystemMirror Enterprise Edition for Metro Mirror provides support for the configuration of disk volumes as PPRC consistency groups within PowerHA SystemMirror resource groups. For more information on PPRC consistency groups, refer to:
v IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services in Open Environments, ITSO Redbook SG24-5757 (section 4.6)

Planning for PPRC consistency groups:

You should be familiar with the planning tasks for PowerHA SystemMirror. For information about planning PowerHA SystemMirror clusters, see the Planning Guide.

To use PowerHA SystemMirror Enterprise Edition for Metro Mirror with TotalStorage disk volumes with the PPRC consistency group option, you define two PowerHA SystemMirror sites, each comprised of a number of AIX servers attached to a group of PPRC-managed disk subsystems spread across both sites. PowerHA SystemMirror PPRC replicated resources are defined in the usual way to contain the ESS/DS disk volume pairs. Volume Groups are built on top of the PPRC-managed disk volumes and added to PowerHA SystemMirror Resource Groups. In addition, PPRC consistency groups are defined to consist of Logical Subsystem (LSS) disk pairs and the corresponding PPRC path information.

All PPRC replicated resources that are part of a consistency group are required to be part of the same PowerHA SystemMirror resource group. This means that the PPRC volumes in a consistency group cannot span more than one PowerHA SystemMirror resource group. However, a PowerHA SystemMirror resource group can contain one or more PPRC consistency groups. PowerHA SystemMirror maintains a common state for all consistency groups in a resource group: either all of them replicate data to the backup site, or none of them do. Cluster administrators will most likely find it most convenient to configure all the volume groups in a resource group as part of the same consistency group. Multiple consistency groups in the same resource group are appropriate when the configuration of paths and LSSes does not allow everything to be placed in the same consistency group. See the explanation associated with Figure 2 in Sample consistency group configuration for an example of this.

The following prerequisites are required:
v PowerHA SystemMirror sites have been planned.
v Basic DSCLI support has been completely configured. Refer to the appropriate documentation on how to install and configure each.
v DSCLI management in a PowerHA SystemMirror cluster has been planned and the following tasks are completed:
  Identify the Copy Services Servers (CSS) to be used.
  Identify the disk subsystems to be used in the cluster.
  Identify the vpaths (for configurations using SDD) to be used in the configuration, including the volume IDs for each that correspond to the storage unit and LUN.
  Identify the PPRC Replicated Resources to be used.
  Identify the Port Pairs to be used for PPRC Paths.
  Identify Volume Pairs (LUNs).
  Identify the volume groups to be managed by PPRC Replicated Resources.
v Plan PowerHA SystemMirror Resource Groups.
v Identify the consistency groups that will be managed in a Resource Group.

Related reference
Sample consistency group configuration
These illustrations display a sample consistency group configuration.
Related information
Planning guide

Planning a resource group for PPRC consistency group:

You need to plan your resource groups. In addition to basic resource group planning, it is assumed that resource group attributes, as well as the inter-site management policies for site support, have been planned. Follow the guidelines in Planning resource groups in the Planning Guide.

You must also identify the following:
v The PPRC replicated resources to be added to the Resource Group.
v The Consistency Group that will be managed by the Resource Group.

Note: The freeze action is directed to all Consistency Groups in a Resource Group. Therefore, you should ensure that all consistency groups in a resource group have dependencies. Since the PPRC "freeze" action is performed at a resource group level, all disk resources in the same resource group are frozen if there is total loss of the PPRC link between one or two PPRC pairs in a resource group. This ensures that the data of dependent applications is not mirrored to the remote site when the data of one or two applications in the same resource group cannot be mirrored to the remote site.

If an application depends on more than one volume group, the volume groups might be defined to be part of the same consistency group or different consistency groups. However, it is recommended that they both be added to the same PowerHA SystemMirror Resource Group.

Related information
Planning resource groups

Sample consistency group configuration:

These illustrations display a sample consistency group configuration.

Figure 1 illustrates a four-node, two-site cluster on which PowerHA SystemMirror Enterprise Edition for Metro Mirror has been configured to provide a high availability disaster recovery solution. Site A and Site B are two geographically separated sites. Each site includes two nodes and a disk storage subsystem (labeled DSS A and DSS B). The DSS units are connected by redundant FCP links and utilize PPRC to replicate data from one storage unit to the other.
Figure 2 shows a cross-section of two disk subsystems, DSS A and DSS B, located at two geographically separated sites, siteA and siteB. As an example, three logical PPRC paths, Path1, Path2 and Path3, have been defined between the logical subsystems LSS1 on DSS A and LSS1 on DSS B.

A typical configuration, using a database as an example, would require the definition of two volume groups: one for the application data and the other for the log data. In this configuration, it would be required that both the application data (datavg1) and the log data (logvg1) be managed together in a consistency group. For example, a Volume Group (datavg1) that consists of four LUNs, carved from LSS1 on DSS A, and a corresponding target Volume Group datavg1 with four LUNs from LSS1 on DSS B, has been defined. A set of four PPRC replicated resources (datapair1, datapair2, datapair3 and datapair4) needs to be defined over the four LUN pairs. Similarly, logvg1 has been defined with source LUNs from LSS1 on DSS A and target LUNs from LSS1 on DSS B, and the corresponding PPRC pair logpair1 is defined over the LUN pairs.

To manage all of these PPRC pairs together in a consistency group, a consistency group, CG1, is defined for the LSS pair LSS1 on DSS A and LSS1 on DSS B over the PPRC path Path1. Using Path1 as the path for this consistency group, whenever the path is established with the consistency group option enabled, all the PPRC pairs that are established between the source volumes of datavg1 and the target volumes of datavg1 are automatically put into a consistency group and associated with Path1. Therefore, all PPRC relationships in datavg1 and logvg1 are part of the same PPRC consistency group.
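As a point of reference, the path and pairs that make up CG1 correspond to standard DSCLI operations that PowerHA SystemMirror Enterprise Edition performs on your behalf. The following is a minimal sketch only; the HMC address, credentials, storage image IDs, WWNN, LSS numbers, port IDs, and volume IDs are placeholders that do not come from this example and must be replaced with values from your own configuration:

   # create the PPRC path for the LSS pair with the consistency group option enabled
   /opt/ibm/dscli/dscli -hmc1 <css_ip> -user <userid> -passwd <password> \
       mkpprcpath -dev IBM.2107-75XXXXX -remotedev IBM.2107-75YYYYY \
       -remotewwnn <target_wwnn> -srclss 01 -tgtlss 01 -consistgrp I0101:I0001

   # create the Metro Mirror (PPRC) pairs for the datavg1 and logvg1 LUNs over that path
   /opt/ibm/dscli/dscli -hmc1 <css_ip> -user <userid> -passwd <password> \
       mkpprc -dev IBM.2107-75XXXXX -remotedev IBM.2107-75YYYYY \
       -type mmir 0100:0100 0101:0101 0102:0102 0103:0103 0104:0104

Because the path is established with the -consistgrp option, all PPRC pairs created over it are treated as a single consistency group, which is the behavior PowerHA SystemMirror relies on when it issues the freeze operation (the DSCLI freezepprc command) for the LSS pair.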
On the other hand, CG2 and CG3 are created with the same source LSS (LSS2) but different target LSSes. The application data (datavg2) and log data (logvg2) both have their source volumes in the same LSS but have their target volumes in different LSSes. While datavg2 is associated with Path2, logvg2 is associated with Path3. They are, therefore, not part of the same PPRC consistency group. If an application depends on datavg2 and logvg2, they might be defined to be part of the same consistency group or different consistency groups. However, it is recommended that they both be added to the same PowerHA SystemMirror Resource Group, because the PPRC "freeze" action is performed at a resource group level. All disk resources in the same resource group are frozen if there is total loss of the PPRC link between any of the PPRC pairs in a resource group. This ensures that the data of dependent applications is not mirrored to the remote site when the data of one or two applications in the same resource group cannot be mirrored to the remote site.

The following is an example configuration of the consistency group CG1 described above:
Add a PPRC Consistency Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

* PPRC Consistency Group Name
* LSS Pair
* Primary - Secondary Port Pair IDs
* Secondary - Primary Port Pair IDs
* Resource Group
F4=List F8=Image
Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror consistency groups filesets: If you have not already done so, install the filesets according to instructions listed in Chapter 3: Installing PowerHA SystemMirror Enterprise Edition for Metro Mirror.
cluster.es.cgpprc.cmds    PowerHA SystemMirror Enterprise Edition for Metro Mirror Consistency Group commands
cluster.es.cgpprc.rte     PowerHA SystemMirror Enterprise Edition for Metro Mirror Consistency Group runtime commands
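If you prefer to install from the command line, the same filesets can be installed with installp. This is a sketch that assumes the installation media is mounted at /dev/cd0; adjust the device or directory for your environment:

   # install the consistency group filesets (apply, expand file systems, accept license, install prerequisites)
   installp -aXYgd /dev/cd0 cluster.es.cgpprc.cmds cluster.es.cgpprc.rte

   # confirm the filesets are installed
   lslpp -l "cluster.es.cgpprc.*"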
Configuring PowerHA SystemMirror Enterprise Edition for Metro Mirror consistency groups: Configuration of Consistency Groups with PowerHA SystemMirror Enterprise Edition for Metro Mirror with DSCLI management is accomplished in three steps. The disk storage configuration steps are accomplished through either WebSM or line mode commands. The PowerHA SystemMirror configuration steps are accomplished through a series of SMIT panels. Configure the disk storage units to issue SNMP traps: The storage units must be configured to send traps to one or more of the nodes configured to receive them. You do this by using WebSM. The precise method is hardware dependent; consult the publications associated with the storage subsystem. Configure PowerHA SystemMirror to receive and handle SNMP traps: Before using the following procedure to enable SNMP Traps, you should ensure that any other consumer of SNMP traps - such as Netview or Tivoli or other network management software - is not already running. Otherwise the PowerHA SystemMirror Cluster Information Daemon (clinfo) is not able to receive traps. Conversely, if PowerHA SystemMirror is configured to receive SNMP traps, no other management software is able to receive them. PowerHA SystemMirror supports the receiving and handling of the following SNMP trap messages: Generic Type = 6 Specific Types:
v link degraded
v link down
v link up
v LSS pair-consistency group error
v session consistency group error
v LSS is suspended
PowerHA SystemMirror tests each SNMP trap received to ensure: v It is from a valid storage unit (check by storage unit ID). v It is from a storage unit that has been defined to PowerHA SystemMirror. v It is from an LSS that has been previously configured into a PPRC Resource Group. If PowerHA SystemMirror receives an SNMP trap that does not meet the above criteria, it is logged, but otherwise ignored. To enable SNMP Traps in PowerHA SystemMirror you must start the Cluster Information Daemon with SNMP Traps enabled, which provides Consistency Groups support. (If clinfo is already running, you must first stop it before using the following steps to restart it.) You do this on the Start Cluster Services SMIT panel (fast path: clstart):
Start Cluster Services
Type or select values in entry fields. Press Enter AFTER making all desired changes.
                                                             [Entry Fields]
* Start now, on system restart or both                        now             +
  Start Cluster Services on these nodes                       [A2]            +
* Manage Resource Groups                                      Automatically   +
  BROADCAST message at startup?                               true            +
  Startup Cluster Information Daemon?                         false           +
  Ignore verification errors?                                 false           +
  Automatically correct errors found during cluster start?    Interactively   +
F4=List F8=Image
Place the cursor on the Startup Cluster Information Daemon? field and press F4. The following options are displayed:
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields] * Start now, on system restart or both now + Start Cluster Services on these nodes [A2] + BROADCAST message at startup? true + Startup Cluster Information Daemon? false + Ignore verification errors? false + ----------------------------------------------------------------| Startup Cluster Information Daemon? | | | | Move cursor to desired item and press Enter. | | | | false | | true | | true with consistency group support | | | | F1=Help F2=Refresh F3=Cancel | F1| F8=Image F10=Exit Enter=Do | F5| /=Find n=Find Next | F9 ----------------------------------------------------------------
The options are:
v false - do not start the Cluster Information Daemon
v true - start the Cluster Information Daemon
v true with consistency group support - start the Cluster Information Daemon with SNMP traps enabled

Select true with consistency group support and press Enter.

Configure PowerHA SystemMirror consistency groups:

Before configuring a Consistency Group, you should have performed all the steps required to configure the PPRC Replicated Resources that will be added to the consistency groups, and the PPRC Replicated Resources should also have been added to a Resource Group. After the PPRC replicated resources have been defined and added to a Resource Group, you can add them to a PPRC Consistency Group.

To configure a PPRC Consistency Group:
1. Enter smit hacmp
2. In SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure PPRC-Managed Replicated Resources > DSCLI-Managed PPRC Replicated Resource and press Enter. (The PPRC Consistency Groups Configuration SMIT panel can also be reached directly with the SMIT fast path: def_consistgrp.)
3. The DSCLI-Managed PPRC Replicated Resource Configuration menu is displayed:
Copy Services Server Configuration
DS ESS Disk Subsystem Configuration
DSCLI-Managed PPRC Replicated Resource Configuration
PPRC Consistency Groups Configuration
Verify PPRC Configuration
F1=Help      F2=Refresh     F3=Cancel     F8=Image
F9=Shell     F10=Exit       Enter=Do
4. Select the PPRC Consistency Groups Configuration field to bring up the PPRC Consistency Groups Configuration SMIT panel (fast path: def_consistgrp):
PPRC Consistency Groups Configuration
Add a PPRC Consistency Group
Change/Show a PPRC Consistency Group
Remove a PPRC Consistency Group
F1=Help      F2=Refresh     F3=Cancel     F8=Image
F9=Shell     F10=Exit       Enter=Do
5. Select Add a PPRC Consistency Group to bring up the Add a PPRC Consistency Group SMIT panel (fast path: claddconsistgrp.cmdhdr):
Add a PPRC Consistency Group
Type or select values in entry fields. Press Enter AFTER making all desired changes.
                                                [Entry Fields]
* PPRC Consistency Group Name                   []
* LSS Pair                                      []
* Primary - Secondary Port Pair IDs             []
* Secondary - Primary Port Pair IDs             []
* Resource Group                                []
F4=List F8=Image
6. Enter the field values as follows:

PPRC Consistency Group Name
Enter the name of the PPRC consistency group. A PPRC Consistency Group name is an alphanumeric string, up to 32 characters long. The name can be freely chosen and has no other restrictions.

LSS Pair
Enter the IDs of the LSS pair. This is the set of LSSs associated with this PPRC Consistency Group, separated by a space. The first ID in the list is the LSS in the primary ESS/DSS, and the second is the LSS in the secondary ESS/DSS. Entries are of the format lss_ID@ess_name, where lss_ID is the LSS ID and ess_name is the name of the ESS/DSS Disk Subsystem containing this LSS, as listed in the disk subsystem definition. In SMIT, you can use F4 to list the available LSSs on each disk subsystem. The selection screen is there only to aid you with this information; you must verify that any selections made through the F4 pick list are listed in the correct order and in the proper format.

Primary - Secondary Port Pair IDs
Enter the port pair IDs for establishing paths between the Primary and Secondary LSS. The primary port ID is listed first, followed by the secondary, with the string "->" between them. You can list additional port pairs as a space-separated list. These should match the port pairs in the resource group definition.

Secondary - Primary Port Pair IDs
Enter the port pair IDs for establishing paths between the Secondary and Primary LSS. The secondary port ID is listed first, followed by the primary, with the string "->" between them. You can list additional port pairs as a space-separated list. These should match the port pairs in the resource group definition.

Resource Group
Select the Resource Group that has PPRC replicated resources associated with this Consistency Group. This Resource Group must be defined and contain the PPRC replicated resources defined from this LSS pair. The resource group must be selected through the F4 pick list in SMIT.
7. Press Enter to apply the configuration settings.

Changing the configuration of a PPRC consistency group:

You can change the configuration of a PPRC consistency group using SMIT.

To change or remove the configuration of an existing PPRC Consistency Group:
1. From the command line, enter smit hacmp.
2. In SMIT, select Cluster Applications and Resources Configuration > Resources Configuration > Configure PPRC-Managed Replicated Resources > DSCLI-Managed PPRC Replicated Resource > PPRC Consistency Groups Configuration > Change/Show a PPRC Consistency Group and press Enter.
3. Select the PPRC consistency group you want to change or view.
PPRC Consistency Groups Configuration
Add a PPRC Consistency Group
Change/Show a PPRC Consistency Group
Remove a PPRC Consistency Group
-------------------------------------------------------- | Select the PPRC Consistency Group to show or change | | | | Move cursor to desired item and press Enter. | | | | CG1 | | CG2 | | | | F1=Help F2=Refresh F3=Cancel | | F8=Image F10=Exit Enter=Do | F1 | /=Find n=Find Next | F9 ---------------------------------------------------------
4. Position the cursor to the consistency group to show or change and then press Enter to bring up the following SMIT panel:
Change / Show a PPRC Consistency Group

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

* PPRC Consistency Group Name
  New PPRC Consistency Group Name
* LSS Pair
* Primary - Secondary Port Pair IDs
* Secondary - Primary Port Pair IDs
  Resource Group
F4=List F8=Image
5. Make the desired changes and then press Enter. 6. To remove a previously configured consistency group, select Remove a PPRC Consistency Group to bring up an inset panel from which to select the consistency group to remove:
PPRC Consistency Groups Configuration
Add a PPRC Consistency Group
Change/Show a PPRC Consistency Group
Remove a PPRC Consistency Group
---------------------------------------------------------------| Remove a PPRC Consistency Group | | | | Move cursor to desired item and press Enter. | | | | CG1 | | CG2 | | | | F1=Help F2=Refresh F3=Cancel | | F8=Image F10=Exit Enter=Do | F1 | /=Find n=Find Next | F9 ---------------------------------------------------------------
7. Position the cursor to the consistency group to remove and then press Enter to bring up the following confirmation:
Add a PPRC Consistency Group
Change/Show a PPRC Consistency Group
Remove a PPRC Consistency Group
---------------------------------------------------------------| ARE YOU SURE? | | | | Continuing may delete information you may want | | to keep. This is your last chance to stop | | before continuing. | | Press Enter to continue. | | Press Cancel to return to the application. | | | | F1=Help F2=Refresh F3=Cancel | F1 | F8=Image F10=Exit Enter=Do | F9 -----------------------------------------------------------------
8. Press Enter to confirm the removal of the PPRC consistency group.

Verifying the PowerHA SystemMirror Enterprise Edition for Metro Mirror consistency groups configuration:

The configuration of SNMP Traps and Consistency Groups is verified during PowerHA SystemMirror cluster verification. You can also verify the Consistency Group configuration through the Verify PPRC Configuration option in the main DSCLI-Managed PPRC Replicated Resource Configuration SMIT panel (fast path: pprc_ds), which appears as follows:
Verify PPRC Configuration
Type or select values in entry fields. Press Enter AFTER making all desired changes.
F4=List F8=Image
Press Enter to begin PPRC configuration verification. Verification first checks the Copy Services Server, DS ESS Disk Subsystem, and PPRC Replicated Resources configuration. If that verification succeeds and a Consistency Group configuration exists, verification of the Consistency Groups takes place. Verification checks the following for each Consistency Group definition:
v LSS Pair: for both the source and target (LSS@Disk_Subsys), verify that the LSS exists on the DS ESS Disk Subsystem.
v Resource Group: verify that the specified Resource Group is a valid resource.
v Port Pair IDs: for both the primary and secondary list of Port Pair IDs, verify that each port exists on the respective primary and secondary disk subsystems.

Troubleshooting PPRC consistency groups:
Use these tips to troubleshoot your PPRC consistency groups. The following is an example of an Error Log entry made for an SNMP Trap:
LABEL:           OPMSG
IDENTIFIER:      AA8AB241

Date/Time:       Mon Jun 25 12:20:04 2007
Sequence Number: 16480
Machine Id:      0025A45C4C00
Node Id:         regaa07
Class:           O
Type:            TEMP
Resource Name:   SNMP_TRAP_NOTIF

Description
OPERATOR NOTIFICATION

User Causes
ERRLOGGER COMMAND

Recommended Actions
REVIEW DETAILED DATA

Detail Data
MESSAGE FROM ERRLOGGER COMMAND
2007/06/25 10:17:53 PDT  PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2107-922 75-16231 00
SEC:  IBM 1750-511 13-AAY4A 00
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0101 XXXXXX 0001 XXXXXX OK
2007/06/25 13:10:17 EDT  PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2105-800 13-22012 10
SEC:  XXX 2105-XXX XX-16231 FD
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0004 XXXXXX 0101 XXXXXX OK
The following steps should be followed when a PowerHA SystemMirror cluster does not appear to exercise consistency group operations:
1. Check that the storage unit HMC is configured to send the SNMP traps to the IP address(es) of the PowerHA SystemMirror cluster node(s) that processes them. Normal best practice is to have the HMC configured to send SNMP traps to all PowerHA SystemMirror nodes. If that is not possible, at a minimum, it should send SNMP traps to two nodes, one at each site.
2. Check that the IP address of the copy services server that controls the storage subsystems is configured to PowerHA SystemMirror. This can be checked by running smitty pprc_def > DSCLI-Managed PPRC Replicated Resource Configuration > Copy Services Server Configuration > Change / Show a Copy Services Server.
3. Before starting clinfo with consistency group support, verify that there is no other consumer of SNMP traps running on the PowerHA SystemMirror nodes that process them. Run the following command:
netstat -an | grep 162
There should not be any output from the command. If there is, it will look like the following:
udp40 0 *.162 *.*
Programs cannot share access to port 162. 4. Try pinging the nodes that are to receive the SNMP traps from the storage subsystem HMC, using the IP address configured in that HMC for the SNMP trap destination. 5. Try pinging the storage subsystem HMC, using the IP address defined for the copy services server, from the PowerHA SystemMirror cluster nodes that are to process SNMP traps.
6. Check the error log using errpt -a and see if there are any SNMP trap entries from the storage subsystem. An example of such an entry is:
LABEL:           OPMSG
IDENTIFIER:      AA8AB241

Date/Time:       Mon Jun 25 12:20:04 2007
Sequence Number: 16480
Machine Id:      0025A45C4C00
Node Id:         regaa07
Class:           O
Type:            TEMP
Resource Name:   SNMP_TRAP_NOTIF

Description
OPERATOR NOTIFICATION

User Causes
ERRLOGGER COMMAND

Recommended Actions
REVIEW DETAILED DATA

Detail Data
MESSAGE FROM ERRLOGGER COMMAND
2007/06/25 10:17:53 PDT  PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2107-922 75-16231 00
SEC:  IBM 1750-511 13-AAY4A 00
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0101 XXXXXX 0001 XXXXXX OK
2007/06/25 13:10:17 EDT  PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2105-800 13-22012 10
SEC:  XXX 2105-XXX XX-16231 FD
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0004 XXXXXX 0101 XXXXXX OK
7. Send a trap from one PowerHA SystemMirror cluster node to another using the AIX snmptrap command. This should log an entry in the error log (see step 6) if the node can receive SNMP traps. In this example:
9.3.18.126 -> target node for the trap
9.3.18.240 -> IP address of the storage unit HMC
Change only the addresses listed above. Leave all the other data the same.
snmptrap -v 1 -c public 9.3.18.126 1.3.6.1.4.1.2.6.130 9.3.18.240 6 202 1 1.3.6.1.4.1.2.6.130.3.2 s "2007/06/27 06:25:34 CDT Primary PPRC Devices on LSSSuspended Due to Error UNIT: Mnf Type-Mod SerialNm LS LD SR PRI: IBM 1750-511 13-AAY4A 06 02 04 SEC: IBM 2107-922 75-16231 07 02 00 Start: 2007/06/19 11:25:42 CDT PRI Dev Flags (1bit/dev, 1=Suspended): 1000000000000000000000000000000000000"
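If the target node processed the test trap, an operator notification entry is written to its error log. Assuming the OPMSG label shown in the example entries above, a quick way to check for it is:

   errpt -a -J OPMSG | more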
Maintaining and troubleshooting PowerHA SystemMirror Enterprise Edition for Metro Mirror
These topics present general information for maintaining and troubleshooting a PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration. Issues specific to Direct management (ESS CLI) or to DSCLI management support are noted as such. For SVC-PPRC Management troubleshooting information, see Troubleshooting PowerHA SystemMirror Enterprise Edition for Metro Mirror for SVC.
Related reference
Troubleshooting PowerHA SystemMirror Enterprise Edition for Metro Mirror for SVC on page 118
These topics provide information that might help you with troubleshooting SVC PPRC clusters.

Logging messages:

PowerHA SystemMirror Enterprise Edition for Metro Mirror uses the standard logging facilities for PowerHA SystemMirror. For information about logging in PowerHA SystemMirror, see the PowerHA SystemMirror Administration Guide.

Related information
Administration guide

Maintaining PowerHA SystemMirror Enterprise Edition for Metro Mirror:

This section describes situations you might encounter in a running cluster after synchronizing your PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration. This information applies to Direct management.

Using C-SPOC for configuration changes:

You can use C-SPOC to add, change, and delete logical volumes dynamically on nodes connected to the source volumes for a PPRC pair. However, these operations cannot involve allocating additional storage that requires modification of the PPRC pairs. If you make configuration changes that allocate new storage or create logical units, first stop the associated PPRC volume pairs. From C-SPOC, you cannot modify the configuration on nodes connected to target volumes for a PPRC pair.

Restarting ESS Copy Services Server on the backup CSS:

An ESS system with a microcode level less than vrmf 2.2.x.x does not support dual active Copy Services Servers. If the primary ESS Copy Services Server fails, you manually start the backup Copy Services Server as the active Copy Services Server for the ESS. You do this by using the ESS Web Interface. For information about starting the Copy Services Server, see the ESS Web Interface User's Guide. For information about setting up a Copy Services Server for a PowerHA SystemMirror Enterprise Edition for Metro Mirror configuration, see Planning Copy Services Server on ESS.

Related reference
Planning Copy Services Server on ESS on page 21
Considerations for using Copy Services Server differ depending on the PPRC version.
Related information
IBM TotalStorage Enterprise Storage Server: Web Interface User's Guide

Detecting ESS failures:

There is no simple way to detect the failure of an ESS subsystem in a direct management environment.
You can check for the loss of quorum for all the volume groups indicated by an LVM_SA_QUORCLOSE error. These errors are written to the AIX error log. PowerHA SystemMirror checks whether the LVM_SA_QUORCLOSE error appeared in the AIX error log file and, if so, informs the Cluster Manager to selectively move the affected resource group. This PowerHA SystemMirror function is called selective fallover for volume group loss. For more information about selective fallover for volume group loss, see the Administration Guide.

Note: If fallover does not occur, check that the LVM_SA_QUORCLOSE error appeared in the AIX error log. When the AIX error log buffer is full, new entries are discarded until space becomes available in the buffer, and an error log entry informs you of this problem. Certain storage subsystems have been found to generate large numbers of error log entries when they are experiencing problems. If the error log is not large enough, these error log entries can overrun the error log, causing the LVM_SA_QUORCLOSE entry to be lost. To avoid this problem, configure the error log to have adequate space using the following command:
errdemon -B 1048576 -s 10485760
This sets the error log buffer size to 1 MB and the log size to 10 MB.

Related information
Administration guide

Troubleshooting DSCLI-managed clusters:

This section provides information that may help with troubleshooting DSCLI management clusters. When a DSCLI management PowerHA SystemMirror cluster is in a less than optimal functioning state, check the /tmp/hacmp.out file for any obvious issues and then, if necessary, check the states of the PPRC instances and paths that underlie the volume groups to make sure they are write-accessible. Use the commands listed below to discover what the PPRC instance states are, as this will be a major indicator of the health of your volume groups. If you encounter problems with your cluster that are not covered in this section or are not easily remedied, contact your IBM Service Representative.

Common cluster startup problems:

This section discusses some common cluster startup problems.
1. If a resource group will not stabilize on its primary node and site, it is probable that the PPRC Replicated Resource and Resource Group are defined to be primary on different sites. Refer to Planning Primary and Secondary Site Layout for DSCLI-Managed PPRC Replicated Resources in PowerHA SystemMirror Resource Groups. Make sure all the appropriate entries in the PPRC replicated resource are aligned in the correct direction, as described in that section and in the Sample Configuration section.
2. Another reason a resource group may not come ONLINE, may be unstable, or may go into ERROR state is trouble with the PPRC instance or path that the managed volume group and PPRC replicated resource require. See the descriptions below for commands to use to display PPRC instance states and a description of normal working states.
3. Keep in mind the limitation that a volume group needs to have the same volume major number across all cluster nodes. This has been known to cause instability in a resource group when it initially comes online.
4. If the Pri-Sec and Sec-Pri port pairs are not declared correctly, this will lead to errors in creating the initial PPRC paths, which will keep the cluster from coming up correctly. In the /tmp/hacmp.out file, there will be an error message returned from the mkpprcpath call for the PPRC replicated resource associated with the resource group that will not come ONLINE.
5. In certain instances, disk reserves can be left on vpaths or hdisks that DSCLI management is not able to break. These will keep a resource group from being able to come ONLINE, because a disk that has a reserve on it will never be write accessible. See Handy AIX commands below for ideas on how to break disk reserves.

Handy DSCLI commands to know:

Refer to the DSCLI documentation to become familiar with these management commands. Ideally you will not have to use the DSCLI interface except in troubleshooting situations. In that case, however, the following commands are useful (all run through the /opt/ibm/dscli/dscli command):
v lspprc: list the current state of the PPRC instances (see below for valid and common operational states).
v lspprcpath: list the current state of the PPRC paths configured.
v lsavailpprcport: list the available PPRC port pairs between two storage systems.
v failoverpprc, failbackpprc: switch the direction of PPRC instances.
v rmpprc: remove an existing PPRC instance.
v mkpprc: create a new PPRC instance.
v mkpprcpath: create a new PPRC path.
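For example, the following is a sketch of checking PPRC instance states from a cluster node; the HMC address, credentials, storage image IDs, and volume ID range are placeholders for values from your own configuration:

   /opt/ibm/dscli/dscli -hmc1 <hmc_ip> -user <userid> -passwd <password> \
       lspprc -dev IBM.2107-75XXXXX -remotedev IBM.2107-75YYYYY 0100-0104

The State column of the output corresponds to the PPRC instance states described below.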
Refer to the DSCLI documentation for detailed descriptions of these commands, and of others that will allow you to view details of the connected storage systems.

PPRC instance states:

The following PPRC instance states exist during normal operation. The states are described in pairs because, if you poll the DSCLI at the primary site and the secondary site, you will get different state information, which reflects the I/O accessibility of the disk at that site as well as the state of the PPRC instance.

Normal Working State
Primary state: Full Duplex
Secondary state: Target Full Duplex
This is the normal working state for a PPRC instance. In this state, the Primary is accessible for read and write I/O but the Secondary is not accessible for either.

Suspended State
Primary state: Full Duplex
Secondary state: Suspended
In this state, the Primary is accessible for read and write I/O and the Secondary is as well. Data integrity between the two sides of the PPRC instance is not ensured in this state. In the context of PowerHA SystemMirror clusters, this PPRC instance state occurs when there has been a failure on the PowerHA SystemMirror site connected to the primary site, causing a fallover to the secondary site.
Copy State
Primary and secondary state: Copy Pending
In this state, there is a full disk copy in process between one side of the PPRC instance and the other.

Handy AIX commands:

This section discusses some commonly used AIX commands. Refer to the AIX documentation for more details on the following commands.

To discover whether a disk reserve exists on a given disk:
lquerypr -Vh /dev/<disk name>
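If a reserve is found and must be broken before the resource group can come ONLINE, SDD provides options on the same command for releasing or preempting the reserve. The flag shown below is the commonly documented form, but treat it as an assumption and verify it against the SDD documentation for your level before running it against a live disk:

   # preempt (break) a reserve held by another host - verify this flag for your SDD level
   lquerypr -ph /dev/<disk name>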
To list the existing vpath to storage volume ID (LUN + storage ID) mapping:
lsvpcfg
To list the existing vpaths on your system, which will display the volume group mapping:
lspv | grep vpath
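For example, to investigate a single disk (the vpath name vpath12 is a placeholder):

   lquerypr -Vh /dev/vpath12     # check whether a reserve is held on the disk
   lsvpcfg | grep vpath12        # map the vpath to its storage volume ID (LUN + storage ID)
   lspv | grep vpath12           # show the volume group the vpath belongs to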
Planning worksheets
Use the planning worksheets to record information that you need to plan your PowerHA SystemMirror Enterprise Edition for Metro Mirror configurations. General PowerHA SystemMirror planning worksheets: You should print and use the paper planning worksheets from the PDF version of this guide instead of the HTML version. The PDF version has each worksheet aligned to start at the top of a page. You may need more than one copy of some of the worksheets. PowerHA SystemMirror Site Worksheet: Use the PowerHA SystemMirror Site Worksheet to record PowerHA SystemMirror sites to support PPRC replicated resources.
Site 1                                   Site 2
Site Nodes                               Site Nodes
n1 n2                                    n3 n4
Resource Groups for PPRC Replicated Resources Worksheet: Use the Resource Groups for PPRC Replicated Resources Worksheet to record resource groups that contain PPRC replicated resources.
Resource Group Name Management Policies: Startup Fallover Fallback Intersite Management Policy Nodes (in prioritized order as they will appear in the nodelist)
P or S P or S P or S P or S P or S P or S
* Circle one: P-primary, or S-secondary Sample Resource Groups for PPRC Replicated Resources Worksheet
Resource Group Name Management Policies: Startup Fallover Fallback Intersite Management Policy Nodes (in prioritized order as they will appear in the nodelist) Online on home node only Fallover to next priority node in the list Fallback to higher priority node in the list Prefer primary site n3 n4 n1 n2 rgpprc1
Related tasks Completing the PPRC resource group worksheet on page 10 Complete a Resource Groups for PPRC Replicated Resources Worksheet for each resource group that contains PPRC replicated resources. Planning Worksheets for Direct Management (ESS CLI): You should print and use the paper planning worksheets from the PDF version of this guide instead of the HTML version. The PDF version has each worksheet aligned to start at the top of a page. You may need more than one copy of some of the worksheets. PPRC-Mirrored Volumes Worksheet: Use the PPRC-Mirrored Volumes Worksheet to record PPRC pairs and the associated PPRC replicated resources
P/S *
Volumes Volume ID
Volumes Volume ID
Logical Subsystem ID
*P/S - Specify primary or secondary
Note: Enter the volume information so that the information for each volume in a PPRC pair appears on the same line.
Sample PPRC-Mirrored Volumes Worksheet
Site Name Site_1 ESS Serial Number 22011 PPRC Replicated Resource Name pprc4.1 pprc4.2 pprc5.1 pprc5.2 Site Name Site_2 ESS Serial Number 22012
P/S * P P P P
Logical Subsystem ID 16 16 16 16
User-Specific Tasks Worksheets: Use the User-Specific Tasks Worksheet to record User-specific task names (not needed for most configurations). Use this worksheet only if you use task names which are different from the recommended ones.
Task Name
Description
Sample User-Specific Tasks Worksheet Use this worksheet only if you use task names which are different from the recommended ones.
Task Name vg4EstPtPS vg4DelPtPSF vg4DelPtPS vg4EstPtSP vg4DelPtSP vg4DelPtSPF vg4EstPrPSFCex vg4EstPrPSNC vg4EstPrPSSC vg4EstPrPSFO vg4EstPrPSFB vg4SusPrPSP vg4SusPrPSS vg4TerPrPSP vg4TerPrPSS vg4EstPrSPFC vg4EstPrSPNC vg4EstPrSPSC vg4EstPrSPFO vg4EstPrSPFB vg4TerPrSPS Description This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. Example task to establish a PPRC pair from the source volume on ESS 22011 to the target volume on ESS 22012. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name.
Description This name is the same as the recommended name. This name is the same as the recommended name. This name is the same as the recommended name.
ESS Disk Subsystem Worksheet: Use the ESS Disk Subsystem Worksheet to record information about ESS disk subsystems that contains PPRC pairs
Site Name ESS Disk Subsystem Name ESS IP Address ESS User ID ESS Password Site Name
Planning Worksheets for DSCLI management: You should print and use the paper planning worksheets from the PDF version of this guide instead of the HTML version. The PDF version has each worksheet aligned to start at the top of a page. You may need more than one copy of some of the worksheets. Copy Services Server Worksheet: Use the Copy Services Server Worksheet to record Copy Services Server information to support PPRC DSCLI Management
CSS (PowerHA SystemMirror) Site Name CSS Subsystem name CSS Type CLI IP Address CSS User ID CSS password . DSCLI DSCLI CSS (PowerHA SystemMirror) Site Name
Site A CSS subsystem name CLI Type CSS IP Address CSS User ID CSS password m222h DSCLI 9.112.112.2 .xyz 123xyz
PPRC-Mirrored Volumes Worksheet: Use the PPRC-Mirrored Volumes Worksheet to record PPRC pairs and the associated PPRC replicated resources
Site Name ESS Serial Number PPRC Replicated Resource Name Site Name ESS Serial Number
P/S *
Volumes Volume ID
Volumes Volume ID
Logical Subsystem ID
*P/S - Specify primary or secondary Note: Enter the volume information so that the information for each volume in a PPRC pair appears on the same line. Sample PPRC-Mirrored Volumes Worksheet
Site Name Site1 ESS Serial Number 22011 PPRC Replicated Resource Name pprc4.1 pprc4.2 pprc5.1 Site Name Site2 ESS Serial Number 22012
P/S * P P P
Logical Subsystem ID 16 16 16
Site Name Site1 ESS Serial Number 22011 PPRC Replicated Resource Name pprc5.2
P/S * P
Logical Subsystem ID 16
DS ESS Disk Subsystem Worksheet: Use the DS ESS Disk Subsystem Worksheet to record information about the ESS disk subsystems that contain PPRC pairs
Site Name ESS Disk Subsystem Name ESS Cluster 1 IP Address ESS Cluster 2 IP Address ESS User ID ESS Password ESS Storage ID Associated CSS Site Name
PPRC Replicated Resources Worksheet: Use the PPRC Replicated Resources Worksheet to record information about the PPRC replicated resources that go in PowerHA SystemMirror resource groups Fill in one table per PPRC resource to be configured. Note that the information in all rows except the PPRC Volume Pairs (last row) will be the same for each replicated resource.
Primary Storage ID PowerHA SystemMirror Sites ESS Pair LSS Pair PPRC Link Type PPRC Critical Mode PPRC Recovery Action Volume Group This information will be the same for both sites. Secondary
Primary Port Pair ID-1 Port Pair ID-2 PPRC Volume Pairs Continue adding pairs as needed.
Secondary
Sample PPRC Replicated Resource Worksheet This sample shows the information for several PPRC volume pairs (replicated resources). Note the information is the same except for the PPRC Volume Pair numbers. In SMIT, you will need to add all the information for each replicated resource.
Primary Storage ID PowerHA SystemMirror Sites ESS Pair LSS Pair PPRC Link Type PPRC Critical Mode PPRC Recovery Action Volume Group Port Pair ID -1 Port Pair ID-2 PPRC Volume Pairs 10020 11024 1000 1001 1002 1003 1004 1005 1006 10101 10001 3033 3034 3035 3036 3037 3038 3039 IBM.2107-111111 Site A 10 DS8000 FCP on Automatic VG1 Secondary IBM.1750-222222 Site B 10 DS6000
Planning Worksheets for SVC PPRC: You should print and use the paper planning worksheets from the PDF version of this guide instead of the HTML version. The PDF version has each worksheet aligned to start at the top of a page. You may need more than one copy of some of the worksheets. SVC Cluster Worksheet: Use the SVC Cluster Worksheet to record information for defining SVC clusters to be accessed by PowerHA SystemMirror
PowerHA SystemMirror Site SVC Cluster Name SVC Cluster Role (Master or Auxiliary) SVC Cluster IP Address Remote SVC Partner .
PPRC-Mirrored Volumes Worksheet: Use the PPRC-Mirrored Volumes Worksheet to record PPRC pairs and the associated PPRC replicated resources
Site Name ESS Serial Number PPRC Replicated Resource Name Site Name ESS Serial Number
P/S *
Volumes Volume ID
Volumes Volume ID
Logical Subsystem ID
*P/S - Specify primary or secondary Note: Enter the volume information so that the information for each volume in a PPRC pair appears on the same line. Sample PPRC-Mirrored Volumes Worksheet
Site Name Site1 ESS Serial Number 22011 PPRC Replicated Resource Name pprc4.1 pprc4.2 pprc5.1 pprc5.2
P/S * P P P P
Logical Subsystem ID 16 16 16 16
SVC PPRC Relationships Worksheet: Use the SVC PPRC Relationships Worksheet to record information about volume groups in SVC PPRC relationships. Fill in the worksheet so that the information for each volume group appears on the same line.
Site name    SVC Cluster Name    VG name    VG Major Num    SVC PPRC Rel. name    AIX vpath name    SVC Disk Name & Role    Site name    SVC Cluster name    SVC Disk name & Role
Use the SVC PPRC Replicated Resources Worksheet to list the SVC Consistency Group names (and their related SVC Clusters) that will be used as the PPRC Replicated Resource names when defining PowerHA SystemMirror Resource Group components.
SVC PPRC Consistency Group Name (rep. resource name) PowerHA SystemMirror Resource Group Assoc. PowerHA SystemMirror Site Assoc. PowerHA SystemMirror Site
Recovery Action
List of SVC relation- ships Copy Type svcRel1 svcRel2 svcRel3 svcRel4
Recovery Action
If you do not have any application data, then follow the migration recommendations and steps described in the DS8700 migration documentation.

To plan for the deployment of the IBM DS8700 across two sites, complete the following steps:
1. Identify your production site for the application. A production site is the location where your application and its data would primarily reside and function.
2. Identify the volumes on the DS8700 Storage System that contain the application data that you want to be Highly Available with Disaster Recovery.
3. Identify the storage unit(s) and AIX hosts that will be running on the recovery site.
4. Ensure that there are a sufficient number of volumes and Fibre Channel ports available on the storage systems for the production site and recovery site. This allows a mirror path or PPRC path to be created between the storage units.
5. Verify that there is a FlashCopy relationship established for each volume that resides on the production site and recovery site.

Planning the PowerHA SystemMirror implementation:

To successfully implement PowerHA SystemMirror you must plan accordingly. Before you implement PowerHA SystemMirror you must complete the following:
v Collect the following information for all the HMCs in your environment: IP addresses, login names and passwords, and associations with various storage units.
v Verify that all the data volumes that need to be mirrored are visible to all relevant AIX hosts. The DS8700 volumes should be appropriately zoned so that the FlashCopy volumes are not visible to the PowerHA SystemMirror nodes.
v Ensure all HMCs are accessible using the TCP/IP network for all PowerHA SystemMirror nodes where you want to run Global Mirror.
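One way to confirm this connectivity from each PowerHA SystemMirror node is to ping each HMC and then run a simple DSCLI query against it. The addresses and credentials below are placeholders for your own values:

   ping -c 3 <hmc_ip_address>
   /opt/ibm/dscli/dscli -hmc1 <hmc_ip_address> -user <userid> -passwd <password> lssi

The lssi output lists the storage image(s) managed by that HMC, including the storage image identifier that is used later when you define Storage Systems to PowerHA SystemMirror.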
Installation Prerequisites
Before installing PowerHA SystemMirror for AIX Enterprise Edition for Global Mirror, make sure the following is true for each node in your cluster: v You have root access to each node.
Software Requirements
The following software is required for PowerHA SystemMirror for AIX Enterprise Edition for Global Mirror:
v Global Mirror functionality works with all the AIX levels that are supported by PowerHA SystemMirror Standard Edition.
v PowerHA SystemMirror for AIX Enterprise Edition 6.1, or later, with the APARs stated in the support flash and README.
v The IBM DS8700 microcode bundle must be 75.1.145.0, or later.
v The latest DS8000 DSCLI client interface must be installed on each PowerHA SystemMirror node.
v Add the path name for the dscli client to the PATH for the root user on each PowerHA SystemMirror node.
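For example, assuming the DSCLI client is installed in the default /opt/ibm/dscli directory (adjust the path if your installation differs), you might append the following line to the root user's .profile on each node:

   export PATH=$PATH:/opt/ibm/dscli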
Installation Steps
To install Global Mirror, complete the following steps:
1. Insert the PowerHA SystemMirror for AIX Enterprise Edition installation media into the CD or DVD drive.
2. From the command line, enter smitty install.
3. From SMIT, select Install and Update Software > Install Software.
4. Select the following filesets from the list: cluster.es.genxd.rte, cluster.es.genxd.cmds, and cluster.msg.genxd.xx_XX, where xx_XX is the message fileset for your language, and press Enter.
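Alternatively, the filesets can be installed from the command line. This sketch assumes the installation media is mounted at /dev/cd0 and uses en_US as an example message language with the fileset naming shown in step 4; substitute your own device and language:

   installp -aXYgd /dev/cd0 cluster.es.genxd.rte cluster.es.genxd.cmds cluster.msg.genxd.en_US

   # confirm the filesets installed successfully
   lslpp -l "cluster.es.genxd.*"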
Before you start configuring PowerHA SystemMirror Enterprise Edition for Global Mirror, your environment must meet the following requirements:
v All PowerHA SystemMirror clusters are defined.
v All PowerHA SystemMirror nodes are defined.
v All PowerHA SystemMirror sites are defined.
v All PowerHA SystemMirror resource groups and associated resources are configured and working.

Configuring a Storage Agent:

A Storage Agent is a generic name given by PowerHA SystemMirror for an entity such as the IBM DS8000 Hardware Management Console (HMC). Storage Agents typically provide a single point of coordination and often use TCP/IP as their transport for communication. You must provide the IP address and authentication information that will be used to communicate with the HMC.

Adding a Storage Agent:

To add a Storage Agent, complete the following steps:
1. From the command line, enter smit hacmp.
2. From the SMIT interface, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Configure DS8000 Global Mirror Resources > Configure Storage Agents > Add a Storage Agent and press Enter.
3. Complete the following fields:
Table 1. Adding a Storage Agent fields
Storage Agent Name    Enter the PowerHA SystemMirror name for this HMC.
IP Addresses          Select the list of IP addresses for the HMC.
User ID               Enter the User ID that can access the HMC.
Password              Enter the password for the User ID that can access the HMC.
4. Verify all fields are correct and press Enter.

Changing an existing Storage Agent:

To change an existing Storage Agent, complete the following steps:
1. From the command line, enter smit hacmp.
2. From the SMIT interface, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Configure DS8000 Global Mirror Resources > Configure Storage Agents > Change/Show Storage Agent and press Enter.
3. From the list select the name of the Storage Agent that you want to change and press Enter.
4. Specify the changes you want to make in the fields.
5. Verify the changes are correct and press Enter.

Removing a Storage Agent:

To remove a Storage Agent, complete the following steps:
1. From the command line, enter smit hacmp.
2. From the SMIT interface, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Configure DS8000 Global Mirror Resources > Configure Storage Agents > Remove Storage Agent and press Enter.
3. From the list of Storage Agents select the name of the Storage Agent you want to remove and press Enter.
4. Press Enter to confirm the selected Storage Agent is the one you want to remove.

Configuring a Storage System:

A Storage System is a generic name given by PowerHA SystemMirror for an entity such as a DS8700 Storage Unit. When using Global Mirror, you must associate one Storage Agent with each Storage System. You must provide the IBM DS8700 system identifier for the Storage System. For example, IBM.2107-75ABTV1 is a storage identifier for a DS8000 Storage System.

Adding a Storage System:

To add a Storage System, complete the following steps:
1. From the command line, enter smit hacmp.
2. From the SMIT interface, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Configure DS8000 Global Mirror Resources > Configure Storage Systems > Add a Storage System and press Enter.
3. Complete the following fields:
Table 2. Adding a Storage System fields
Storage System Name               Enter the PowerHA SystemMirror name for the storage system. This name must be unique within the cluster definition.
Site Association                  Enter the PowerHA SystemMirror site name.
Vendor Specific Identification    Enter the vendor-specific unique identifier for the storage system.
WWNN                              The World Wide Node Name for this storage system. Every storage system that supports FCP has a unique name.
Storage Agent Name                Press F4 to select, from a list, the name(s) of the Storage Agent(s) that manage this Storage System.
4. Verify all fields are correct and press Enter.

Changing a Storage System:

To change a Storage System, complete the following steps:
1. From the command line, enter smit hacmp.
2. From the SMIT interface, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Configure DS8000 Global Mirror Resources > Configure Storage Systems > Change/Show Storage System and press Enter.
3. From the list select the name of the Storage System that you want to change and press Enter.
4. Specify the changes you want to make in the fields.
5. Verify the changes are correct and press Enter.

Removing a Storage System:

To remove a Storage System, complete the following steps:
1. From the command line, enter smit hacmp.
2. From the SMIT interface, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Configure DS8000 Global Mirror Resources > Configure Storage Systems > Remove Storage System and press Enter.
3. From the list of Storage Systems select the name of the Storage System you want to remove and press Enter.
4. Press Enter to confirm the selected Storage System is the one you want to remove.

Configuring a Mirror Group:

A Mirror Group is a generic name given by PowerHA SystemMirror for a logical collection of volumes that have to be mirrored to another Storage System that resides on a remote site. A Global Mirror Session represents a Mirror Group.

Adding a Mirror Group:

To add a Mirror Group, complete the following steps:
1. From the command line, enter smit hacmp.
2. From the SMIT interface, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Configure DS8000 Global Mirror Resources > Configure Mirror Groups > Add Mirror Group and press Enter.
3. Complete the following fields:
Table 3. Adding a Mirror Group fields
Mirror Group Name
    Enter the PowerHA SystemMirror name for the replicated resource. This is the name that will be included in a PowerHA SystemMirror resource group.
Storage System Name(s)
    From the list, select the Storage Systems on the production site that have data volumes forming this Mirror Group.
Global Mirror Session Identifier
    Enter the Global Mirror session identifier.
Recovery Action
    Specify the disaster recovery policy to indicate the action to be taken by PowerHA SystemMirror in case of a site fallover. Enter Manual if manual intervention is required when site fallover occurs, or Automated if you do not want manual intervention when site fallover occurs. Specifying Manual does not mean that manual intervention is required for all failover scenarios. There are some conditions, such as a cluster partition, in which an automatic failover from one site to another can cause potential data divergence and integrity issues. If PowerHA SystemMirror detects the potential for such a case, and the recovery action associated with the Mirror Group is set to Manual, PowerHA SystemMirror does not execute an automatic failover.
Maximum Coordination Time
    Enter the maximum amount of time (in seconds) that the DS8700 can hold an I/O issued from the host for the purpose of Global Mirror operations.
Maximum Drain Time
    Enter the maximum amount of time (in seconds) allowed for data to be drained before failing the current consistency group.
Consistency Group Interval Time
    Enter the time (in seconds) to wait between the formation of two consistency groups.
4. Verify that all fields are correct and press Enter.

Changing a Mirror Group:

To change a Mirror Group, complete the following steps:
1. From the command line, enter smit hacmp.
2. From the SMIT interface, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Configure DS8000 Global Mirror Resources > Configure Mirror Groups > Change/Show Mirror Group and press Enter.
3. From the list, select the name of the Mirror Group that you want to change and press Enter.
4. Specify the changes you want to make in the fields.
5. Verify that the changes are correct and press Enter.

Removing a Mirror Group:

To remove a Mirror Group, complete the following steps:
1. From the command line, enter smit hacmp.
2. From the SMIT interface, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Configure DS8000 Global Mirror Resources > Configure Mirror Groups > Remove Mirror Group and press Enter.
3. From the list of Mirror Groups, select the name of the Mirror Group you want to remove and press Enter.
4. Press Enter to confirm that the selected Mirror Group is the one you want to remove.

Configuring Resource Groups:

After configuring the Mirror Group, include the Mirror Group in the desired PowerHA SystemMirror resource group. Configure resource groups by following the procedures in the Configuring HACMP resource groups topic. Make sure that you understand site support for HACMP resource groups. For more information about site support for HACMP resource groups, see the Planning resource groups topic.

When you configure a resource group, it must meet the following requirements:
v The site policy is set to Prefer Primary Site or Online on Either Site.
v A startup policy other than Online on All Available Nodes is specified.
v The Resource Group Processing Ordering is set to Serial.

To add a Global Mirror replicated resource to a resource group, complete the following steps:
1. From the command line, enter smit hacmp.
2. From the SMIT interface, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Configuration > Change/Show Resources and Attributes for a Resource Group and press Enter.
3. Enter the following:
a. The name of the mirror groups in the Global Mirror Replicated Resources field.
b. The name of the volume groups associated with the individual Global Mirror replicated resources in the Volume Groups field.
Note: The volume group names must be listed in the same order as the DS8700 mirror group names.
4. Verify and synchronize the cluster. For information on how to verify and synchronize a cluster, see the Verifying and synchronizing a cluster configuration topic.
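The storage-side values used in this configuration (the DS8700 storage image identifier, the WWNN, and the Global Mirror session) can be checked from the DS8000 command-line interface. The following is a minimal sketch, not taken from this guide; it assumes the DSCLI client is installed, and the HMC address, user, storage image ID, and LSS number shown are placeholders for your own values.

dscli -hmc1 <hmc_address> -user <user> -passwd <password> lssi
   # lists the storage image ID (for example, IBM.2107-75ABTV1) and its WWNN
dscli -hmc1 <hmc_address> -user <user> -passwd <password> lssession -dev IBM.2107-75ABTV1 10
   # lists the Global Mirror session defined for LSS 10 (illustrative LSS number)
dscli -hmc1 <hmc_address> -user <user> -passwd <password> showgmir -dev IBM.2107-75ABTV1 10
   # shows the state of the Global Mirror master for that LSS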
SVC virtualizes storage, assists in managing Peer-to-Peer Remote Copy (PPRC) services on TotalStorage systems, and simplifies how you can integrate PPRC replicated resources into a PowerHA SystemMirror configuration. SVC management allows SVC PPRC resources to be managed by PowerHA SystemMirror with a minimum of user configuration. IBM TotalStorage SAN Volume Controller version 4.2 is supported. For more information on the IBM TotalStorage SAN Volume Controller, see: http://www.redbooks.ibm.com/redbooks/pdfs/sg246423.pdf
Related information
IBM System Storage SAN Volume Controller
v SVC PPRC Command Line Interface and/or GUI to manually manage SVC PPRC consistency groups and relationships.
Definition of terms
It is important to understand how the terms master and auxiliary are used in the PowerHA SystemMirror Enterprise Edition Metro Mirror for SVC environment. In general, master and auxiliary refer to the SVC virtual disks that are on either end of an SVC PPRC link. Primary and secondary refer to the PowerHA SystemMirror sites that host the resource groups that manage the SVC PPRC replicated resources that contain those SVC PPRC links. The terms master and auxiliary can also refer to the SVC clusters themselves. In general, the Master SVC Cluster is connected to the PowerHA SystemMirror Production Site, and the Auxiliary SVC Cluster is connected to the PowerHA SystemMirror Backup/Recovery Site. See the Installation Guide and Glossary, as well as the Overview for PowerHA SystemMirror Enterprise Edition for Metro Mirror, for more definitions used in PowerHA SystemMirror and PPRC.
Related concepts
Overview of PowerHA SystemMirror Enterprise Edition for Metro Mirror on page 1
PowerHA SystemMirror Extended Distance (PowerHA SystemMirror Enterprise Edition) for synchronous Peer-to-Peer Remote Copy (PPRC), now known as and referred to as Metro Mirror, increases data availability for IBM TotalStorage volumes that use PPRC to copy data to a remote site for disaster recovery purposes.
Related information
Master glossary
Installation guide
Limitations and restrictions for PowerHA SystemMirror Enterprise Edition Metro Mirror for SVC
The current release of PowerHA SystemMirror Enterprise Edition SVC PPRC has some limitations and restrictions. These include:
v SVC Host Naming Convention. Although SVC host name aliases are arbitrary, for PowerHA SystemMirror Enterprise Edition SVC PPRC they must match the node names you define for the PowerHA SystemMirror Enterprise Edition sites. This ensures that the ssh commands used to execute SVC tasks are completed for the correct nodes.
v SSH must be installed and configured. The commands that PowerHA SystemMirror uses to communicate with the SVC PPRC cluster require ssh. Therefore, some version of ssh must be installed and configured on all SVC PPRC cluster nodes (see the sketch after this list).
v Volume Groups. Resource groups to be managed by PowerHA SystemMirror cannot contain volume groups with both SVC PPRC-protected and non SVC PPRC-protected disks. For example:
VALID: RG1 contains VG1 and VG2, both PPRC-protected disks.
INVALID: RG2 contains VG3 and VG4; VG3 is PPRC-protected and VG4 is not.
INVALID: RG3 contains VG5, which includes both PPRC-protected and non-protected disks within the same volume group.
v SVC Cluster names and PowerHA SystemMirror Host aliases. The host aliases on your SVC clusters must match the node names (short names) used in PowerHA SystemMirror to define each cluster node.
v C-SPOC Operations. You cannot use C-SPOC for the following LVM operations to configure nodes at the remote site (that contain the target volumes):
Creating or extending a volume group.
Operations that require nodes at the target site to write to the target volumes (for example, changing filesystem size, changing mount point, or adding LVM mirrors) cause an error message in C-SPOC. However, nodes on the same site as the source volumes can successfully perform these tasks. The changes are subsequently propagated to the other site via lazy update.
For C-SPOC operations to work on all other LVM operations, it is highly recommended that you perform all C-SPOC operations when the cluster is active on all PowerHA SystemMirror nodes and the underlying SVC consistency groups are in a consistent_synchronized state.
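Before configuring SVC PPRC resources, it can help to confirm that each PowerHA SystemMirror node can reach both SVC clusters over ssh without a password prompt. This is a hedged sketch only; the admin user name and the placeholder cluster addresses are assumptions, not values from this guide.

ssh admin@<Master SVC Cluster IP> svcinfo lscluster -delim :
ssh admin@<Auxiliary SVC Cluster IP> svcinfo lscluster -delim :
   # each command should return cluster details without prompting for a password;
   # if it prompts, ssh keys are not yet distributed for the admin user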
In a typical setup with AIX server nodes in a wide-area PowerHA SystemMirror cluster configuration, the server nodes at each PowerHA SystemMirror cluster site are connected directly to the local SVC clusters. Two or more inter-cluster links are required between the two SVC clusters. The PowerHA SystemMirror nodes at the same geographical site access the same shared volume groups, but the nodes at each site access them from different physical volumes. SVC PPRC maintains separate, identical local copies of the application data on two separate back-end storage subsystems. Several virtual disks are mirrored via the master SVC cluster PPRC from the primary site to the standby site. When a node or site failure occurs, access to disk resources is not passed from one node to another. Instead, all highly available applications are restarted at the standby site using the data copy on the secondary volumes.

Under normal operation, the application is active on a server at the production site, and all updates to the application data are automatically replicated to the backup disk subsystem by the SVC PPRC framework. PPRC protects the backup copy from inadvertent modification. When a total site failure occurs, the application is restarted on a backup server at the remote site. Prior to restarting the application (which, in this context, means not only the application that end users interact with, but also all dependent database software or other middleware), PowerHA SystemMirror Enterprise Edition initiates SVC actions to ensure that the backup disk volumes are in the appropriate state to allow application access.

PowerHA SystemMirror support for geographic clustering is based on the concept of replicated resources. These are defined as a resource type that has a primary and a secondary instance, corresponding to the source and target of data copies that are replicated across two locations. In SVC PPRC, a PPRC consistency group (a list of SVC relationships defined to SVC, associated with a geographically separated PowerHA SystemMirror cluster definition, and defined to PowerHA SystemMirror) is referred to as an SVC PPRC replicated resource. The definition for an SVC PPRC replicated resource contains the VDisk name and the PowerHA SystemMirror volume groups associated with the VDisk pairs. SVC recognizes which volumes mirror each other for each PPRC replicated resource.

In the example shown here, the PowerHA SystemMirror production site includes:
v Server A and Server B
v The ESS labeled Primary ESS
v A SAN Volume Controller connected by Fibre Channel PPRC links to the SVC at the secondary site and two connections to the Primary ESS.
The PowerHA SystemMirror recovery site includes:
v Server C and Server D
v The ESS labeled Secondary ESS
v A SAN Volume Controller connected by Fibre Channel PPRC links to the SVC at the primary site and two connections to the Secondary ESS.
The PowerHA SystemMirror resource group contains:
v The four server nodes
v IP networks to connect the server nodes
v One or more shared volume groups
v The PPRC replicated resource associated with the volumes in the volume group.
The configuration also includes point-to-point networks for heartbeating to connect the cluster nodes. Note that all nodes also require connections to the SVC. At least one storage controller is connected to each SVC cluster via a SCSI or Fibre Channel connection.
Example PowerHA SystemMirror Enterprise Edition Metro Mirror for SVC mutual takeover configuration: This example lists the information for a two-node SVC PPRC cluster configured for mutual takeover.
SVC Clusters
------------
svc_Cl1, svc_Cl2

SVC PPRC Relationships Configuration
------------------------------------
Relationship Name    = sample_rel1
Master Vdisk info    = volume_id1@svc_Cl1
Auxiliary Vdisk info = volume_id2@svc_Cl2

SVC PPRC Relationships Configuration
------------------------------------
Relationship Name    = sample_rel2
Master Vdisk info    = volume_id3@svc_Cl1
Auxiliary Vdisk info = volume_id4@svc_Cl2

SVC PPRC-Replicated Resource Configuration for Resource Group RG1
-----------------------------------------------------------------
SVC PPRC Consistency Group Name = CG1
Master SVC Cluster Name         = svc_Cl1
Auxiliary SVC Cluster Name      = svc_Cl2
List of Relationships           = sample_rel1, sample_rel2

** SVC PPRC Relationships Configuration for Consistency Group CG3
------------------------------------------------------------------
Relationship Name    = sample_rel3
Master Vdisk info    = volume_id5@svc_Cl2
Auxiliary Vdisk info = volume_id6@svc_Cl1

SVC PPRC-Replicated Resource Configuration for Resource Group RG2
-----------------------------------------------------------------
SVC PPRC Consistency Group Name = CG1
Master SVC Cluster Name         = svc_Cl2
Auxiliary SVC Cluster Name      = svc_Cl1
List of Relationships           = sample_rel3
From this point, resource groups RG1 and RG2 are configured to include the SVC PPRC-Replicated Resources CG1 and CG2 respectively. Note: Since nodes located at each site treat the pairs differently in some situations, using PPRC pairs in resource groups that have no site preference could lead to unpredictable results.
This procedure assumes that the hdisks and corresponding vpaths made available to your nodes are visible at those nodes. If they are not, and you can verify that SVC has been configured correctly to make the virtual disks (vDisks) available, reboot the node and run cfgmgr to make the disks visible.
Note: We recommend making a copy of the SVC PPRC Mirrored Volumes Worksheet (see Planning Worksheets for SVC PPRC) to record the information gathered below.
Related concepts
Planning Worksheets for SVC PPRC on page 89
You should print and use the paper planning worksheets from the PDF version of this guide instead of the HTML version. The PDF version has each worksheet aligned to start at the top of a page. You may need more than one copy of some of the worksheets.

Discover AIX vpaths associated with SVC vDisks:

You can use SVC vDisks to discover AIX vpaths.
1. Select vDisks from your SVC configuration to be used in SVC PPRC relationships (that will be grouped in consistency groups) that will be managed by PowerHA SystemMirror.
2. Based on the vDisks you have chosen to use in a given SVC PPRC relationship, find out which vpaths and hdisks correspond.
a. On a PowerHA SystemMirror node, run:
lsdev -Ccdisk | more
From this output, you can discover which hdisks are associated with the SAN Volume Controller:
HAnode1> lspv
...
hdisk32 Available 81-08-02 SAN Volume Controller Device
hdisk33 Available 81-08-02 SAN Volume Controller Device
hdisk34 Available 81-08-02 SAN Volume Controller Device
hdisk35 Available 81-08-02 SAN Volume Controller Device
hdisk36 Available 81-08-02 SAN Volume Controller Device
hdisk37 Available 81-08-02 SAN Volume Controller Device
vpath0  Available          Data Path Optimizer Pseudo Device Driver
vpath1  Available          Data Path Optimizer Pseudo Device Driver
vpath2  Available          Data Path Optimizer Pseudo Device Driver
vpath3  Available          Data Path Optimizer Pseudo Device Driver
In the example listing above, hdisk32 - hdisk37 are all SAN Volume Controller devices.
b. Then run one of the following commands:
lsvpcfg | grep <hdisk>, to list the vpath for a specific hdisk
lsvpcfg | grep vpath, to list information for all vpaths.
SVC-related vpaths have output similar to this:
[HAnode1][/usr/es/sbin/cluster/svcpprc/utils]> lsvpcfg | grep vpath vpath2 (Avail pv ) 6005076801898009980000000000012A = hdisk4 (Avail ) hdisk11 (Avail ) hdisk24 (Avail ) hdisk31 (Avail ) vpath3 (Avail pv ) 60050768018980099800000000000129 = hdisk5 (Avail ) hdisk12 (Avail ) hdisk25 (Avail ) hdisk32 (Avail )
Run the following command to get any associated vDisk value. (If there is doubt as to the output, run the command without the trailing 'grep' and 'cut' ):
ssh admin@<SVC Cluster IP address> svcinfo lshostvdiskmap -delim : | grep <SVC volume ID value> | cut -f5 -d":"
Example:
HAnode1> ssh admin@9.22.22.22 svcinfo lshostvdiskmap -delim : | grep 6005076801898009980000000000012C | cut -f5 -d":"
vDisk1
HAnode1>
The value returned should be the name given to the vDisk during its creation on the SVC system.

Set up volume group and file system for PowerHA SystemMirror management:

On the vpath(s) that correspond to the master vDisk(s) for a given SVC PPRC relationship, set up the volume group and file system to be managed by PowerHA SystemMirror. Ensure that the volume major number for the volume group can be used on all PowerHA SystemMirror cluster nodes, and that the physical volume name for the file system can also be used across all PowerHA SystemMirror cluster nodes.
1. Use the lvlstmajor command on each cluster node to display the available volume major numbers.
2. Vary off (using varyoffvg) the newly created volume group and import it to all nodes in the local PowerHA SystemMirror site.
3. Create a temporary SVC PPRC relationship to copy the volume group/file set information to the auxiliary vDisk. Run the following commands. (Refer to the SVC CLI documentation for more details.)
ssh admin@<master SVC Cluster IP> svctask mkrcrelationship -master <vDisk_name> -aux <vDisk_name> -cluster <Aux Cluster name> -name <relationship name>
ssh admin@<master SVC Cluster IP> svctask startrcrelationship <relationship name>
At this point, wait until the relationship moves from the inconsistent_copying to the consistent_synchronized state. Check the state by running:
ssh admin@<master SVC Cluster IP> svcinfo lsrcrelationship <relationship name>
4. Once the SVC PPRC relationship has completed copying, delete the relationship:
ssh admin@<master SVC Cluster IP> svctask rmrcrelationship <relationship name>
This step is necessary in order for the next LVM operations to complete successfully.
5. Using SMIT or the command line on the backup PowerHA SystemMirror site (the site that is connected to the auxiliary SVC cluster), import the volume group(s) created in step 2c (a sketch of this and the next step follows).
6. Ensure that on ALL cluster nodes, the AUTO VARYON feature for volume groups is set to NO. On each node run:
chvg -a 'n' -Q 'y' <volume group name>
PowerHA SystemMirror will attempt to auto-correct this during verification but will not be able to do so in the case of remote PPRC. (If done at this point, it will save you time later.)
At this point, the volume groups and file systems necessary to configure PowerHA SystemMirror have been created.
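The following is a minimal sketch of steps 5 and 6 on a backup-site node, assuming an illustrative volume group name (datavg), major number (55), and hdisk (hdisk10); substitute the values from your own worksheet.

importvg -V 55 -y datavg hdisk10     # import using the same major number chosen at the primary site
chvg -a 'n' -Q 'y' datavg            # disable AUTO VARYON so PowerHA SystemMirror controls varyon
varyoffvg datavg                     # leave the volume group offline until the cluster manages it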
PowerHA SystemMirror Site
    The PowerHA SystemMirror site associated with this SVC cluster.
SVC Cluster Name
    Enter the same name used for the SVC cluster. This name cannot be more than 20 alphanumeric characters and underscores.
SVC Cluster Role
    Select Master or Auxiliary. The Master SVC Cluster is usually defined at the Primary PowerHA SystemMirror site, and the Auxiliary SVC Cluster at the Backup PowerHA SystemMirror site.
SVC Cluster IP Address
    The IP address of this cluster, used by PowerHA SystemMirror to submit PPRC management commands.
Remote SVC Partner
    The name of the SVC Cluster that will be hosting vDisks from the other side of the SVC PPRC link.
Related reference SVC Cluster Worksheet on page 89 Use the SVC Cluster Worksheet to record information for defining SVC clusters to be accessed by PowerHA SystemMirror
4. Continue to fill in the table for all volume groups to be managed in this cluster
Related tasks Discover AIX vpaths associated with SVC vDisks on page 106 You can use SVC vDisks to discover AIX vpaths. Related reference PPRC-Mirrored Volumes Worksheet on page 87 Use the PPRC-Mirrored Volumes Worksheet to record PPRC pairs and the associated PPRC replicated resources
Related reference SVC PPRC Replicated Resources Worksheet on page 91 Use the SVC PPRC Replicated Resources Worksheet to list the SVC Consistency Group names (and their related SVC Clusters) that will be used as the PPRC Replicated Resource names when defining PowerHA SystemMirror Resource Group components.
Resource groups that include PPRC replicated resources do not support the startup policy Online on All Available Nodes.
3. Record the Site Management Policy: Prefer Primary Site or Online on Either Site. Prefer Primary Site is recommended for use with PPRC.
4. Record the names of the nodes in the resource group, listing the nodes in the prioritized order in which they will appear in the nodelist for the resource group.
5. Record the name of each SVC-managed PPRC replicated resource (volume set) to be included in the resource group.
6. Record the names of the volume groups associated with the volume sets to include in the resource group.
Related reference Resource Groups for PPRC Replicated Resources Worksheet on page 82 Use the Resource Groups for PPRC Replicated Resources Worksheet to record resource groups that contain PPRC replicated resources.
Related reference Contents of the installation media on page 12 The PowerHA SystemMirror Enterprise Edition for Metro Mirror installation media provides the images for installation on each node in the cluster that can take over a PPRC mirrored volume group.
Check the host names listed there against your PowerHA SystemMirror node names. If they differ, then refer to the SVC documentation on how to change the names to match, either via the SVC CLI or GUI interface.
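A hedged sketch for listing the SVC host aliases so they can be compared with the PowerHA SystemMirror node names; the admin user and the placeholder address and host name are assumptions, not values from this guide.

ssh admin@<SVC Cluster IP> svcinfo lshost
   # compare the host names in this output with the PowerHA SystemMirror node (short) names
ssh admin@<SVC Cluster IP> svcinfo lshost <host_name>
   # shows the WWPNs assigned to one host object (host_name is illustrative)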
SVC Cluster Name
    Enter the same name used by SVC. This name cannot be more than 20 alphanumeric characters and underscores.
SVC Cluster Role
    Select Master or Auxiliary.
PowerHA SystemMirror site
    Select the PowerHA SystemMirror site associated with this SVC cluster.
SVC Cluster IP Address
    The IP address of this cluster.
Remote SVC Partner
    The name of the SVC Cluster that will be hosting vDisks from the other side of the SVC PPRC link.
4. Press Enter when you have finished the definition.
5. Create as many SVC PPRC relationships as necessary to manage all the vpaths used for volume groups that will be managed by HACMP.
Related reference
Setting up volume groups and filesets on SVC PPRC-protected disks on page 105
As part of planning your HACMP/XD PPRC SVC environment, decide what vDisks will be used to support volume groups and file systems on the PowerHA SystemMirror cluster nodes.
SVC PPRC Consistency Group Name
    The name to be used by SVC and also to be used in the resource group configuration. Use no more than 20 alphanumeric characters and underscores.
Master SVC Cluster Name
    The name of the Master cluster, that is, the SVC cluster connected to the PowerHA SystemMirror Primary Site.
Auxiliary SVC Cluster Name
    The name of the SVC cluster connected to the PowerHA SystemMirror Backup/Recovery Site.
List of Relationships
    A list of names of the SVC PPRC relationships.
4. Press Enter.
Setting up PowerHA SystemMirror Enterprise Edition for SVC Metro Mirror on AIX Virtual I/O clients
The Virtual I/O Server allows a machine to be divided into LPARs, with each LPAR running a different OS image, and allows the sharing of physical resources between the LPARs, including virtual SCSI and virtual networking. The VIO Server owns the real PCI adapters (Ethernet, SCSI, or SAN), but lets other LPARs share them remotely using the built-in Hypervisor services. These other LPARs are called Virtual I/O client partitions, or VIO clients. Because they do not need real physical disks or real physical Ethernet adapters to run, they can be created quickly and inexpensively.
As an example, in Figure 1 above, the VIO Server has a few disks that could be SCSI or Fibre Channel storage area network (SAN) disks. The VIO clients use the VIO client device driver just as they would a regular local device disk to communicate with the matching server VIO device driver. The VIO Server then performs the disk transfers on behalf of the VIO client.
When PowerHA SystemMirror Enterprise Edition for SVC Metro Mirror is configured on VIO clients, the SVC clusters are not directly attached to the VIO clients; therefore, the normal SCSI query commands cannot be used to extract the necessary SVC vDisk information. Although no special configuration steps are required to define the SVC PPRC resource to PowerHA SystemMirror Enterprise Edition, the following procedures must already have been performed before verification can succeed. If this has not been done, perform the following actions before proceeding with the usual SVC PPRC configuration steps. These steps assume that the disk subsystems have already been physically attached to the SVC clusters, that all the necessary virtual SCSI server adapters have been created on the servers, and that the virtual client SCSI adapters have been mapped to the client partitions.
1. On the SVC clusters, perform these actions:
a. Identify the managed disks (MDisks) using the svcinfo lsmdisk command.
b. Identify or create the managed disk groups (MDiskgrp) using the svcinfo lsmdiskgrp or svctask mkmdiskgrp command.
c. Identify or create the virtual disks using the svcinfo lsvdisk or svctask mkvdisk command.
d. Map the VDisks to the VIO Servers as hosts using the svctask mkvdiskhostmap command.
2. On the VIO servers, perform these actions. To get access to the regular AIX command line interface on the VIO server, it is recommended that you run oem_setup_env.
a. Run cfgmgr.
b. Use odmget -q "id=unique_id" CuAt to identify the hdisks/vpaths mapped to the SVC vDisks on the servers.
c. Select the disk to export by running lsdev to show the virtual SCSI server adapters that could be used for mapping with a physical disk.
d. Run the mkvdev command using the appropriate hdisk numbers to create the Virtual Target Device (this command maps the LUNs to the virtual I/O clients):
$ mkvdev -vdev hdisk# -vadapter vhost# -dev vhdisk#
e. Map the VDisks to the VIO Servers as hosts using the svctask mkvdiskhostmap command.
3. On the VIO clients. After the mkvdev command runs successfully on the VIO server, the LUNs are exported to the VIO clients.
a. Run lsdev -Cc disk to identify the LUN information on the client.
b. Run the cl_vpath_to_vdisk command to identify the SVC vDisk LUN mapping on the VIO clients.
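As a hedged check, the mapping created by mkvdev can be confirmed on the VIO server before moving to the clients; the adapter name below is illustrative.

$ lsmap -vadapter vhost0
   # shows the backing device (hdisk or vpath) and the virtual target device exported to the client
$ lsmap -all | more
   # lists every virtual SCSI mapping defined on this VIO server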
2. Select Extended Configuration > Extended Verification and Synchronization and press Enter. Enter the field values as follows:
Verify, Synchronize or Both
    Select Verify.
Automatically correct errors found during verification?
    Select Yes.
Force synchronization if verification fails?
    Select No (the default).
Verify changes only?
    Select No to run all verification checks that apply to the current cluster configuration.
Logging
    Standard is the default. You can also select Verbose. Verification messages are logged to /var/hacmp/clverify/clverify.log.
3. Press Enter. The output from the verification is displayed in the SMIT Command Status window. 4. If you receive error messages, make the necessary changes and run the verification procedure again. You may see Warnings if the configuration has a limitation on its availability; for example, only one interface per node per network is configured. Verifying the SVC PPRC configuration: You can verify the SVC PPRC configuration using the CLI. To run the SVC PPRC verification, execute the following command:
/usr/es/sbin/cluster/svcpprc/utils/cl_verify_svcpprc_config
If any configuration errors appear, go back into the PowerHA SystemMirror SMIT panels and correct the errors, then rerun this script. All the SVC PPRC relationships and consistency groups named in the previous steps are created during this step. If there were any errors during execution of this script, the SVC PPRC consistency groups and relationships may not be created. To verify that the configuration exists, run the following two commands against either of the SVC clusters:
ssh admin@<SVC Cluster IP> svcinfo lsrcrelationship ssh admin@<SVC Cluster IP> svcinfo lsrcconsistgrp
Synchronizing the PowerHA SystemMirror cluster configuration: You can propagate the new SVC PPRC configuration information (and possibly PowerHA SystemMirror site information) across your PowerHA SystemMirror cluster. Follow these steps: 1. Enter smit hacmp 2. Select Extended Configuration > Extended Verification and Synchronization and press Enter. Enter the field values as follows:
Verify, Synchronize or Both
    Select Synchronize.
Automatically correct errors found during verification?
    Select No (the default).
Force synchronization if verification fails?
    Select No (the default).
Verify changes only?
    Select No to run all verification checks that apply to the current cluster configuration.
Logging
    Standard is the default. You can also select Verbose. Verification messages are logged to /var/hacmp/clverify/clverify.log.
3. Press Enter. The cluster is synchronized. The output is displayed in the SMIT Command Status window.
Changing an PowerHA SystemMirror Enterprise Edition for Metro Mirror SVC configuration
Using SMIT, you can change, show, or remove cluster configurations, PPRC relationships, and resources. Be sure to update the PowerHA SystemMirror resource group information and synchronize the cluster after making changes.
2. In SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure SVC-PPRC Replicated Resources > SVC Clusters Definition to HACMP > Remove an SVC Cluster and press Enter.
3. Select the SVC cluster to remove from the picklist and press Enter.
4. Confirm that you want to remove this SVC cluster definition by pressing Enter again.
5. Press Enter when you have finished the definition.
6. Create as many SVC PPRC relationships as necessary to manage all the vpaths used for volume groups that will be managed by HACMP.
Related reference
Setting up volume groups and filesets on SVC PPRC-protected disks on page 105
As part of planning your HACMP/XD PPRC SVC environment, decide what vDisks will be used to support volume groups and file systems on the PowerHA SystemMirror cluster nodes.
2. Select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure SVC-PPRC Replicated Resources > SVC PPRC-Replicated Resource Configuration > Change/Show an SVC PPRC Resource, and press Enter.
3. Select the SVC PPRC resource to change and press Enter.
4. Enter field values as follows:
SVC PPRC Consistency Group Name
    The current name used by SVC and also used in the resource group configuration.
New SVC PPRC Consistency Group Name
    The new name to be used by SVC and also to be used in the resource group configuration.
Master SVC Cluster Name
    The name of the Master cluster, that is, the SVC cluster connected to the PowerHA SystemMirror Primary Site.
Auxiliary SVC Cluster Name
    The name of the SVC cluster connected to the PowerHA SystemMirror Backup/Recovery Site.
List of Relationships
    A list of names of the SVC PPRC relationships.
5. Press Enter.
Troubleshooting PowerHA SystemMirror Enterprise Edition for Metro Mirror for SVC
These topics provide information that might help you with troubleshooting SVC PPRC clusters.
where <SVC Cluster IP Address> is for either SVC cluster being used by PowerHA SystemMirror.
The command lists information about all SVC clusters in the PowerHA SystemMirror configuration or a specific SVC cluster. If no SVC is specified, all SVC clusters defined will be listed. If a specific SVC cluster is provided via the -n flag, information about this SVC only will be displayed. The -c flag displays information in a colon-delimited format. Sample output
[/usr/es/sbin/cluster/svcpprc/cmds]> cllssvc
svc9A
svc78
[/usr/es/sbin/cluster/svcpprc/cmds]> cllssvc -n svc9A
svccluster_name svccluster_role sitename  cluster_ip   r_partner
svc9A           Auxiliary       Vancouver 9.114.230.93 svc78
[/usr/es/sbin/cluster/svcpprc/cmds]> cllssvc -n svc9A -c
#SVCNAME:ROLE:SITENAME:IPADDR:RPARTNER
svc9A:Auxiliary:Vancouver:9.114.230.93:svc78
cllssvcpprc command: List information about all SVC PPRC resources or a specific SVC PPRC resource.
cllssvcpprc [-n < svcpprc_consistencygrp >] [-c] [-a] [-h]
If no resource name is specified, the names of all PPRC resources defined will be listed. If the -a flag is provided, full information about all PPRC resources is displayed. If a specific resource is provided via the -n flag, information about this resource only will be displayed. The -c flag displays information in a colon-delimited format. The -h flag turns off the display of column headers. Sample output
[/usr/es/sbin/cluster/svcpprc/cmds]> cllssvcpprc
HASVC1
[/usr/es/sbin/cluster/svcpprc/cmds]> cllssvcpprc -n HASVC1
svcpprc_consistencygrp MasterCluster AuxiliaryCluster relationships
HASVC1                 svc78         svc9A            svcrel1
[/usr/es/sbin/cluster/svcpprc/cmds]> cllssvcpprc -n HASVC1 -ca
#NAME:MASTER:AUXILIARY:RELATIONSHIPS
HASVC1:svc78:svc9A:svcrel1
cllsrelationship command: List information about all SVC PPRC relationships or a specific PPRC relationship.
cllsrelationship [-n <relationship_name>] [-c] [-a] [-h]
If no relationship name is specified, the names of all PPRC relationships defined will be listed. If the -a flag is provided, full information about all PPRC relationships is displayed. If a specific relationship is provided via the -n flag, information about this relationship only will be displayed. The -c flag displays information in a colon-delimited format. The -h flag turns off the display of column headers. Sample output
[/usr/es/sbin/cluster/svcpprc/cmds]> cllsrelationship
svcrel1
[/usr/es/sbin/cluster/svcpprc/cmds]> cllsrelationship -n svcrel1
relationship_name MasterVdisk_info    AuxiliaryVdisk_info
svcrel1           c48f1rp06_075@svc78 c48f2rp08_095@svc9A
[/usr/es/sbin/cluster/svcpprc/cmds]> cllsrelationship -n svcrel1 -c
#RELATIONSHIP:MASTERVDISK:AUXVDISK
svcrel1:c48f1rp06_075@svc78:c48f2rp08_095@svc9A
cl_verify_svcpprc_config command: Verifies the SVC definition in the PowerHA SystemMirror configuration. After the successful verification of the SVC configurations, it establishes all the SVC relationships defined to PowerHA SystemMirror on the SVC clusters and adds them to the corresponding consistency groups.
v Mirroring to BCV devices is not supported.
v Concurrent RDF configuration is not supported.
v PowerHA SystemMirror Enterprise Edition does not support EMC storage systems that are indirectly connected to the AIX host system through a host computer.
v PowerHA SystemMirror does not trap SNMP notification events for DMX storage. If the SRDF link goes down when the cluster is up, and later the link is repaired, you need to manually resynchronize the pairs (see the sketch after this list).
v Pair creation must be done outside of cluster control. You must create the pairs before starting cluster services.
v PowerHA SystemMirror Enterprise Edition SRDF does not correct or resynchronize the pairs if the pair states are in the Invalid state.
Note: A pair in an invalid state might lead to data corruption if PowerHA SystemMirror Enterprise Edition tries to recover the state.
v Resource groups that are managed by PowerHA SystemMirror cannot contain volume groups with disks that are SRDF-protected and disks that are not SRDF-protected. For example:
Correct example: RG1 contains VG1 and VG2, both SRDF-protected disks.
Incorrect example: RG2 contains VG3 and VG4; VG3 is SRDF-protected, and VG4 is not SRDF-protected.
v You cannot use C-SPOC for the following LVM operations to configure nodes at the remote site that contain the target volumes:
Creating a volume group.
Operations that require nodes at the target site to write to the target volumes (for example, changing filesystem size, changing mount point, adding LVM mirrors) cause an error message in C-SPOC. However, nodes on the same site as the source volumes can successfully perform these tasks. The changes are subsequently propagated to the other site via lazy update.
Note: For C-SPOC operations to work on all other LVM operations, you must perform all C-SPOC operations when the cluster is active on all PowerHA SystemMirror nodes, and the underlying SRDF pairs are in a Synchronized state.
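The following is a minimal SYMCLI sketch, not taken from this guide, for checking and manually resynchronizing an SRDF composite group after a link outage; the composite group name CG1 is illustrative.

symrdf -cg CG1 query        # display the SRDF pair states for the composite group
symrdf -cg CG1 establish    # resynchronize the pairs (R1 to R2) once the SRDF link is repaired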
The following image displays a typical implementation of two EMC DMX storage systems with SRDF in a four-node PowerHA SystemMirror geographic cluster. The cluster consists of four System p nodes. Each of the storage systems is connected to each node (Server A, Server B, Server C, and Server D) with SCSI or Fibre Channel links. SRDF links are established between the Primary storage and the Secondary storage. The configuration also includes point-to-point networks for heartbeating to connect the cluster nodes together.
The order of the nodes in each resource group's nodelist indicates which site is considered to be the production site and which site is considered to be the backup site for that resource group. The site that includes the nodes with the highest priority is considered the production site for the resource group. In the preceding image, the configuration for resource group 1 is the following:
v Ordered sitelist: Production site, Recovery site
v Ordered nodelist: Server A, Server B, Server C, and Server D
The data mirroring goes from the Primary storage, which is owned by the production site, to the Secondary storage, which is owned by the recovery site.
Complete the following worksheets to help you plan your environment's SRDF management configuration: v General PowerHA SystemMirror planning worksheets on page 81 v Planning worksheets for SRDF on page 126
2. From the list shown in Step 1, select the devices where the application data will be stored. 3. Identify the AIX disks for the Symmetrix device you selected by running the following command:
# powermt display dev=Symmetrix_device_name
Pseudo name=Symmetrix_device_name
In the following example, the Symmetrix device maps to hard disks hdisk105, hdisk50, hdisk120, and hdisk175 through the two I/O paths, fscsi1 and fscsi0:
Symmetrix ID=000190100304
Logical device ID=0026
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host ---------------   - Stor -  -- I/O Path -  -- Stats ---
### HW Path     I/O Paths    Interf.    Mode      State    Q-IOs   Errors
==============================================================================
  1 fscsi1      hdisk105     FA 13aB    active    alive        0        0
  0 fscsi0      hdisk120     FA  4aB    active    alive        0        0
  0 fscsi0      hdisk175     FA 13aB    active    alive        0        0
  1 fscsi1      hdisk50      FA  4aB    active    alive        0        0
4. Create a volume group with the physical volumes of the Symmetrix device, and create the logical volumes and file systems on it. These Logical Volume Manager (LVM) configurations must be done on the primary site.
5. Create composite groups and device groups on the EMC storage and add Symmetrix devices to these groups (a sketch follows this list).
6. Establish full mirroring between the SRDF pairs so that the entire LVM metadata and other data are copied over to the mirrored volumes.
7. Configure the PowerHA SystemMirror Enterprise Edition SRDF replicated resources.
8. Split the SRDF pairs by using the split command to access the mirrored data on the secondary site.
9. Import the volume groups on the secondary site by specifying the same volume group names and major numbers as on the primary site for the mirrored device hard disks.
10. Vary on the volume groups and mount the file systems.
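A hedged sketch of steps 5 and 6, reusing the Symmetrix ID and logical device from the powermt example above; the composite group name CG1 is illustrative, and the exact options should be confirmed against the EMC Solutions Enabler documentation.

symcg create CG1 -type RDF1                   # create a composite group on the R1 (production) side
symcg -cg CG1 add dev 0026 -sid 000190100304  # add the Symmetrix device to the composite group
symcg -cg CG1 enable                          # enable consistency protection for the group
symrdf -cg CG1 establish -full                # step 6: establish full mirroring for the SRDF pairs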
Prerequisite Software
Before installing PowerHA SystemMirror Enterprise Edition SRDF management, the following software must be installed on the cluster nodes:
v Your environment must have one of the following installed:
AIX Version 6.1 with the 6100-02 Technology Level, or later
AIX 5L Version 5.3 with the 5300-09 Technology Level, or later
v PowerHA SystemMirror Enterprise Edition Version 6.1 (cluster.es.server.rte 6.1), or later
v Solutions Enabler for AIX software (SYMCLI software for AIX). The fileset level for SYMCLI.SYMCLI.rte must be 7.0.0.0, or later.
Installing Filesets
If you have not already done so, install the required filesets. The following filesets are included on the PowerHA SystemMirror Enterprise Edition media and must be installed for SRDF management to work:
v cluster.es.sr.cmds - PowerHA SystemMirror Enterprise Edition for SRDF Mirror runtime commands for SRDF support
v cluster.es.sr.rte - PowerHA SystemMirror Enterprise Edition for SRDF Mirror commands for SRDF support
v cluster.msg.en_US.sr - PowerHA SystemMirror Enterprise Edition for SRDF Mirror messages for SRDF support
Complete the following steps to install the PowerHA SystemMirror SRDF filesets (a command-line sketch follows these steps):
1. From the SMIT interface, enter install.
2. Select Install and Update Software.
3. Select Install Software.
4. Select the appropriate message files for your location. For example, select the cluster.msg.srdf.en_US fileset to install the English fileset.
Note: All PowerHA SystemMirror SRDF filesets are installed in the /usr/es/sbin/cluster/emcsrdf directory.
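As an alternative to the SMIT path, the filesets can be installed from the command line. This is a hedged sketch that assumes the installation media is available at /dev/cd0.

installp -acgXd /dev/cd0 cluster.es.sr.cmds cluster.es.sr.rte cluster.msg.en_US.sr
lslpp -l "cluster.es.sr.*"     # confirm the filesets are at the expected level after installation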
v The PowerHA SystemMirror cluster is configured for the following: Nodes Sites Networks and network interfaces Service labels and application monitors Initial resource groups Note: You can modify the attributes for a resource group later to accommodate SRDF-replicated resources.
4. Follow the instructions in the /usr/lpp/EMC/README file to configure an EMC PowerPath emcpowerreset custom disk processing method.
5. For the Force synchronization if verification fails field, select No.
6. For the Verify changes only field, select No.
7. For the Logging field, select Standard.
Note: The verification messages are stored in the /var/hacmp/clverify/clverify.log file.
8. Press Enter to display the output from the verification. If you receive error messages, make the necessary changes and run the verification procedure again. You might see warnings if the configuration has a limitation on its availability. For example, only one interface per node per network is configured.
Recovery Action
    The PowerHA SystemMirror recovery action on SRDF volumes during the failover process. This value is used by PowerHA SystemMirror to decide whether or not to complete the failover in cases where it is not possible for PowerHA SystemMirror to automatically decide whether the Primary storage or the Secondary storage contains valid data.
Consistency Enabled
    Value specifying whether consistency is enabled for the composite group.
You must update the PowerHA SystemMirror resource group information and synchronize the cluster after making these changes.
SRDF Mode
The following SRDF modes are allowed for SRDF-managed replicated resources (a command-line sketch for switching modes follows):
SRDF Synchronous Replication
    When the synchronous mode of mirroring is chosen, the SRDF pair mode is set to sync. In synchronous mode, the EMC storage responds to the host that issued a write operation to the source of the composite group only after the EMC storage containing the target of the composite group acknowledges that it has received and checked the data.
SRDF Asynchronous Replication
    When the asynchronous mode of mirroring is chosen, the SRDF pair mode is set to async. In asynchronous mode, the EMC storage provides a consistent point-in-time image on the target of the composite group, which is a short period of time behind the source of the composite group. Managed in sessions, asynchronous mode transfers data in predefined timed cycles or delta sets to ensure that the data at the remote target site is dependent-write consistent.
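A hedged SYMCLI sketch for switching the SRDF mode of a composite group; the group name CG1 is illustrative and is not defined in this guide.

symrdf -cg CG1 set mode sync     # synchronous replication
symrdf -cg CG1 set mode async    # asynchronous (SRDF/A) replication
symrdf -cg CG1 query             # confirm the current mode and pair state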
replicated resource, which is defined as the composite group on the EMC storage. Otherwise, consistency is not enabled for the resource.
The SRDF pair states referenced in this section include:
v Synchronized
v Split
v Failed Over
v R1 Updated
v R1 UpdInProg
v Suspended
v Consistent
v Transmit Idle
Hitachi storage systems support short-distance replication and long-distance replication through Truecopy synchronous replication and asynchronous replication with Hitachi Universal Replicator (HUR) technologies. PowerHA SystemMirror Enterprise Edition enables integrated discovery and management of mirrored resources for management of highly available (HA) resource groups. PowerHA SystemMirror Enterprise Edition enablement for high availability and disaster recovery (HADR) of Hitachi mirrored storage involves the following steps:
1. Plan the storage deployment and replication necessary for your environment. This process is related to the applications and middleware being deployed in the environment, which would eventually be HA-managed by PowerHA SystemMirror Enterprise Edition.
2. Use the storage configuration tools provided by Hitachi to configure the storage devices you defined in step 1 and deploy them.
3. Use the PowerHA SystemMirror Enterprise Edition interfaces to discover the deployed storage devices and define the HA policies for the applications or resource groups that are consuming the mirrored storage.
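Before PowerHA SystemMirror Enterprise Edition can discover Hitachi mirrored storage, the CCI HORCM instance must be running on each node. The following is a minimal sketch, assuming CCI is installed and a horcm2.conf file has been created; the instance number and dev_group name are illustrative.

horcmstart.sh 2            # start HORCM instance 2 on this node
export HORCMINST=2         # make CCI commands default to instance 2
pairdisplay -g VG01 -IH2   # confirm the replication pairs in dev_group VG01 are visible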
v Only fence level NEVER is supported for synchronous mirroring.
v Only HUR is supported for asynchronous mirroring.
v The dev_name must map to a logical device, and the dev_group should be defined under the HORCM_LDEV section in the horcm.conf file.
v The PowerHA SystemMirror Enterprise Edition Truecopy/HUR solution uses the dev_group for any basic operation (for example, pairresync, pairevtwait, or horctakeover). If there are several dev_names in a dev_group, the dev_group must be consistency enabled.
v PowerHA SystemMirror Enterprise Edition does not trap SNMP notification events for Truecopy/HUR storage. If a Truecopy link goes down when the cluster is up and later the link is repaired, you need to manually resynchronize the pairs.
v The creation of pairs is done outside of cluster control. You must create the pairs before starting cluster services.
v Resource groups that are managed by PowerHA SystemMirror Enterprise Edition cannot contain volume groups with both Truecopy/HUR-protected and non-Truecopy/HUR-protected disks. A resource group must have either Truecopy-protected disks or HUR-protected disks, not both.
v All nodes in the PowerHA SystemMirror Enterprise Edition cluster must use the same horcm instance.
v You cannot use C-SPOC for the following LVM operations to configure nodes at the remote site that contain the target volumes:
Creating a volume group.
Operations that require nodes at the target site to write to the target volumes (for example, changing file system size, changing mount point, or adding LVM mirrors) cause an error message in C-SPOC. However, nodes on the same site as the source volumes can successfully perform these tasks. The changes are then propagated to the other site using lazy update.
Note: For C-SPOC operations to work on all other LVM operations, it is highly recommended that you perform all C-SPOC operations when the cluster is active on all PowerHA SystemMirror Enterprise Edition nodes and the underlying Truecopy/HUR pairs are in a PAIR state.
Related concepts
Planning worksheets for Truecopy/HUR Management on page 135
You can use planning worksheets to start planning for your Truecopy/HUR Management implementation.
Horctakeover Timeout Value
    The -t option for the horctakeover command. Use a short timeout for horctakeover of Truecopy Synchronous. A short timeout or a long timeout can be used for horctakeover of the Hitachi Universal Replicator (HUR). The timeout option you use is your decision. For example, if returning to production is more important to you than having the most current data after a failure, use a short timeout. If ensuring that you lose as little data as possible in a disaster, even if that means the recovery time is elongated, use a long timeout.
Horcm Instance
    The horcm instance that you used. For example, if you use the horcm0.conf file, the value is horcm0. If you use the horcm.conf file, the value is horcm. All nodes in the PowerHA SystemMirror Enterprise Edition cluster must use the same horcm instance. For example, if node1 uses the horcm0.conf file, then the same instance (horcm0.conf) must be used by all other nodes in the cluster.
Pairevtwait Timeout Value
    The -t option for the pairevtwait command. Specifies the interval of monitoring a status specified using the -s option and the time-out period in units of 1 second. Use a long timeout for any pairevtwait monitoring a pairresync. The default value is 3600 seconds.
Use the following worksheet to record the Truecopy/HUR replicated resources that go in the PowerHA SystemMirror Enterprise Edition resource groups (a command-line sketch showing how these timeout values are used follows the table):
Table 5. Example Truecopy/HUR replicated resources

Truecopy Resource Name  Truecopy Mode  Device Groups           Recovery Action  Horctakeover Timeout Value  Horcm Instance  Pairevtwait Timeout Value
TRU_RES1                SYNC           Oradb1, Oradb2, Oradb3  MANUAL           300                         horcm0          3600
TRU_RES2                SYNC           Oradb5, Oradb6          MANUAL           300                         horcm0          3600
TRU_RES3                ASYNC                                  AUTO             3600                        horcm0          3600
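The following hedged sketch shows where the worksheet values above are used on the CCI command line; the dev_group, instance, and timeout values are illustrative.

horctakeover -g Oradb1 -t 300 -IH0         # takeover of a dev_group with a 300-second timeout
pairevtwait -g Oradb1 -s pair -t 3600 -IH0 # wait up to 3600 seconds for the pairs to reach PAIR state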
Related concepts Limitations for Truecopy/HUR Management on page 133 To implement Truecopy/HUR Management correctly, you must know its limitations.
You must make sure that the Hitachi hdisks are made available to your nodes before you continue the configuration process. If the Hitachi hdisks are not available to your nodes, you can reboot the nodes and run the cfgmgr command.
Note: PowerHA SystemMirror Enterprise Edition does not create replication pairs using the Hitachi interfaces. You must use the Hitachi Storage interfaces to create the replication pairs before using PowerHA SystemMirror Enterprise Edition to achieve an HA/DR solution. For information about setting up Truecopy/HUR pairs, see the Hitachi Command Control Interface (CCI) User and Reference Guide that is maintained by Hitachi.
2. On the PowerHA SystemMirror Enterprise Edition nodes, find the hdisks that will be managed by the resource group and get the LDEV mapping by running the lsdev command piped to raidscan -find. In the following example, hdisk4 and hdisk5 need to be managed by an HA/DR solution, and the LDEV numbers are 256 for hdisk4 and 257 for hdisk5.
# lsdev -C -c disk | grep hdisk | raidscan -find
DEVICE_FILE  UID  S/F  PORT   TARG  LUN  SERIAL  LDEV  PRODUCT_ID
hdisk4       0    F    CL2-A  0     0    45306   256   OPEN-V
hdisk5       0    F    CL2-A  0     1    45306   257   OPEN-V
3. Use the HORCM LDEV section in the horcm2.conf file to identify the dev_group that will be managed by PowerHA SystemMirror Enterprise Edition. For example if hdisk4 and hdisk5 will be part of the resource group then identify the dev_group from the Horcm configuration file. In the following example, dev_group VG01 (LDEV 256) and VG02 (LDEV 257) need to be managed by the PowerHA SystemMirror Enterprise Edition Truecopy/HUR solution.
HORCM_LDEV
#dev_group  dev_name  Serial#  CU:LDEV(LDEV#)  MU#
VG01        oradb1    45306    256             0
VG02        oradb2    45306    257             0
4. Verify that the PAIRs are established by running the pairvolchk command or the pairdisplay command on the dev_group.
# pairvolchk -g VG01 -IH2 pairvolchk : Volstat is P-VOL.[status = PAIR fence = NEVER MINAP = 3 ]
Note: If pairs are not yet established, you must create the pairs. For instructions about creating pairs, see the CCI/Storage Navigator documentation that is maintained by Hitachi. The following example output is displayed when pairs are not created.
# pairvolchk -g VG02 -IH2 pairvolchk : Volstat is SMPL.[status = SMPL]
5. Verify that dev_group is consistency enabled by running the pairvolchk command and CTGID is in the output. You can also verify if the dev_group is managed by Truecopy SYNC or HUR. If the fence value is NEVER, the dev_group is managed by Truecopy SYNC. If the fence value is ASYNC, the dev_group is managed by HUR.
# pairvolchk -g VG01 -IH2
pairvolchk : Volstat is P-VOL.[status = PAIR fence = NEVER MINAP = 3 CTGID = 2]
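If a dev_group shows SMPL (as VG02 does in the earlier example), the pair must be created with the Hitachi CCI before cluster services are started. This is a hedged sketch only; the fence level, copy direction, and any consistency group or journal options must be taken from the Hitachi CCI documentation for your configuration.

paircreate -g VG02 -f never -vl -IH2       # create the pair, copying from the local (P-VOL) side
pairevtwait -g VG02 -s pair -t 3600 -IH2   # wait until the pair reaches the PAIR state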
4. Once the Truecopy relationship has completed copying, run the pairsplit command to split the relationship. If you do not complete this step, future LVM operations will not complete successfully.
5. On the node at the backup PowerHA SystemMirror Enterprise Edition site, remove the paired Hitachi hdisks by running the rmdev command. For example, hdisk4 at the primary site is paired with hdisk9 at the remote site, and hdisk5 is paired with hdisk10.
# rmdev -d -l hdisk9
# rmdev -d -l hdisk10
7. Using SMIT or the command line on the backup PowerHA SystemMirror Enterprise Edition site (the site that is connected to the secondary Hitachi storage), import the volume groups that you created in step 1.
8. Resynchronize the pairs you split in step 4 by running the pairresync command.
# pairresync -g VG01 -IH2
# pairresync -g VG02 -IH2
9. Verify that on all cluster nodes, the AUTO VARYON feature for volume groups is set to NO by running the chvg command.
chvg -a n -Q y <volume group name here>
Note: PowerHA SystemMirror Enterprise Edition attempts to automatically set the AUTO VARYON to NO during verification, except in the case of remote Truecopy/HUR.
AIX Version 7.1, or later.
v PowerHA SystemMirror Enterprise Edition Version 7.1, or later.
v Command Control Interface (CCI) software for AIX. For more information about CCI, see the Hitachi Command Control Interface (CCI) User and Reference Guide that is maintained by Hitachi.
All PowerHA SystemMirror Enterprise Edition Truecopy/HUR filesets are installed in the /usr/es/sbin/cluster/tc directory.
Adding Truecopy/HUR Replicated Resources to PowerHA SystemMirror Enterprise Edition Resource Groups
After you have configured the Truecopy/HUR replicated resources, you must add them to the PowerHA SystemMirror Enterprise Edition resource groups. To add a Truecopy/HUR replicated resource to a PowerHA SystemMirror Enterprise Edition resource group, complete the following steps:
1. From the command line, enter smitty hacmp.
2. In SMIT, select Extended Configuration > Extended Resource Configuration > Extended Resource Group Configuration, and then choose the option that matches whether you are working with an existing resource group or creating a resource group:
a. If you are working with an existing resource group, select Change/Show Resource Group.
b. If you are creating a resource group, select Add a Resource Group.
The Truecopy Replicated Resources entry appears at the bottom of the page in SMIT. This entry is a picklist that displays the resource names that were created in the previous step. Make sure that the volume groups selected on the Resource Group configuration screen match the volume groups used in the Truecopy/HUR replicated resource.
3. In the Truecopy Replicated Resources field, verify that the volume group that you specify is the same volume group that you specified on the Resource Group configuration panel.
4. Press Enter.
Next, you must verify the cluster configuration.
4. Press Enter. The output from the verification process is displayed in the SMIT Command Status window.
Note: If you receive error messages, make the necessary changes and run the verification procedure again. You might see warning messages if the configuration has a limitation on its availability; for example, only one interface per node per network is currently configured.
/usr/es/sbin/cluster/tc/utils/cl_verify_tc_config
Correct any configuration errors that appear, then run the script again.
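For example, the verification script can be invoked directly and its exit status checked. Assuming it follows the usual convention of returning a nonzero status on failure (an assumption, not something stated in this document), a zero status indicates that no configuration errors were found:

# /usr/es/sbin/cluster/tc/utils/cl_verify_tc_config
# echo $?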
2. In SMIT, select Extended Configuration > Extended Resource Configuration > Configure Hitachi Truecopy/HUR Replicated Resources > Remove Hitachi Truecopy/HUR Replicated Resource, and press Enter.
3. Update the PowerHA SystemMirror Enterprise Edition resource group information and synchronize the cluster.
Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan, Ltd.
1623-14, Shimotsuruma, Yamato-shi
Kanagawa 242-8502 Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
Dept. LRAS/Bldg. 903
11501 Burnet Road
Austin, TX 78758-3400
U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

All IBM prices shown are IBM's suggested retail prices, are current and are subject to change without notice. Dealer prices may vary.

This information is for planning purposes only. The information herein is subject to change before the products described become available.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.

Each copy or any portion of these sample programs or any derivative work, must include a copyright notice as follows:
(your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. Copyright IBM Corp. _enter the year or years_.

If you are viewing this information softcopy, the photographs and color illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at Copyright and trademark information at www.ibm.com/legal/copytrade.shtml.

Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.

Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.
Index

A
adding DSCLI management replicated resource 59 consistency group changing site configuration 73 configuring 69 installing 69 overview 64 planning 65 SVC management 109 planning resource group 66 sample configuration 66 SVC management changing 118 troubleshooting configuration 76 verifying configuration 75 Copy Services Server defining DSCLI managing 57 DSCLI management Copy Services Server Worksheet 86 planning for Direct management 22 planning for DSCLI management 50
C
CD-ROM 14 changing consistency group site configuration 73 Direct management replicated resource configuration 42 site configuration 42 DSCLI management replicated resource configuration 63 site configuration 63 SVC management consistency groups 118 relationships 117 resources 117 cl_verify_svcpprc_config command SVC management 121 cllsrelationship command SVC management 120 cllssvc command SVC management 120 cllssvcpprc command SVC management 120 cluster configuring Direct Management 35 SVC Management 111 PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror 5 starting Direct management 41 DSCLI management 62 SVC management 116 SVC management planning 107 SVC Cluster Worksheet 89 viewing 120 configuring consistency group 69 Direct management 29 cluster 35 overview 30 PPRC paths 22 PPRC tasks 31 prerequisites 29 resource groups 41 support 30 DSCLI management 56 replicated resource 57 resource groups 60 SVC management 111 cluster 111
D
defining Direct management ESS disk subsystems 35 PPRC pairs 36 PPRC tasks 37 replicated resource 35 DSCLI management Copy Services Server 57 ESS disk subsystems 58 SVC management replicated resource configuration 112 Direct management changing replicated resource configuration 42 changing site configuration 42 configuration overview 30 configuring 29 configuring cluster 35 configuring PPRC paths 22 configuring PPRC tasks 31 configuring prerequisites 29 configuring resource groups 41 configuring support 30 defining ESS disk subsystems 35 defining PPRC pair 36 defining PPRC task 37 defining replicated resource 35 ESS Disk Subsystem Worksheet 86 improving volume group performance 39 installation troubleshooting 29 installing 27 installing prerequisites 28 installing software requirements 28 planning 19 planning connections 21 planning Copy Services Server 22
Direct management (continued) planning integration 21 planning PPRC replicated resource 22 planning PPRC resource group 27 planning prerequisites 19 planning tasks for ESS 25 planning worksheets 83 PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror 18 PPRC-Mirrored Volumes Worksheet 83 sample configuration 20 starting the cluster 41 synchronizing PPRC configuration 40 User-Specific Tasks Worksheets 84 verifying PPRC configuration 40 volume groups 23 disk subsystem 3 Direct management ESS Disk Subsystem Worksheet 86 DSCLI management DS ESS Disk Subsystem Worksheet 88 planning for DSCLI management 51 DSCLI management adding replicated resource 59 changing replicated resource configuration 63 changing site configuration 63 consistency group 73 configuring 56, 111 configuring replicated resource 57 configuring resource groups 60 consistency group configuring 69 Copy Services Server Worksheet 86 defining Copy Services Server 57 defining ESS disk subsystems 58 DS ESS Disk Subsystem Worksheet 88 installation directories 56 installing 55 installing consistency group 69 installing software requirements 55 overview 4, 43 consistency group 64 planning 44 consistency groups 65 planning Copy Services Server 50 planning disk subsystems 51 planning PPRC replicated resource 53 planning prerequisites 44 planning resource group 51 consistency group 66 planning worksheets 86 PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror 43 PPRC Replicated Resources Worksheet 88 PPRC-Mirrored Volumes Worksheet 87, 90 sample configuration 45 consistency group 66 starting the cluster 62 synchronizing PPRC configuration 62 troubleshooting consistency group configuration 76 upgrading 56 verifying configuration consistency group 75 verifying PPRC configuration 61 volume groups 48
E
ESS planning for Direct management ESS disk subsystems defining Direct managing 35 DSCLI managing 58 25
F
fallback 7 fallover 7
H
hard disk 13
I
improving Direct management volume group performance 39 installation directories DSCLI management 56 installation server 13 installing CD-ROM 14 consistency group 69 Direct management 27 prerequisites 28 software requirements 28 troubleshooting 29 DSCLI management 55 directories 56 software requirements 55 hard disk 13 installation media 12 installation server 13 PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror 11 prerequisites for PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror 11 recovering from failed installation 17 SVC management 110 troubleshoot 18
M
maintaining PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror 78 mirrored volume Direct management PPRC-Mirrored Volumes Worksheet 83 DSCLI management PPRC-Mirrored Volumes Worksheet 87, 90 SVC management planning 108 PPRC-Mirrored Volumes Worksheet 87, 90 mirroring 3 modifying previous snapshots 18
O
overview 1 consistency group 64 Direct management configuration 30 DSCLI management 43 planning 8 pprc 3 SVC management 100
P
planning consistency group resource group 66 consistency groups 65 SVC management 109 Direct management 19 connections 21 Copy Services Server 22 integration 21 PPRC replicated resource 22 PPRC resource group 27 prerequisites 19 sample configuration 20 tasks for ESS 25 volume groups 23 DSCLI management 44 Copy Services Server 50 disk subsystems 51 PPRC replicated resource 53 prerequisites 44 resource group 51 sample configuration 45 volume groups 48 overview for PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror 8 PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror 8 resource group 10 site 9 SVC management 101 clusters 107 mirrored volumes 108 prerequisites 102 sample configuration 102 support 105 volume groups 105 PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror cluster 5 Direct management 18 DSCLI management 43 overview 43 fallback 7 fallover 7 installation media 12 installing 11 installing from CD-ROM 14 installing from hard disk 13 installing from installation server 13 installing prerequisites 11 maintaining 78 management types 8 planning 8 planning overview 8
PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror (continued) PowerHA SystemMirror Site Worksheet 81 Resource Groups for PPRC Replicated Resources Worksheet 82 SVC management overview 100 troubleshooting 79 upgrading 16 upgrading software 16 verifying upgrade 17 PowerHA SystemMirror for Metro Mirror recovering from failed installation 17 PowerHA SystemMirror Site Worksheet 81 PPRC pair defining Direct management 36 PPRC paths configuring for Direct management 22 PPRC replicated resource 6 planning for Direct management 22 planning for DSCLI management 53 PPRC resource group planning for Direct management 27 PPRC task defining Direct management 37 PPRC tasks configuring for Direct management 31 prerequisites 1
R
recovering from failed installation 17 relationship SVC management changing 117 removing 117 SVC PPRC Relationships Worksheet 91 removing SVC management 116 relationships 117 resources 118 replicated resource adding DSCLI management 59 changing configuration Direct management 42 DSCLI management 63 configuring DSCLI management 57 defining Direct management 35 defining configuration SVC management 112 Direct management 6 DSCLI management PPRC Replicated Resources Worksheet 88 DSCLI-managed 6 Resource Groups for PPRC Replicated Resources Worksheet 82 SVC management SVC PPRC Replicated Resources Worksheet 92 SVC-managed 6
resource SVC management changing 117 removing 118 resource group 6 configuring Direct management 41 DSCLI management 60 planning 10 planning for consistency group 66 planning for DSCLI management 51 Resource Groups for PPRC Replicated Resources Worksheet 82
S
sample configuration consistency group 66 Direct management 20 DSCLI management 45 SVC management 102 site changing configuration consistency group 73 Direct management 42 DSCLI management 63 planning 9 PowerHA SystemMirror Site Worksheet 81 snapshots modifying previous 18 software requirements Direct management 28 DSCLI management 55 starting Direct management cluster 41 DSCLI management cluster 62 SVC management cluster 116 SVC management changing consistency groups 118 relationships 117 resources 117 cl_verify_svcpprc_config command 121 cllsrelationship command 120 cllssvc command 120 cllssvcpprc command 120 configuring cluster 111 defining replicated resource configuration 112 installing 110 overview 5, 100 planning 101 clusters 107 consistency groups 109 mirrored volumes 108 support 105 volume groups 105 planning prerequisites 102 PPRC-Mirrored Volumes Worksheet 87, 90 removing 116 relationships 117 resources 118 sample configuration 102 starting the cluster 116 SVC Cluster Worksheet 89
SVC management (continued) SVC PPRC Relationships Worksheet 91 SVC PPRC Replicated Resources Worksheet 92 synchronizing PPRC configuration 114 troubleshooting 119 verifying PPRC configuration 114 viewing clusters 120 worksheets 89 synchronizing Direct management PPRC configuration 40 DSCLI management PPRC configuration 62 SVC management PPRC configuration 114
T
troubleshooting consistency group 76 Direct management installation 29 installation 18 PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror 79 SVC management 119
U
upgrading DSCLI management 56 PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror 16 software for PowerHA SystemMirror for AIX Enterprise Edition for Metro Mirror 16 verifying 17
V
verifying consistency group configuration 75 Direct management PPRC configuration 40 DSCLI management PPRC configuration 61 SVC management PPRC configuration 114 upgrade 17 viewing SVC management clusters 120 volume group Direct management 23 DSCLI management 48 improving Direct management 39 SVC management 105
W
worksheets Direct management 83 ESS Disk Subsystem Worksheet 86 PPRC-Mirrored Volumes Worksheet 83
worksheets (continued) Direct management (continued) User-Specific Tasks Worksheets 84 DSCLI management 86 Copy Services Server Worksheet 86 DS ESS Disk Subsystem Worksheet 88 PPRC Replicated Resources Worksheet 88 PPRC-Mirrored Volumes Worksheet 87, 90 SVC management 89 PPRC-Mirrored Volumes Worksheet 87, 90 SVC Cluster Worksheet 89 SVC PPRC Relationships Worksheet 91 SVC PPRC Replicated Resources Worksheet 92